MySQL ships with default system databases, such as performance_schema and sys, which are useful for performance tuning of a MySQL server.
Enable slow query logs
SET GLOBAL slow_query_log=1;
SET GLOBAL log_output='FILE';
SET GLOBAL slow_query_log_file='slowqueries.log';
SET GLOBAL long_query_time=5.0;
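You can confirm that the settings took effect by inspecting the server variables:
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';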
Global variable name | Possible values | Description of possible values |
---|---|---|
slow_query_log | 0, 1 | 0: Slow query logs are disabled. 1: Slow query logs are enabled. When slow query logs are enabled, MySQL checks the value of the log_output variable to decide where to write the logs. |
long_query_time | Time in seconds; the default value is 10 seconds. | To prevent fast-running queries from being logged in the slow query log, specify the shortest query run time to be logged, in seconds. |
log_output | FILE, TABLE | TABLE: Writes slow query logs to the slow_log table and the general log to the general_log table, both in the mysql database. FILE: Writes the logs to the location specified by the slow_query_log_file variable. |
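With log_output='TABLE', the slow query log can be queried like any other table. For example, a minimal sketch that lists the ten slowest logged queries, using the columns of the mysql.slow_log table:
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;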
Enable query logs that don't use indexes
SET GLOBAL log_queries_not_using_indexes=ON;
Global variable name | Possible values | Description |
---|---|---|
log_queries_not_using_indexes | 0, 1 | 0: Logging is disabled for queries that do not use indexes. 1: Logging is enabled for queries that do not use indexes; those entries are stored along with the slow query logs. |
SET GLOBAL min_examined_row_limit=100000;
Global variable name | Possible values | Description |
---|---|---|
min_examined_row_limit | Integer number | Queries that examine fewer than this number of rows are not logged to the slow query log, even if they otherwise qualify (for example, by exceeding long_query_time or by not using indexes). |
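Note that SET GLOBAL changes are lost when the server restarts. To persist them, the equivalent options can be placed in the MySQL configuration file; a sketch using the values from this article (a relative slow_query_log_file path is interpreted relative to the data directory):
[mysqld]
slow_query_log = 1
log_output = FILE
slow_query_log_file = slowqueries.log
long_query_time = 5.0
log_queries_not_using_indexes = 1
min_examined_row_limit = 100000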
Slow query logs contain the following fields, which help classify each logged query:
Query classification | Identification field |
---|---|
Full table scan | sql_text without a WHERE and LIMIT clause |
Slow query | query_time |
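For reference, here is roughly what a file-based slow query log entry looks like (the timestamps and numbers below are illustrative):
# Time: 2024-01-15T10:20:30.123456Z
# User@Host: app[app] @ localhost []  Id:    42
# Query_time: 12.345678  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 1000000
SET timestamp=1705314030;
SELECT uuid FROM users WHERE uuid IS NOT NULL;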
For example, let’s create a users table using the following query:
CREATE TABLE users (
id int NOT NULL AUTO_INCREMENT,
uuid int,
LastName varchar(255),
FirstName varchar(255),
Tenant varchar(255) NOT NULL,
PRIMARY KEY (id)
);
In the users table, we have an integer id as the primary key and another unique identifier in the uuid field.
Missing indices
SELECT uuid FROM users WHERE uuid IS NOT NULL;
This is the case where we can create an index on the uuid column:
CREATE INDEX indexname ON users (uuid);
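To verify that MySQL actually uses the new index, run EXPLAIN on the query; the output should list indexname under possible_keys (and typically under key):
EXPLAIN SELECT uuid FROM users WHERE uuid IS NOT NULL;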
Unoptimized indices - use composite indices
SELECT uuid FROM users WHERE uuid IS NULL AND tenant = "Tenant";
This is the case where we can use multiple-column indexes.
MySQL can use multiple-column indexes for queries that test all the columns in the index, or queries that test just the first column, the first two columns, and so on. If you specify the columns in the right order in the index definition, a single composite index can speed up several kinds of queries on the same table.
CREATE INDEX multicolumnindexname ON users (uuid, tenant);
The multicolumnindexname index is an index over the uuid and tenant columns. The index can be used for lookups in queries that specify values in a known range for combinations of uuid and tenant values. It can also be used for queries that specify just a uuid value because that column is a leftmost prefix of the index. Therefore, the index is used for lookups in the following queries:
SELECT * FROM users WHERE uuid IS NULL AND tenant = "tenant";
SELECT * FROM users WHERE uuid = 123;
SELECT * FROM users
WHERE uuid IS NOT NULL
AND (tenant='Tenant 1' OR tenant='Tenant 2');
But it cannot be used for the following queries:
SELECT * FROM users WHERE tenant = "tenant";
SELECT * FROM users
WHERE uuid = 123
OR tenant = "tenant";
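EXPLAIN makes the difference visible: for queries covered by the leftmost prefix, the key column typically shows multicolumnindexname, while for the queries above it is usually NULL, indicating a full table scan:
EXPLAIN SELECT * FROM users WHERE uuid = 123;
EXPLAIN SELECT * FROM users WHERE tenant = "tenant";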
As an alternative to a composite index, you can introduce a column that is “hashed” based on information from other columns. If this column is short, reasonably unique, and indexed, it might be faster than a “wide” index on many columns. In MySQL, it is very easy to use this extra column:
SELECT * FROM tbl_name
WHERE hash_col=MD5(CONCAT(val1,val2))
AND col1=val1 AND col2=val2;
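The snippet above assumes hash_col is already populated. One way to maintain it automatically is a stored generated column (a sketch for MySQL 5.7+; idx_hash_col is an illustrative index name):
ALTER TABLE tbl_name
-- MD5 yields a 32-character hex string, so CHAR(32) fits it exactly
ADD COLUMN hash_col CHAR(32)
GENERATED ALWAYS AS (MD5(CONCAT(col1, col2))) STORED,
ADD INDEX idx_hash_col (hash_col);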
Full table scans
SELECT * FROM users;
When this type of query is executed, it examines every row of the table and returns a large amount of data when the total number of rows in the table is high.
To avoid examining the full table, we should execute such queries with a LIMIT clause:
SELECT * FROM users LIMIT 500;
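If the application needs to page through all rows, note that large OFFSET values still examine the skipped rows; keyset pagination on the primary key is a common alternative (a sketch, assuming you page by id and remember the last id seen):
SELECT * FROM users WHERE id > 500 ORDER BY id LIMIT 500;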
If you are using Amazon RDS as your database server, you can use Amazon CloudWatch to view the slow query logs.
Sending database logs to Amazon CloudWatch Logs
To create a dashboard for the slow query logs, we can search the logs using the following CloudWatch Logs Insights query:
parse @message "Query_time: * Lock_time: * Rows_sent: * Rows_examined: *\n*" as Query_time,Lock_time,Rows_sent,Rows_examined,query
| filter Query_time > 0.1
| sort Query_time desc
Demo dashboard created using the above query
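The same query can also be run outside the console with the AWS CLI (a sketch: the log group name follows the RDS convention /aws/rds/instance/<instance-id>/slowquery, and mydb below is a placeholder for your instance identifier):
aws logs start-query \
  --log-group-name /aws/rds/instance/mydb/slowquery \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'parse @message "Query_time: * Lock_time: * Rows_sent: * Rows_examined: *\n*" as Query_time,Lock_time,Rows_sent,Rows_examined,query | filter Query_time > 0.1 | sort Query_time desc'
aws logs get-query-results --query-id <query-id-returned-by-start-query>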