Efficiently Loading Paginated Results From MySQL with Just One Query Per Page

There are many situations in which pagination of query results is very useful, especially for performance optimization. In most of these situations, paginating the results requires you to determine the total number of results, so the application knows how many pages are available.
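For instance, with a page size of 40, the application derives the page count from the total with simple ceiling division. A quick sanity check directly in MySQL (the numbers here are purely illustrative):

SELECT CEIL(1000 / 40) AS pages;  -- 1000 total rows at 40 per page = 25 pages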

The most common way to do this is to use two queries: one which obtains the count of results and one which obtains a particular page of results. This method has the potential to be even less optimal than loading the entire set of results, however, because two queries are now necessary, whereas before there was only one.

In most database systems it is possible to overcome this limitation, though the technique is specific to the particular database you are using. This example explains how to do it in MySQL.

Here is the sub-optimal example, using two queries; it loads the first page of results. The LIMIT 0, 40 means it will start at position 0 (the beginning) and return a set of 40 results.

SELECT count(*) FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01';
 
SELECT id, timestamp, field1, field2, field3 FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01' ORDER BY id LIMIT 0, 40;

Here is a more optimal example, which uses two statements, only one of which is a real query. All of the work is done during the first statement; the second statement merely retrieves the count, which was calculated during the first.

SELECT SQL_CALC_FOUND_ROWS id, timestamp, field1, field2, field3 FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01' ORDER BY id LIMIT 0, 40;
 
SELECT found_rows() AS cnt;
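Loading a subsequent page only changes the LIMIT offset. For example, the second page of 40 results would be:

SELECT SQL_CALC_FOUND_ROWS id, timestamp, field1, field2, field3 FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01' ORDER BY id LIMIT 40, 40;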

One of the drawbacks of SQL_CALC_FOUND_ROWS, or count(*) in general, is that by running these calculations you lose some of the benefit of pagination. This is because your database is required to examine all of the affected data in order to generate an accurate count.

Depending on the specifics of your my.cnf configuration file, the first statement will cache part of the information, causing it to execute faster when subsequent pages are loaded. In some of my own testing I have seen a significant speedup after the first page is loaded.
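If you want to experiment with that caching behavior, the relevant my.cnf settings are the query cache options (present in MySQL versions that still ship the query cache; the sizes below are only illustrative starting points):

[mysqld]
query_cache_type  = 1     # cache eligible SELECT results
query_cache_size  = 64M   # total memory reserved for the cache
query_cache_limit = 2M    # largest single result the cache will store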

If you want to get the most out of your application, you will likely need to do a combination of application tuning, query tuning and database tuning. Generally you will want to start by tuning the application itself; this way you’re eliminating any bottlenecks inherent in the application. Then you’ll need to do some query optimizations if the application still isn’t fast enough. Finally, you’ll want to look into what configuration changes you can make on the database in order to speed things up at the source.

Bash Shell Script to Convert All MySQL Tables to use the Innodb Engine

The following, when run in the Bash shell, will convert the engine for all of your MySQL tables to InnoDB.

# List every table in the database, then convert each one to InnoDB.
mysql -B -N -e "SHOW TABLES" -u <username> --password=<password> <databasename> | while read table; do
    echo "+ Converting Table $table"
    mysql -B -N -e "ALTER TABLE \`$table\` ENGINE=InnoDB" -u <username> --password=<password> <databasename>
done

Often, if you have a heavily used database, you will want to consider the InnoDB engine. By default, the MySQL database engine is MyISAM. However, the InnoDB engine has many advantages, particularly for high-utilization environments. With InnoDB, data integrity is maintained throughout the entire query process because it is transaction safe.

InnoDB also provides row-level locking, as opposed to table-level locking. With row-level locking, while one query is busy updating or inserting a row, another query can update a different row at the same time. Hence InnoDB has superior characteristics for multi-user concurrency and performance.
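Before and after running the script above, you can verify which engine each table uses with a query against information_schema; substitute your own database name:

SELECT table_name, engine FROM information_schema.tables WHERE table_schema = '<databasename>';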

Locating Duplicate Entries In A MySQL Table

If you don’t have unique indexes on your table, it is likely that you will occasionally have entries that are duplicated. This can often happen because of a software bug or possibly a user error. Some applications even choose not to have unique indexes for performance reasons, though this comes at the cost of data integrity.

The best way I know of to demonstrate how to locate duplicate entries is to use an example. So let’s say you have the following table:

mysql> desc result;
+-------------------------+--------------+------+-----+---------+----------------+
| Field                   | Type         | Null | Key | Default | Extra          |
+-------------------------+--------------+------+-----+---------+----------------+
| id                      | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| comment                 | varchar(255) | YES  |     | NULL    |                |
| key1_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| str1                    | varchar(255) | YES  | MUL | NULL    |                |
| result                  | float        | YES  |     | NULL    |                |
+-------------------------+--------------+------+-----+---------+----------------+

In this example, let’s say you want the following set of fields to have a unique combination of values. In other words, you want these fields to act as a sort of composite key, so that no two records in the database have the same values for all three of these fields.

| key1_id             | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id             | bigint(20)   | YES  | MUL | NULL    |                |
| str1                | varchar(255) | YES  | MUL | NULL    |                |

Well, let’s say that you are concerned about having duplicate values in your data. You need an effective way of querying the database in order to find the duplicate records. The following query will locate duplicates of the above field set.

mysql> select id, str1, key1_id, key2_id, count(*)
from result group by str1, key1_id, key2_id
having count(*) > 1;

In the example you group by the list of fields that you want to use as your composite key and select the fields you want to see in your output. Note that because id is not part of the GROUP BY, MySQL returns an arbitrary representative id from each group of duplicates. For my data, the output looked like the following:

+-------+----------+--------------+-------------+----------+
| id    | str1     | key1_id      | key2_id     | count(*) |
+-------+----------+--------------+-------------+----------+
| 17293 | EDDP     |         1951 |        1923 |        4 |
| 17302 | GRAV     |         1951 |        1923 |        4 |
| 17294 | MTDN     |         1951 |        1923 |        4 |
| 17301 | OXID     |         1951 |        1923 |        4 |
| 17303 | PH       |         1951 |        1923 |        4 |
| 17288 | XTC      |         1951 |        1923 |        4 |
+-------+----------+--------------+-------------+----------+
6 rows in set (0.52 sec)

What we have in this example is four copies of each of these records, which are supposed to be unique.
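If you would rather see every id involved in each duplicate group, instead of a single arbitrary representative, a small variation using MySQL’s GROUP_CONCAT() function works well (shown here against the same example table):

mysql> select str1, key1_id, key2_id, count(*), group_concat(id) as ids
from result group by str1, key1_id, key2_id
having count(*) > 1;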

Eliminating Duplicate Records From MySQL Tables

Anyone who works with database-driven development to any extent will occasionally run into a situation where duplicate information is added to their database. I have personally run into such a problem on many occasions since I started working with database-driven software.

Being able to quickly undo data corruption is extremely important in production databases. As much as we would like to have a perfect production environment, it is often very difficult to dedicate the time and resources necessary to avoid all mistakes.

The following example is a great way to remove duplicates based upon a set of fields. Basically, you create a unique index on a list of fields (a composite key) of your specification.

Let’s say you have the following table:

mysql> desc result;
+-------------------------+--------------+------+-----+---------+----------------+
| Field                   | Type         | Null | Key | Default | Extra          |
+-------------------------+--------------+------+-----+---------+----------------+
| id                      | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| comment                 | varchar(255) | YES  |     | NULL    |                |
| key1_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| str1                    | varchar(255) | YES  | MUL | NULL    |                |
| result                  | float        | YES  |     | NULL    |                |
+-------------------------+--------------+------+-----+---------+----------------+

In this example, the fields which you do not want duplicated are as follows:

| key1_id              | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id              | bigint(20)   | YES  | MUL | NULL    |                |
| str1                 | varchar(255) | YES  | MUL | NULL    |                |

To remove duplicates on those three fields, so that there is only one record in the database with any given combination of values for those fields, you would do the following:

alter ignore table result add unique index unique_result (key1_id,key2_id,str1);

I ran this on one of my databases and got the following:

Query OK, 12687876 rows affected (21 min 13.74 sec)
Records: 12687876  Duplicates: 1688  Warnings: 0

In this example, we removed 1688 duplicate records based upon the three specified fields.
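Note that ALTER IGNORE TABLE was removed in MySQL 5.7, so on newer servers you need another approach. One option is to delete the duplicates with a self-join, keeping the row with the lowest id in each group, and then add the unique index normally. This is only a sketch against the example result table; in particular, rows with NULL in any of the three columns will not match the join conditions, since NULL = NULL is not true in SQL:

DELETE r1 FROM result r1
JOIN result r2
  ON r1.key1_id = r2.key1_id
 AND r1.key2_id = r2.key2_id
 AND r1.str1 = r2.str1
 AND r1.id > r2.id;

ALTER TABLE result ADD UNIQUE INDEX unique_result (key1_id, key2_id, str1);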