Joining Collections in MongoDB Queries using $lookup

Note: This only works in MongoDB 3.2 or later, so be sure to update if you need this functionality!

In situations where you have an ObjectId in a collection and you want it resolved by MongoDB during your query, you can accomplish this with the aggregation pipeline and the $lookup stage.

Let’s say we had two collections: insuranceClaims and insurers.

Here’s a stripped-down example of an insuranceClaims document:

{
    _id: ObjectId("5849ca6b7a5385ddcb2ea422"),
    insurer: ObjectId("584998f690b755f6a9fc3750"),
    filed: false
}

Here’s a stripped-down example of an insurers document:

{
    _id: ObjectId("584998f690b755f6a9fc3750"),
    name: 'foo'
}

If we want to have MongoDB resolve the insurer when we query the claims, we can do so with the following query:

db.getCollection('insuranceClaims')
    .aggregate(
      [
        {
          "$lookup": {
            "from": "insurers", 
            "localField": "insurer", 
            "foreignField": "_id", 
            "as": "insurer_loaded"
          }
        }
      ]
    );

This would result in the following output:

{
    _id: ObjectId("5849ca6b7a5385ddcb2ea422"),
    insurer: ObjectId("584998f690b755f6a9fc3750"),
    filed: false,
    insurer_loaded: [{
        _id: ObjectId("584998f690b755f6a9fc3750"),
        name: 'foo'
    }]
}

Now we’ve effectively had MongoDB resolve the insurers for us! Note that $lookup always writes the joined documents into an array, even when only a single document matches.
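
If you would rather have a single embedded document than a one-element array, one option is to unwind the joined field. A minimal sketch, assuming the same collections as above:

db.getCollection('insuranceClaims')
    .aggregate(
      [
        {
          "$lookup": {
            "from": "insurers",
            "localField": "insurer",
            "foreignField": "_id",
            "as": "insurer_loaded"
          }
        },
        {"$unwind": "$insurer_loaded"}
      ]
    );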

If you have insurers in an array instead, there’s another step necessary.

{
    _id: ObjectId("5849ca6b7a5385ddcb2ea422"),
    filed: false,
    insurers: [ObjectId("584998f690b755f6a9fc3750")]
}

To accomplish the same thing in this situation, we’ll use $unwind on the array first.

db.getCollection('insuranceClaims')
  .aggregate(
    [
      {"$unwind": "$insurers"},
      {
        "$lookup": {
          "from": "insurers", 
          "localField": "insurers", 
          "foreignField": "_id", 
          "as": "insurer_loaded"
        }
      }
    ]
  );

This would produce the following output:

{
    _id: ObjectId("5849ca6b7a5385ddcb2ea422"),
    insurers: ObjectId("584998f690b755f6a9fc3750"),
    filed: false,
    insurer_loaded: [{
        _id: ObjectId("584998f690b755f6a9fc3750"),
        name: 'foo'
    }]
}

Now that you’ve joined up the collections, you probably want to add in some filters to narrow the list down to exactly what you want. To add a query into the mix, simply put it into a $match stage as follows. This query will load claims where the field filed is false.

db.getCollection('insuranceClaims')
  .aggregate(
    [
      {"$match": {"filed": false}},
      {"$unwind": "$insurers"},
      {
        "$lookup": {
          "from": "insurers", 
          "localField": "insurers", 
          "foreignField": "_id", 
          "as": "insurer_loaded"
        }
      }
    ]
  );
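
Keep in mind that $unwind leaves you with one output document per insurer. If you want each claim back as a single document with an array of resolved insurers, you can regroup at the end of the pipeline. Here is a rough sketch using the same field names as above:

db.getCollection('insuranceClaims')
  .aggregate(
    [
      {"$match": {"filed": false}},
      {"$unwind": "$insurers"},
      {
        "$lookup": {
          "from": "insurers",
          "localField": "insurers",
          "foreignField": "_id",
          "as": "insurer_loaded"
        }
      },
      {"$unwind": "$insurer_loaded"},
      {
        "$group": {
          "_id": "$_id",
          "filed": {"$first": "$filed"},
          "insurers_loaded": {"$push": "$insurer_loaded"}
        }
      }
    ]
  );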

Deleting Rows From a Table Referenced in the Query

If you do a fair amount of SQL, then every now and then you’ll likely run into a situation where you need to delete something from a table but also need to reference that table in the query.

The trouble is that MySQL won’t let you do this in many circumstances. Fortunately, there is a simple workaround.

delete from tableA 
 where id in (
   select b.id 
   from tableB b 
   join tableA a 
     on a.tableb_id = b.id
   );

The above query would throw an error similar to the following:

ERROR 1093 (HY000): You can't specify target table 'tableA' for update in FROM clause

We can skirt this limitation by nesting the subselect inside another select statement. Then it will work just fine.

delete from tableA where id in (
  select bId from (
    select b.id as bId from tableB b join tableA a
    on a.tableb_id = b.id
  ) as apt
);

You’ll get output to indicate that the query is successful.

Query OK, 183 rows affected (0.53 sec)
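
The same workaround applies to UPDATE statements, which hit the identical ERROR 1093 when the target table appears in a subquery. A hypothetical sketch using the same tables (flag is a made-up column for illustration):

update tableA set flag = 1 where id in (
  select bId from (
    select b.id as bId from tableB b join tableA a
    on a.tableb_id = b.id
  ) as apt
);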

This saved me a bunch of time and kept me from having to rework my query completely, so I am sharing it with you!

Using MySQL Queries to Dump/Format Data to Delimited Files

It is often useful to use MySQL queries to export data from tables and write that data into files. MySQL offers the ability to customize many of the details of these exports, which should cover most situations.

In this example, we are selecting data from a table, applying some formatting, and writing it to a file.

SELECT code, name, REPLACE(REPLACE(REPLACE(comment,"\r\n\r\n",'||'),"\r\n",'||'),"\n",'||') 
INTO OUTFILE '/tmp/20111222_practice_comments.csv' 
FIELDS TERMINATED BY ','  
OPTIONALLY ENCLOSED BY '"' 
ESCAPED BY '\\' 
LINES TERMINATED BY '\n' 
FROM practice_table;
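
If you later need to pull the exported file back into a table, you can mirror the same delimiter options with LOAD DATA INFILE. Here is a sketch, assuming a hypothetical target table practice_table_copy with matching columns:

LOAD DATA INFILE '/tmp/20111222_practice_comments.csv'
INTO TABLE practice_table_copy
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
(code, name, comment);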


Efficiently Loading Paginated Results From MySQL with Just One Query Per Page

There are many situations in which pagination of query results is very useful, especially for performance optimization. In most of these kinds of situations, paginating the results requires you to determine the total number of results, so the application knows the number of pages available.

The most common way to do this is to use two queries: one which obtains the count of results and one which obtains a particular page of results. This method has the potential to be even less optimal than loading the entire set of results, however, because two queries are now necessary where before there was only one.

In most database systems it is possible to overcome this limitation, though the technique is specific to the particular database you are using. This example explains how to do it in MySQL.

Here is the sub-optimal example, using two queries; it is loading the first page of results. The LIMIT 0, 40 means it will start at position 0 (the beginning) and obtain a set of 40 results.

SELECT count(*) FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01';
 
SELECT id, timestamp, field1, field2, field3 FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01' ORDER BY id LIMIT 0, 40;
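
For later pages, only the offset in the second query changes; page two of the same 40-row window would be:

SELECT id, timestamp, field1, field2, field3 FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01' ORDER BY id LIMIT 40, 40;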

Here is a more optimal example, which uses two statements, only one of which is a real query. Everything is done during the first statement; the second statement merely loads the count, which was calculated during the first.

SELECT SQL_CALC_FOUND_ROWS id, timestamp, field1, field2, field3 FROM my_table WHERE timestamp >= '2010-03-15' and timestamp <= '2010-08-01' ORDER BY id LIMIT 0, 40;
 
SELECT found_rows() AS cnt;

One of the drawbacks of SQL_CALC_FOUND_ROWS, or count(*) in general, is that by running these calculations you lose some of the benefit of pagination. This is because your database is required to examine all of the affected data in order to generate an accurate count.

Depending on the specifics of your my.cnf configuration file (the query cache settings in particular), the first statement may cache part of the information, causing subsequent pages to load faster. In some of my own testing I have seen a significant speedup after the first page is loaded.

If you want to get the most out of your application, you will likely need to do a combination of application tuning, query tuning, and database tuning. Generally you will want to start by tuning the application itself; this way you’re eliminating any bottlenecks inherent in the application. Then you’ll need to do some query optimization if the application still isn’t fast enough. Finally, you’ll want to look into what configuration changes you can make on the database in order to speed things up at the source.

Bash Shell Script to Convert All MySQL Tables to Use the InnoDB Engine

The following, when run in the Bash shell, will convert the engine for all of your MySQL tables to InnoDB.

mysql -B -N -e "SHOW TABLES" -u <username> --password=<password> <databasename> \
| while read table; do
    echo "+ Converting Table $table"
    mysql -B -N -e "ALTER TABLE \`$table\` ENGINE=InnoDB" -u <username> --password=<password> <databasename>
done

Often, if you have a heavily used database, you will want to consider the InnoDB engine. In older versions of MySQL (prior to 5.5) the default engine is MyISAM, but the InnoDB engine has many advantages, particularly for high-utilization environments. With InnoDB, data integrity is maintained throughout the entire query process because it is transaction safe.

InnoDB also provides row-locking, as opposed to table-locking: while one query is busy updating or inserting a row, another query can update a different row at the same time. Hence InnoDB has superior features for multi-user concurrency and performance.
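
If you want to verify which engine each table is using, before or after running the conversion, you can query information_schema (substitute your database name):

SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = '<databasename>';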

Locating Duplicate Entries In A MySQL Table

If you don’t have unique indexes on your table, it is likely that you will occasionally have entries that are duplicated. This can often happen because of a software bug or possibly a user error. Some applications even choose not to have unique indexes for performance reasons, though this comes at the cost of data integrity.

The best way I know of to demonstrate how to locate the duplicate entries is to use an example. So let’s say you have the following table:

mysql> desc result;
+-------------------------+--------------+------+-----+---------+----------------+
| Field                   | Type         | Null | Key | Default | Extra          |
+-------------------------+--------------+------+-----+---------+----------------+
| id                      | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| comment                 | varchar(255) | YES  |     | NULL    |                |
| key1_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| str1                    | varchar(255) | YES  | MUL | NULL    |                |
| result                  | float        | YES  |     | NULL    |                |
+-------------------------+--------------+------+-----+---------+----------------+

In this example, let’s say you want the following set of fields to have a unique combination of values. In other words, you want these fields to act as a sort of composite key, so that no two records in the database have the same values for all three fields.

| key1_id             | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id             | bigint(20)   | YES  | MUL | NULL    |                |
| str1                | varchar(255) | YES  | MUL | NULL    |                |

Now let’s say that you are concerned about having duplicate values in your data. You need an effective way of querying the database in order to find the duplicate records. The following query will locate duplicates of the above field set:

mysql> select id, str1, key1_id, key2_id, count(*)
from result group by str1, key1_id, key2_id
having count(*) > 1;

In the example, you group by the list of fields that you want to use as your composite key and select the fields you want to see in your output. (Note that id is not in the GROUP BY list, so MySQL returns an arbitrary id from each group; with the ONLY_FULL_GROUP_BY SQL mode enabled you would need to wrap it in ANY_VALUE().) The output of the above query looked like the following:

+-------+------+---------+---------+----------+
| id    | str1 | key1_id | key2_id | count(*) |
+-------+------+---------+---------+----------+
| 17293 | EDDP |    1951 |    1923 |        4 |
| 17302 | GRAV |    1951 |    1923 |        4 |
| 17294 | MTDN |    1951 |    1923 |        4 |
| 17301 | OXID |    1951 |    1923 |        4 |
| 17303 | PH   |    1951 |    1923 |        4 |
| 17288 | XTC  |    1951 |    1923 |        4 |
+-------+------+---------+---------+----------+
6 rows in set (0.52 sec)

What we have in this example is four copies of each of these records, which are supposed to be unique.
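
The query above shows one representative row per duplicated combination. If you want to see every offending row, one approach is to join the table back against the grouped result (note that rows with NULL in any of these fields will not match the join):

select r.*
from result r
join (
  select str1, key1_id, key2_id
  from result
  group by str1, key1_id, key2_id
  having count(*) > 1
) dup
  on r.str1 = dup.str1
 and r.key1_id = dup.key1_id
 and r.key2_id = dup.key2_id;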

Eliminating Duplicate Records From MySQL Tables

Anyone who works with database-driven development to any extent will occasionally run into a situation where duplicate information is added to their database. I have personally run into such a problem on many occasions since I started working with database-driven software.

Being able to quickly undo data corruption is extremely important in production databases. As much as we like to have a perfect production environment, it is often very difficult to dedicate the time and resources necessary to avoid all mistakes.

The following example is a great way to remove duplicates based upon a set of fields. Basically, you create a unique index on a list of fields (a composite key) of your specification.

Let’s say you have the following table:

mysql> desc result;
+-------------------------+--------------+------+-----+---------+----------------+
| Field                   | Type         | Null | Key | Default | Extra          |
+-------------------------+--------------+------+-----+---------+----------------+
| id                      | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| comment                 | varchar(255) | YES  |     | NULL    |                |
| key1_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id                 | bigint(20)   | YES  | MUL | NULL    |                |
| str1                    | varchar(255) | YES  | MUL | NULL    |                |
| result                  | float        | YES  |     | NULL    |                |
+-------------------------+--------------+------+-----+---------+----------------+

In this example, the fields which you do not want duplicated are as follows:

| key1_id              | bigint(20)   | YES  | MUL | NULL    |                |
| key2_id              | bigint(20)   | YES  | MUL | NULL    |                |
| str1                 | varchar(255) | YES  | MUL | NULL    |                |

To remove duplicates on those three fields, so that there is only one record in the database with the same values for each of those three fields, you would do the following:

alter ignore table result add unique index unique_result (key1_id,key2_id,str1);

I ran this on one of my databases and got the following:

Query OK, 12687876 rows affected (21 min 13.74 sec)
Records: 12687876  Duplicates: 1688  Warnings: 0

In this example, we removed 1688 duplicate records based upon the three specified fields.
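
One caveat: ALTER IGNORE TABLE was removed in MySQL 5.7, so the statement above only works on MySQL 5.6 and earlier. On newer servers, a rough equivalent is a self-join delete that keeps the row with the lowest id (again, rows with NULL in any of the key fields are not matched by this join), after which you can add the unique index normally:

delete r1
from result r1
join result r2
  on r1.key1_id = r2.key1_id
 and r1.key2_id = r2.key2_id
 and r1.str1 = r2.str1
 and r1.id > r2.id;

alter table result add unique index unique_result (key1_id, key2_id, str1);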