When it comes to deciding which data goes to which client, Meteor does some pretty cool optimisations to be fast and effective. Yet I’ve found that in certain special cases - especially if you’re deploying to RAM-tight environments such as Heroku or DigitalOcean - it can be surprisingly easy to kill your server if you’re not a bit careful.

Use case: subscribing to a list of changes

Imagine you’re implementing an offline-enabled app and have a collection that contains the list of changes since the client last connected. When the client connects, it subscribes to that collection and the server starts inserting documents, which the client drains: every time it receives a document, it applies it and removes it from local minimongo, so the removal is propagated back to the database on the server.
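A rough sketch of that client-side drain, assuming a shared changes collection, a publication named 'changes', a hypothetical applyChange helper, and allow rules (or an equivalent method) that permit the client-side remove:

// Client: subscribe to the pending changes and drain them as they arrive.
// `applyChange` is a hypothetical helper that applies a change to local app state.
const changes = new Mongo.Collection('changes');

Meteor.subscribe('changes');

changes.find().observe({
  added(doc) {
    applyChange(doc);
    // Removing the doc from minimongo propagates the removal to the server,
    // assuming the server permits it (allow rules or a method call).
    changes.remove(doc._id);
  }
});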

Now, say you have a couple hundred documents in your changes collection (for example because the client disconnected before it managed to pull all the changes, so there are some leftovers from the last sync) and you subscribe using something like this:

return changes.find({userId: '<myUserId>'});
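For context, that cursor is what the publication returns; a minimal sketch, assuming a publication named 'changes' and a server-side changes collection handle:

// Server: publish every pending change for the logged-in user -- no limit,
// so everything matching the query is sent down the wire.
Meteor.publish('changes', function () {
  return changes.find({userId: this.userId});
});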

Now, I've found that when you start generating the documents and pushing them into the changes collection, Meteor's CPU and memory usage just skyrockets. I've been monitoring the server with Kadira, and memory usage grows from 150 MB to 1,500 MB within a couple of minutes, taking down the Heroku server - which usually runs with 0.5 or 1 GB of RAM.

Here's a snapshot of my console (memwatch logs):

What happens? Well, it's hard to say for sure, as Meteor's code dealing with data synchronisation is quite complex, but my theory is this: if you are subscribed to a collection and your cursor doesn’t have a limit (meaning everything matching the query is to be sent down the wire), then every time you add a new document to the collection, Meteor stores a snapshot of the cursor's contents. Add another document and it snapshots the new set. And again. And again. With each document added, another chunk of RAM is used. Mind you - this is not a memory leak per se: the memory is eventually freed; there’s just a good chance your server will die before that happens.
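If you want to reproduce this yourself, a quick-and-dirty sketch is to bulk-insert documents into the published collection while a client is subscribed without a limit, and log the heap as you go (the method name, batch size and logging interval here are arbitrary):

// Server: insert a batch of changes and log heap usage along the way.
Meteor.methods({
  generateChanges(userId, count) {
    for (let i = 0; i < count; i++) {
      changes.insert({userId: userId, payload: 'change #' + i, createdAt: new Date()});
      if (i % 100 === 0) {
        const heapMB = Math.round(process.memoryUsage().heapUsed / (1024 * 1024));
        console.log('inserted ' + i + ' docs, heap used: ' + heapMB + ' MB');
      }
    }
  }
});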

How to prevent this? Super easy:

return changes.find({userId: '<myUserId>'}, {limit: 10});

That way, Meteor will only snapshot the first 10 documents in the collection, and as the client drains the data, the cursor “window” will move along. Memory impact: negligible.
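Put together, the limited publication might look something like this; the explicit sort is my own addition so the 10-document window is deterministic (pick whatever ordering your sync logic expects):

// Server: only keep a small window of pending changes in the publication.
Meteor.publish('changes', function () {
  return changes.find(
    {userId: this.userId},
    {limit: 10, sort: {createdAt: 1}}
  );
});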

Moral of the tale

Simple enough: always remember to limit your published cursors. Or, even better, use some sort of pagination that handles that for you. And use a monitoring tool - such as Kadira - to see when the figurative shit hits the fan (pardon my French).
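For the pagination bit, a hedged sketch could take the window size from the client and cap it on the server (the publication name and the ceiling of 100 are made up):

// Server: paginated publication -- the client asks for a window size,
// the server caps it so nobody can request the whole collection.
Meteor.publish('changes.paged', function (limit) {
  check(limit, Number);
  return changes.find(
    {userId: this.userId},
    {limit: Math.min(limit, 100), sort: {createdAt: 1}}
  );
});

// Client:
Meteor.subscribe('changes.paged', 20);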

Happy meteoring! I’m @tomas_brambora.