Does Meteor Scale?

Most people considering Meteor for a production deployment are wondering if it can scale. Some say Meteor is a good framework for prototyping, but not for production. After you read this article, you’ll be able to decide for yourself.

Scaling Myth

With the emergence of cloud computing (and specifically Heroku), many have come under the impression that scaling means adding more instances of an application. That’s not completely true.

Among other things, routing, sessions, job processing, static file serving, and database optimization are key concerns when it comes to scaling. Any Meteor app that intends to deploy into production needs to address, or at least consider, these issues. On top of that, Meteor has issues of its own.

Issues Unique to Meteor

Meteor isn’t like Ruby on Rails, Express (the Node.js framework), or PHP. It’s a special framework with its own issues, many of which have been solved outside of the Meteor community already. We just need to figure out how to fit them together and apply them correctly to Meteor. So, let’s have a look at these issues first.

Use of SockJS

Meteor uses SockJS as the transport layer for communicating between client and server. SockJS needs sticky sessions, which means that all the requests for a particular client must be served from a single server for a specific amount of time.

Hot Code Reloading

Meteor apps are single-page apps, which are long-lived in the browser. Single-page apps can sometimes experience conflicts between the server and client due to a version mismatch, especially when new code is pushed often, which is not so rare these days. Fortunately, Meteor features hot code reload, which identifies any code changes on the server and asks the browser to reload. Additionally, Meteor is smart enough to preserve session information through the reload.

Due to the default hot code reload logic, a Meteor client app needs to connect (via DDP) to the same server it was loaded from.

Polling for Changes in MongoDB

Meteor is all real-time, which it currently achieves (by default) by fetching and comparing documents after every database write operation. Meteor also polls the database for changes every 10 seconds. These are the main bottlenecks when scaling Meteor, and they introduce two main issues:

  1. The polling and comparing logic takes a lot of CPU power and network I/O.
  2. After a write operation, there is no way to propagate changes to other Meteor instances in real time. Changes will only be noticed the next time Meteor polls, up to ~10 seconds later.
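The poll-and-diff cycle behind these issues can be sketched in plain Node.js. This is an illustrative simulation, not Meteor's actual implementation; the function and variable names are made up:

```javascript
// Simulate Meteor-style poll-and-diff: re-fetch the query result and
// compare it against the previous snapshot to compute added/changed/removed.
function diffSnapshots(prevDocs, nextDocs) {
  const prev = new Map(prevDocs.map((d) => [d._id, d]));
  const next = new Map(nextDocs.map((d) => [d._id, d]));
  const changes = { added: [], changed: [], removed: [] };

  for (const [id, doc] of next) {
    if (!prev.has(id)) changes.added.push(doc);
    else if (JSON.stringify(prev.get(id)) !== JSON.stringify(doc))
      changes.changed.push(doc);
  }
  for (const id of prev.keys()) {
    if (!next.has(id)) changes.removed.push(id);
  }
  return changes;
}

// Every observed query re-runs a diff like this after each write (and on
// the ~10-second timer), which is why CPU and network I/O costs add up.
const before = [{ _id: 'a', score: 1 }, { _id: 'b', score: 2 }];
const current = [{ _id: 'a', score: 5 }, { _id: 'c', score: 3 }];
console.log(diffSnapshots(before, current));
```

The cost grows with the number of documents fetched and the number of observed queries, which is exactly what oplog tailing (below in this article) avoids.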

See how Meteor identifies database changes with polling: Real-time Changes with MongoDB Polling

How We Can Solve Meteor’s Scaling Issues

None of these issues is unsolvable. Continue reading to learn how to address them, along with some other common scaling concerns. The next few articles will discuss the full implementation of these solutions.

Sticky Session Enabled Load Balancing

When serving requests to Meteor instances, a load balancer needs to be able to handle the issues presented by SockJS and hot code reload. These issues are not hard to solve, as long as the load balancer can be configured to use sticky sessions. The sticky sessions also need to apply to static content, at least for the first HTML page.
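As a sketch, here is what sticky-session load balancing could look like in Nginx. The ports and server name are placeholders; `ip_hash` is Nginx's built-in way to pin a client to one backend (cookie-based stickiness is another option, depending on the load balancer):

```nginx
# Pin each client to one Meteor instance so SockJS and
# hot code reload keep talking to the same backend.
upstream meteor_app {
  ip_hash;                  # route by client IP -> same instance every time
  server 127.0.0.1:3001;
  server 127.0.0.1:3002;
}

server {
  listen 80;
  server_name example.com;

  location / {
    proxy_pass http://meteor_app;
    proxy_http_version 1.1;
    # WebSocket upgrade headers needed by SockJS
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
  }
}
```

Because the `location /` block covers static content too, the first HTML page is served by the same pinned instance, which is what hot code reload requires.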

Meteor with MongoDB oplog

As mentioned above, Meteor's scaling bottleneck lies in the way it polls the database and runs a diffing algorithm to detect changes and patch them into the rest of the app. We can get much more performance out of Meteor by using the MongoDB oplog instead. An oplog-based solution works with multiple Meteor instances without much effort. You can even write directly to the database from outside Meteor, and Meteor will still notice the changes.

Oplog integration removes the bottleneck of MongoDB polling.

MongoDB’s oplog is a log that records every database write operation. It’s so reliable that it’s used for MongoDB replication (Replica Sets). Also, the oplog is a capped collection, which can be configured to have a fixed size and can be tailed. Oplog tailing can be used to capture database changes in real-time.

See the following illustration of how Meteor gets database changes with the oplog: Real-time Changes with oplog
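To make the idea concrete, here is a small Node.js simulation of the apply step: each oplog-style entry (`i` = insert, `u` = update, `d` = delete, mirroring MongoDB's op codes) is applied to an in-memory copy, so every tailing instance sees writes without re-polling. This is a deliberately simplified sketch; the real oplog lives in `local.oplog.rs`, is read with a tailable cursor, and stores update modifiers rather than merged documents:

```javascript
// Apply a stream of oplog-style entries to an in-memory collection.
// Op codes mirror MongoDB's oplog: 'i' insert, 'u' update, 'd' delete.
function applyOplogEntry(collection, entry) {
  switch (entry.op) {
    case 'i':
      collection.set(entry.o._id, entry.o);
      break;
    case 'u': // simplified: merge fields instead of applying a modifier
      collection.set(entry.o._id, { ...collection.get(entry.o._id), ...entry.o });
      break;
    case 'd':
      collection.delete(entry.o._id);
      break;
  }
}

// Two Meteor instances tailing the same oplog converge on the same state,
// even though neither performed the writes itself.
const instanceA = new Map();
const instanceB = new Map();
const oplog = [
  { op: 'i', o: { _id: 'p1', title: 'Hello' } },
  { op: 'u', o: { _id: 'p1', title: 'Hello World' } },
  { op: 'i', o: { _id: 'p2', title: 'Scaling' } },
  { op: 'd', o: { _id: 'p2' } },
];
for (const entry of oplog) {
  applyOplogEntry(instanceA, entry);
  applyOplogEntry(instanceB, entry);
  // both maps end up holding only p1, with title 'Hello World'
}
```

The key property is that changes arrive as a push from the database, so the 10-second polling delay and the repeated fetch-and-compare work both disappear.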

Smart Caching

It’s a good idea to put a caching server in front of Meteor. This will reduce the load on Meteor to serve static content. Node.js (where Meteor runs) does not work well when it comes to static file serving, so using a caching server improves performance.

The caching server shouldn’t cache the first HTML page loaded from Meteor into the browser. This is the only HTML content Meteor loads, and it contains a JavaScript variable called serverId that is used to compare versions in hot code reload logic.
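As an illustration, an Nginx caching layer could cache Meteor's static assets while bypassing the cache for the initial HTML document. The paths, port, and zone name here are placeholders, not something Meteor prescribes:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=meteor_static:10m;

server {
  listen 80;

  # Cache long-lived static assets served by Meteor.
  location ~* \.(js|css|png|jpg|woff2?)$ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache meteor_static;
    proxy_cache_valid 200 30m;
  }

  # Never cache the first HTML page: it carries the serverId used
  # by hot code reload to detect version mismatches.
  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache off;
  }
}
```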

Improved Application & MongoDB Performance

Fixing the most common performance issues helps a lot in the scaling process. The first thing to optimize is database indexes. Meteor doesn't auto-magically add indexes; they need to be added explicitly. Indexes can provide a large performance gain.
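For example, indexes can be declared in a server-side startup block. The collection and field names below are hypothetical; the idea is to index the fields your publications query and sort on (`_ensureIndex` passes the index spec straight through to MongoDB):

```javascript
// server/indexes.js -- runs once when the Meteor server starts
Meteor.startup(function () {
  // Index the fields your publications filter and sort on.
  Posts._ensureIndex({ ownerId: 1, createdAt: -1 });
  Comments._ensureIndex({ postId: 1 });
});
```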

Subscriptions and queries can also be optimized, with notable performance gains.

Scaling MongoDB

MongoDB is a core component of Meteor, so it needs to be prioritized where scaling and performance are concerned. Generally, MongoDB performs quite well, and it supports both vertical and horizontal scaling: you can run it on a more powerful server, or use MongoDB sharding to scale horizontally.

Although MongoDB comes with good sharding tools, care needs to be taken when using sharding.
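For instance, sharding is enabled from the `mongos` shell. The database, collection, and shard key below are placeholders; choosing a good shard key is precisely the part that requires care, since a poor key creates hot spots:

```javascript
// Run against a mongos router.
sh.enableSharding("myapp");
// The shard key drives data distribution across shards.
sh.shardCollection("myapp.posts", { ownerId: "hashed" });
```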

Use of a CDN

If your application is heavy on static content like images and video, you must not host these files using Meteor. Nowadays, using a CDN (Content Delivery Network) is standard practice, and it’s not very hard.

Okay, Does Meteor Scale?

Now you can decide if Meteor scales or not! In the next few articles, you’ll learn how to use commonly available tools and services to scale Meteor.

Read Next: How to Scale Meteor

Edited by Jon James (Head of Technology at Writebot)