In the previous blog post, we introduced Cluster: a load balancer that sits in the application layer, so it doesn't need any additional hardware or software. With Cluster, you can load-balance a Meteor app just by installing a package.
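For context, setting up Cluster looks roughly like this (the package name and environment variable names here are from the Cluster README as we remember it; double-check them against the repo before using):

```shell
# Add Cluster to the Meteor app
meteor add meteorhacks:cluster

# Every instance points at a shared discovery backend (a MongoDB URL);
# Cluster then discovers its peers and balances traffic between them.
export CLUSTER_DISCOVERY_URL=mongodb://mongo-host/cluster
export CLUSTER_SERVICE=web
```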
Some people were worried about Cluster's performance, and we expected that. So we ran a load test recently, and here are the results.
Since this is a different kind of load balancer, it's hard to compare it with other load balancers. Instead, we decided to benchmark it against itself by monitoring the number of requests processed by each server. This is how we did it.
While running the load test, we tracked and recorded metrics via Kadira.
Our app is a pretty simple Meteor app. It serves a single publication. When a client subscribes, the server sends back a set of documents, around 200 kB in size.
The load testing suite is also pretty simple. The client connects to the server and makes a subscription. Once the client receives the data, it disconnects and connects again. The load testing suite is written with MeteorDown, which manages invoking many concurrent clients.
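The per-client loop can be sketched like this. This is a simplified, self-contained sketch with a stand-in connection object, not the actual MeteorDown API:

```javascript
// Sketch of one simulated client's lifecycle: connect, subscribe,
// wait for the data, disconnect, repeat. Helper names are hypothetical.
function runClient(connect, rounds) {
  var completed = 0;
  for (var i = 0; i < rounds; i++) {
    var conn = connect();        // open a DDP connection
    conn.subscribe('posts');     // take out the single subscription
    conn.waitForData();          // wait until the ~200 kB of documents arrive
    conn.disconnect();           // drop the connection and start over
    completed++;
  }
  return completed;
}

// Stand-in connection factory so the sketch runs on its own.
function fakeConnect() {
  return {
    subscribe: function (name) { this.sub = name; },
    waitForData: function () { /* data would arrive here in the real test */ },
    disconnect: function () { this.sub = null; }
  };
}

console.log(runClient(fakeConnect, 3)); // prints 3 (completed cycles)
```

In the real suite, MeteorDown drives many of these clients concurrently against the app's URL.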
We used Heroku to scale up our load testing suite, so we could generate as much load as we needed.
The test app and the load testing suite are available on this GitHub repository.
For app servers, we used single-core 512 MB servers from DigitalOcean, running Ubuntu 14.04. The deployment was done through Meteor Up, and we didn't do any OS-level tweaks beyond the basic setup Meteor Up performs.
So, we ran the load test for the following scenarios:
We captured the following metrics using Kadira:
Here are the results:
For all of the above scenarios, server response time stayed under 8 ms.
If we look at these numbers as a graph, here's what they look like:
Here’s the CPU usage:
RAM usage was always the same (including the RAM used by the application itself):
Cluster has some issues when using a single balancer. Currently, Cluster uses a round-robin algorithm to distribute traffic, so each server processes the same number of requests.
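As a quick illustration, here's a minimal round-robin sketch (not Cluster's actual code): every backend gets an equal share of requests, regardless of what else it is doing.

```javascript
// Round-robin: cycle through the server list, one request per server.
function makeRoundRobin(servers) {
  var next = 0;
  return function pick() {
    var server = servers[next];
    next = (next + 1) % servers.length;
    return server;
  };
}

var pick = makeRoundRobin(['web1', 'web2', 'web3']);
var counts = { web1: 0, web2: 0, web3: 0 };
for (var i = 0; i < 9; i++) counts[pick()]++;
console.log(counts); // each server handled exactly 3 of the 9 requests
```

With this strategy, a balancer that is also busy proxying gets the same share of requests as everyone else.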
When Cluster runs in single-balancer mode, its single balancer should do more proxying rather than processing requests itself. If it could proxy more requests, we could get even more throughput from this scenario.
When using multiple balancers, there was no such issue. Each server processed more than 2,500 subscriptions per minute, which is very close to the control test's 2,700 subscriptions per minute.
There are a lot more things we can optimize. But even at this stage, the performance is really impressive and we are proud of it.
Currently, we are working on multi-core support for Cluster. After that, we'll work on a new routing algorithm based on resource utilization (CPU, RAM, event loop). Then Cluster will balance traffic in a predictable manner and we'll get the maximum throughput.
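One way such utilization-aware routing could look is below. This is only a sketch of the idea, not Cluster's planned implementation, and the equal weighting of the three metrics is illustrative:

```javascript
// Route each request to the server with the lowest combined load score.
// cpu, ram, and eventLoopLag are assumed normalized to the 0..1 range.
function pickLeastLoaded(servers) {
  return servers.reduce(function (best, s) {
    var score = s.cpu + s.ram + s.eventLoopLag;
    var bestScore = best.cpu + best.ram + best.eventLoopLag;
    return score < bestScore ? s : best;
  });
}

var target = pickLeastLoaded([
  { name: 'web1', cpu: 0.9, ram: 0.6, eventLoopLag: 0.4 },
  { name: 'web2', cpu: 0.3, ram: 0.5, eventLoopLag: 0.1 },
  { name: 'web3', cpu: 0.7, ram: 0.4, eventLoopLag: 0.2 }
]);
console.log(target.name); // 'web2', the least loaded server
```

Unlike round-robin, this would send fewer requests to a balancer that is already busy proxying for the others.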