This is the follow-up to my article about Microservices Frameworks.
Introduction
As promised, this is a short post about a benchmark I ran on microservices frameworks.
It is not intended to be a precise benchmark (do not trust the raw numbers); it only gives a general idea of what you get out of the box with these libraries. To go further, you should read the documentation and fine-tune your app if necessary.
Conditions
The framework / library is used as-is, without any tuning (again, you can probably obtain better performance with some configuration).
The raw results are not really important (they depend on the hardware, the network, …), so I will only consider relative results. The most important thing is to run every benchmark under identical conditions, so that the results are comparable.
If you want more precise and serious benchmarks, see https://www.techempower.com/benchmarks/.
Methodology
The bench runs on a Windows machine, with a Java 8 update 92 JVM. The server JVM is started at the beginning of each bench run.
The bench uses Gatling to run HTTP requests:
- GET on the /hello endpoint, with 200 users making 1000 requests each (total: 200000); a request is considered failed if it exceeds 60 s. The test is time-boxed to 5 minutes. The response is a JSON-encoded POJO.
- POST on the /hello endpoint, with 200 users making 1000 requests each (total: 200000); a request is considered failed if it exceeds 60 s. The test is time-boxed to 10 minutes. The payload is JSON, decoded into a POJO.
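To make the workload concrete, here is a minimal sketch of what such /hello handlers could look like, using Sparkjava and Jackson. The POJO shape and names are assumptions for illustration, not taken from the benchmark repository.

```java
import static spark.Spark.*;

import com.fasterxml.jackson.databind.ObjectMapper;

public class HelloService {

    // Hypothetical POJO, serialized to / deserialized from JSON.
    public static class Hello {
        public String message = "Hello, World!";
    }

    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();

        // GET /hello: returns a JSON-encoded POJO.
        get("/hello", (req, res) -> {
            res.type("application/json");
            return mapper.writeValueAsString(new Hello());
        });

        // POST /hello: decodes the JSON payload into a POJO, then echoes it back.
        post("/hello", (req, res) -> {
            Hello payload = mapper.readValue(req.body(), Hello.class);
            res.type("application/json");
            return mapper.writeValueAsString(payload);
        });
    }
}
```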
The results are the mean of three runs, extracted from the Gatling reports.
Results
GET columns cover the 5-minute run, POST columns the 10-minute run.

Framework | Package size (MB) | Startup time (ms) | GET total | GET failed | GET mean (ms) | GET throughput (req/s) | POST total | POST failed | POST mean (ms) | POST throughput (req/s)
---|---|---|---|---|---|---|---|---|---|---
Dropwizard | 15 | 1047 | 200000 | 0 | 274 | 685 | 200000 | 0 | 270 | 700
Restlet | 4.2 | 110 | 180000 | 196 | 300 | 590 | 200000 | 221 | 282 | 578
Restlet / Jetty | 5.8 | 110 | 82926 | 12 | 702 | 277 | 186798 | 25 | 630 | 320
Restlet / Simple | 4.5 | 110 | 200000 | 0 | 258 | 739 | 200000 | 0 | 255 | 735
Sparkjava | 4.1 | 290 | 64477 | 9 | 877 | 227 | 110032 | 9 | 1078 | 180
Spring Boot / Tomcat | 14 | 6905° | 200000 | 0 | 228 | 841 | 200000 | 0 | 232 | 814
Spring Boot / Jetty | 13 | 6905° | 197668 | 0 | 291 | 820 | 200000 | 0 | 229 | 817
Spring Boot / Undertow | 14 | 6905° | 200000 | 0 | 230 | 828 | 200000 | 0 | 226 | 824
Vert.x | 5.1 | 7250° | 200000 | 0 | 196 | 939 | 200000 | 0 | 210 | 891
° These startup times are very different on my Linux laptop (see Observations).
Observations
Vert.x is clearly the fastest of all the competitors, and it stays really strong under heavy load. Note that on my Linux laptop, it starts within 200 ms (I didn't find the reason for the difference).
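For illustration, here is a minimal sketch of the Vert.x programming model (a non-blocking HTTP server whose handlers run on an event loop), using the Vert.x 3 API. This is not the benchmark's actual code, just the general shape of a /hello handler.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Non-blocking HTTP server: the request handler runs on the event loop.
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                 .putHeader("Content-Type", "application/json")
                 .end(new JsonObject().put("message", "Hello, World!").encode()))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}
```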
Just behind comes Spring Boot. Interestingly, whichever embedded HTTP connector you choose (Tomcat, Jetty or Undertow), the final result is almost the same, Jetty being a (very) little bit behind. Again, Spring Boot starts much faster on my Linux laptop (about 2 s instead of 7!).
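The application code itself does not change with the connector; only the build dependencies do. A minimal, purely illustrative Spring Boot sketch (not the benchmark's code):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloApplication {

    // Hypothetical POJO, serialized to JSON by the auto-configured Jackson converter.
    public static class Hello {
        private final String message;
        public Hello(String message) { this.message = message; }
        public String getMessage() { return message; }
    }

    // GET /hello returns a JSON-encoded POJO; the embedded container
    // (Tomcat, Jetty or Undertow) is chosen through the build dependencies.
    @RequestMapping("/hello")
    public Hello hello() {
        return new Hello("Hello, World!");
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}
```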
Restlet, used with the Simple connector, completes the podium. Using Jetty or the internal connector does not seem to be a good choice without extra configuration. Restlet is also by far the fastest to bootstrap.
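Here too the handler code stays the same: with Restlet, the connector (internal, Jetty or Simple) is picked up from whichever org.restlet.ext.* extension is on the classpath. A purely illustrative sketch of a /hello resource (not the benchmark's code):

```java
import org.restlet.Component;
import org.restlet.data.Protocol;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

public class HelloResource extends ServerResource {

    // GET /hello: returns a small JSON document.
    @Get("json")
    public String represent() {
        return "{\"message\": \"Hello, World!\"}";
    }

    public static void main(String[] args) throws Exception {
        Component component = new Component();
        // The actual HTTP connector (internal, Jetty, Simple) is selected by
        // the Restlet extension present on the classpath, not in this code.
        component.getServers().add(Protocol.HTTP, 8080);
        component.getDefaultHost().attach("/hello", HelloResource.class);
        component.start();
    }
}
```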
Finally, Spark is clearly the worst performer, handling only about a third of the GET requests within the allowed time, with a mean response time of 877 ms!
One last thing: consider the package size, the lightest is only about 4 MB, while the heaviest is 15!
Final word
Thank you for reading. Feel free to comment, and to fork the project here: https://github.com/cdelmas/microservices-perf.