Yet another shootout

So you want to see how well your programming language of choice deals with highly concurrent network serving, and you want it to be under controlled circumstances. This repository contains the beginnings of a way to compare how different languages scale under a specific kind of load.

The intention of this repo is that other implementations will be added. Send me pull requests to add yours.

The workload

The reference server is a very lightweight version of a proof-of-work server. You provide it with a string, and it returns that string and a nonce. The string and the nonce, when concatenated and fed into OpenSSL's SHA256 implementation, produce a hash whose hex encoding ends in x zero digits, where x is hardcoded on the server (currently at 2 -- it's important that the client and server match in this regard).
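
To make that concrete, here is a minimal sketch in Node (the same language as the reference client) of what the check and the nonce search look like. The function names and the DIFFICULTY constant are illustrative, not taken from this repo; the only assumptions carried over are SHA256 over the concatenated string and nonce, and 2 trailing hex zeroes.

```js
// Illustrative proof-of-work helpers; names are made up for this sketch.
const crypto = require('crypto');

const DIFFICULTY = 2; // trailing hex zeroes required; must match the server

// True if SHA256(input + nonce), hex-encoded, ends in DIFFICULTY zeroes.
function isValidNonce(input, nonce) {
  const digest = crypto.createHash('sha256').update(input + nonce).digest('hex');
  return digest.endsWith('0'.repeat(DIFFICULTY));
}

// Brute-force search, roughly the work the server has to do per request.
function findNonce(input) {
  for (let nonce = 0; ; nonce++) {
    if (isValidNonce(input, String(nonce))) {
      return String(nonce);
    }
  }
}
```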

Since the goal of this benchmark is to provide just enough CPU load to keep it from being a test of how quickly you can exhaust the available file descriptors and/or sockets on the server's host, the workload should stay fairly light, in the 10-30 ms range per request.

The protocol

Connect to port 1337 on the target machine. As soon as the connection is established, the server sends 'ok\n'. The client then sends the string to be hashed. The server then computes a working nonce and returns the original string, ':', and the computed nonce, and closes the connection. Keep-Alive and pipelining could be added, but that would defeat the whole point.
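
For illustration, a bare-bones Node client that walks through this exchange might look like the sketch below. This is not the benchmarking client shipped in the repo; the host name and test string are placeholders.

```js
// Minimal protocol walk-through, for illustration only.
const net = require('net');

const socket = net.connect(1337, 'localhost'); // placeholder host
let buffer = '';

socket.on('data', (chunk) => {
  buffer += chunk.toString();
  if (buffer === 'ok\n') {
    // Greeting received: send the string we want hashed.
    buffer = '';
    socket.write('hello, shootout'); // placeholder payload
  }
});

socket.on('end', () => {
  // Server replies with '<original string>:<nonce>' and closes the connection.
  console.log('response:', buffer.trim());
});
```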

Benchmarking

There is a client, written in Node, that you can use to test the performance of your implementation. If I have time, I'll write something that will generate a leaderboard. The client is reasonably good about not letting you push it around: it gives the server a chance to warm up and then hits it with a decent level of concurrency. Gaming the client is allowed, but see below to understand why doing so may not be so easy.

Caveats

The benchmarking client doesn't trust your server:

  • It verifies that the string returned by the work server is the same string that it originally sent.
  • It proves, by itself, that the work was done, by verifying that the string + nonce combo produces the required result (see the sketch after this list).
  • The server generates no timing information; the timing and throughput numbers reported by the client therefore include network overhead, but it's hard to think of a real-world network service where that wouldn't be the case.
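
Put together, the response check amounts to something like the following Node sketch. The function name is illustrative, and the 2-trailing-zero difficulty is the value mentioned above; the real client may differ in the details.

```js
// Illustrative client-side verification of a '<string>:<nonce>' response.
const crypto = require('crypto');

function responseIsValid(sentString, response) {
  // In this sketch the nonce is taken to be everything after the last ':',
  // so a ':' inside the original string doesn't break the parse.
  const splitAt = response.lastIndexOf(':');
  if (splitAt === -1) return false;

  const returnedString = response.slice(0, splitAt);
  const nonce = response.slice(splitAt + 1);

  // 1. The server must echo back exactly what was sent.
  if (returnedString !== sentString) return false;

  // 2. The claimed work must check out: the hex-encoded SHA256 of
  //    string + nonce must end in the agreed number of zeroes (2 here).
  const digest = crypto.createHash('sha256').update(returnedString + nonce).digest('hex');
  return digest.endsWith('00');
}
```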

TODO

  • Tweak client performance. You know, with profiling and stuff.
  • Build a distributed client-server version of the benchmarker so that we can hit a service from multiple hosts simultaneously and better simulate a realistic, intensive workload.