This started as a project to mess around with C, pthreads, SO_REUSEPORT and CMake. That is largely still the case, but the project has since transitioned from pthreads to libuv. Look through the version control history to see the initial pthread work.
This has evolved into a nano version of NGINX's architecture: a controller process supervises multiple workers that handle incoming connections, which the kernel distributes among them using SO_REUSEPORT.
                  tcp-echo[ctrlr]
             K| --> \_ tcp-echo[wrk1]
  o/         E| --> \_ tcp-echo[wrk2]
User /| -->  R| --> \_ tcp-echo[wrk3]
  / \        N| --> \_ tcp-echo[wrk4]
             E| --> \_ tcp-echo[wrk5]
             L| --> \_ tcp-echo[wrk6]
The easiest way to get things running is with Docker. Once the requirements are installed, simply run the following:
$ docker-compose up
Now send some data to the server:
$ docker-compose exec tcp-echo nc localhost 8090
foo
foo
bar
bar
Building everything with Docker is easy:
$ docker build .
Several base images are supported; check the requirements
directory to see which can be used and to add more.
Choosing between GCC and Clang is also possible:
$ docker build \
--build-arg BASE_IMAGE=ubuntu:16.04 \
--build-arg CC=/usr/bin/clang \
--build-arg CXX=/usr/bin/clang++ \
.
Building manually can be a bit more involved, as several requirements
are necessary on most platforms. Check the requirements
directory for an entry that matches your platform; it lists
everything necessary to begin building.
If building on macOS, Xcode is required. From there, use Homebrew to install the rest:
$ brew install \
autoconf \
automake \
cmake \
libtool \
valgrind
Regardless of platform, once you have all of the necessary requirements and the code cloned locally, the project can be built using these steps:
- Create and enter build directory
$ mkdir build && cd build
- Generate build files
$ cmake ..
- Build the binaries
$ make
Development is, again, best done in Docker, using a development version of the container that contains everything necessary.
$ docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
Once the development container is running, the locally cloned code is
mounted as a volume at /tcp-echo
, so you can hack on it with local
editors. From there, jump into the development container and build as
necessary.
$ docker-compose exec tcp-echo /bin/bash
$ cmake /tcp-echo
$ make
$ ./tcp-echo
The listening port, 8090
, is exposed locally so you can interact with
the server using local tools.
There are some simple tests that can be run to ensure basic functionality and no memory leaks. A testing version of the container needs to be built before running the tests:
$ docker build \
--target test \
--tag tcp-echo-test \
--build-arg BASE_IMAGE=centos:7 \
--build-arg CMAKE_OPTS=-DCMAKE_BUILD_TYPE=Debug \
.
Once the testing image has been built, run the test script.
NOTE: At this time there are some problems with testing against the Alpine base image, mainly issues with musl libc.
$ ./test.sh
All of the tests are run by CircleCI on each commit to main
. Once all tests pass in the pipeline, a
new image is published to Docker Hub.
Valgrind can be invoked to test for memory leaks and other memory access violations:
$ valgrind \
--error-exitcode=1 \
--leak-check=full \
--show-leak-kinds=all \
--track-origins=yes \
--trace-children=yes \
./tcp-echo
tcpkali is a useful tool for testing throughput and performance:
$ tcpkali localhost:8090 \
--duration 90 \
--dump-one \
--connections 500 \
--connect-rate 300 \
--channel-lifetime 1 \
--message foo \
--message-rate 2
There are command line arguments available to modify the program's
behavior; see ./tcp-echo --help
for more information. Plans are under
way to allow setting these via environment variables or a configuration
file.
TODO