There's a network-server programming pattern so popular that it has become the canonical approach to writing network servers:

Flowchart of the network server design described below

This design is easy to recognise: the main loop waits for some event, then dispatches based on the file descriptor and the state that the file descriptor is in. At one point it was in vogue to actually fork() so that each file descriptor could be handled by a different process, but now "worker threads" are usually created that all perform the same task and rely on the kernel to schedule file descriptors to them.

A much better design is possible thanks to epoll and kqueue; however, most people use these "new" system calls through a wrapper like libevent, which just encourages the same slow design people have been using for over twenty years now.

The design I currently use and recommend involves two major points:

  1. One thread per core, pinned (affinity) to separate CPUs, each with their own epoll/kqueue fd
  2. Each major state transition (accept, reader) is handled by a separate thread, and transitioning one client from one state to another involves passing the file descriptor to the epoll/kqueue fd of the other thread.
Flowchart of the improved network server design

This design has no decision points, uses simple blocking I/O calls, and makes for simple, one-page, performant servers that easily get into the 100k requests/second territory on modern systems.

Creating the thread pool

Ask the operating system how many cores there are. Sometimes reserving some cores makes sense, so let the user lower this number. If raising this number helps, your state transitions are too complex and you will need to break them up.

pthread_attr_t a; pthread_attr_init(&a);
for(i=0;i<t;++i) pthread_create(&worker[i].tid,&a,run,(void*)(long)i);

Then in each thread, do any per-thread initialisation and allocate your kevent/epoll fd:

void *run(void *arg){ int id=(int)(long)arg;
#ifdef __linux__

Setting processor affinity is something that must be done in-process on some platforms, but the system administrator should be able to provide input:

cpu_set_t c;CPU_ZERO(&c);CPU_SET(id,&c);pthread_setaffinity_np(pthread_self(),sizeof(c),&c);

Apple OSX doesn't support pthread_setaffinity_np() directly, but what we need is easy to implement:

extern int thread_policy_set(thread_t thread, thread_policy_flavor_t flavor, thread_policy_t policy_info, mach_msg_type_number_t count);
thread_affinity_policy_data_t ap={id+1};
thread_policy_set(pthread_mach_thread_np(pthread_self()),THREAD_AFFINITY_POLICY,(thread_policy_t)&ap,THREAD_AFFINITY_POLICY_COUNT);
thread_extended_policy_data_t ep={0};
thread_policy_set(pthread_mach_thread_np(pthread_self()),THREAD_EXTENDED_POLICY,(thread_policy_t)&ep,THREAD_EXTENDED_POLICY_COUNT);

Creating the listening socket

Increase the number of file descriptors to cover the number of connections you want to handle, e.g. n=2048 per thread:

struct rlimit r; getrlimit(RLIMIT_NOFILE, &r);
r.rlim_cur=r.rlim_max; setrlimit(RLIMIT_NOFILE, &r);

Disabling lingering is important, otherwise you'll run out of file descriptors:


If the client speaks first (as in HTTP), then enable deferred accepts on Linux:

#ifdef __linux__
 setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &(int){1}, sizeof(int));
#endif

The accept-loop

There's no point in waiting for epoll/kevent since all this loop does is accept connections:

#ifdef __linux__
 struct epoll_event ev={0}; ev.events=EPOLLIN|EPOLLRDHUP|EPOLLERR|EPOLLET; ev.data.fd=fd; epoll_ctl(pick(),EPOLL_CTL_ADD,fd,&ev);
#else
 struct kevent ev; EV_SET(&ev,fd,EVFILT_READ,EV_ADD,0,0,NULL); kevent(pick(),&ev,1,NULL,0,NULL);
#endif

Any socket options should be enabled before handing to the next step. Consider setting a timeout (SO_RCVTIMEO) on the socket instead of tracking timers in your application:

struct timeval tv={0}; tv.tv_sec=30; /* e.g. 30 seconds */ setsockopt(fd,SOL_SOCKET,SO_RCVTIMEO,&tv,sizeof(tv));

Hasan Alayli observes that on Linux you can use accept4() to combine the accept() and fcntl() calls.

Scheduling tasks can usually be done by rotating through the threads:

int pick(void){ static int c; ++c; return worker[c%t].q; }

...however some workloads benefit from analysis here: choosing a worker based on the probability that the input belongs to one type or another can improve mean throughput if some bias is introduced into the pick() routine. Experiment and benchmark.

The request-loop

A task that has some input will begin with an epoll_wait() or kevent() step:

#ifdef __linux__
 struct epoll_event e[1000]; int i,n=epoll_wait(q,e,1000,-1);
 for(i=0;i<n;++i) if(e[i].events&(EPOLLRDHUP|EPOLLERR)) close(e[i].data.fd); else handle(e[i].data.fd);
#else
 struct kevent e[1000]; int i,n=kevent(q,NULL,0,e,1000,NULL);
 for(i=0;i<n;++i) if(e[i].flags&EV_EOF) close(e[i].ident); else handle(e[i].ident);
#endif

Each file descriptor will only be used by a single request in a single state, so keeping an array of input buffers indexed by file descriptor can simplify a lot of algorithms. handle(fd) can read from the input and use write() or sendfile() as necessary; however, if more than one syscall is needed, schedule the task with a worker that performs that operation.