Love and hate for processes

So, I've been thinking about processes lately. They're an amazing abstraction - given how old they are, it's really impressive they've gotten us as far as they have. They're so successful that even our users have internalised them. We invented modern processes for multi-user isolation on Unix minicomputers in 1969. Now, in 2015, the same API is used to separate apps on my phone.

Processes are an elegant abstraction - they're simple and clean. But the process abstraction isn't the right API for making backend services.

Let's think about the process abstraction for a moment. The API has a couple of very clear, very obvious boundaries:

  • A process runs on exactly one computer. The process has access to the CPUs, networking, RAM, filesystem, etc. of that computer, but nothing else. If processes were people, they would be stuck forever in their houses, forced to yell at the people in other houses to get anything done.

This is a little strange, because the kernel already does multiprocessing - it knows how to distribute work between the CPUs in one machine. But we never apply the same principles between computers. For some reason, that problem is the application developer's responsibility, and it must be solved anew for each program.

  • As far as the kernel is concerned, a process is just a black box of bytes with some resources attached. A process is either running or it doesn't exist. To change the code a process is running, we have to stop and restart the entire program. Why can't I hotswap my code in production?
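For what it's worth, you can fake a version of this by hand inside a single language runtime. Here's a toy Python sketch using importlib.reload - the handlers module is made up - and it shows how far this is from a real primitive: no atomic cutover, no rollback, nothing that works across processes, let alone machines.

    # Toy hot-reload inside one Python process. "handlers" is a
    # hypothetical module exposing handle(request).
    import importlib
    import handlers

    def serve(request):
        return handlers.handle(request)

    def swap_code():
        # Re-read handlers.py from disk and rebind the module in place.
        # In-flight calls keep running the old function objects.
        importlib.reload(handlers)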

In a sense, the buttons I have for running programs are "run until you crash" and "kill". The buttons I want are "turn on", "turn off" and "replace code". I want my service to automatically restart itself (or whatever) when it crashes. I want my service to run on as many cores, on as many computers, as it needs given the load. And I want all of that to be automatic.

Upstart and systemd help bridge the gap here by running services automatically and restarting them when they crash. But they don't make hotswapping code any easier, and they don't do any multi-computer process management. Even with those caveats, Upstart and systemd are so complicated! Upstart is by far the simpler of the two, and even it evolved its own mini programming language.
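Strip away the config language and at their core they're doing something like this - a bare-bones restart loop, sketched in Python with a made-up ./my-service binary:

    # Roughly what "restart it when it crashes" boils down to.
    # "./my-service" is a placeholder for your actual server binary.
    import subprocess
    import time

    def supervise(cmd=["./my-service"]):
        while True:
            proc = subprocess.Popen(cmd)
            code = proc.wait()    # block until the service exits
            if code == 0:
                break             # clean exit: we're done
            time.sleep(1)         # it crashed: back off, then restart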

Even inside a single computer, the process / thread model takes us to weird places. Think about all the different ways we've written our IO work queues over the years. There are thread pools, async IO with event loops, and single-threaded process pools coupled with load balancers. Are you using cooperative multitasking or kernel threads? Or maybe userland threads? And if you really want high performance, there are complete userland networking stacks. All of these systems are functionally identical - in a sense, they're equivalent implementations of the same idea. We have so many options because kernel threads, processes and the kernel's scheduler are all such complicated, heavy pieces of infrastructure. There are huge tradeoffs involved in using them, and the decision about whether to use them runs deep through our software.
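To make the "functionally identical" point concrete, here's the same toy handler scheduled two different ways in Python - once on a thread pool, once on an asyncio event loop. The handler itself is invented; the interesting part is that the choice between the two versions infects every line of it.

    # The same toy request handler, run on kernel threads and then
    # on a cooperative event loop.
    import asyncio
    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle(req):
        time.sleep(0.1)           # stand-in for blocking IO
        return "done %d" % req

    async def handle_async(req):
        await asyncio.sleep(0.1)  # stand-in for non-blocking IO
        return "done %d" % req

    # Version 1: a thread pool backed by kernel threads.
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(handle, range(4))))

    # Version 2: cooperative multitasking on an event loop.
    async def main():
        print(await asyncio.gather(*(handle_async(r) for r in range(4))))

    asyncio.run(main())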


In a sense, the thing I want to express to the computer is:

On an incoming request:

  • Find an available CPU core in my data center.
  • Run my code there.

If no CPUs are available, queue the request and handle it when you can.

Oh, and cache my code. And sometimes there'll be connection-specific state (websockets, etc.) to manage too.

Sometimes my code will change. When that happens, future requests should be handled by the new code instead of the old code. (And there might be some rollover logic here - like telling clients to refresh, or running DB migrations, or something.)
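If I had to guess at the shape of that API, it might look something like the sketch below. Every name in it is invented, and it's faked here as a single-machine toy on a thread pool - the real thing would schedule across a datacenter - but the interface is the point: submit work, queue it when everything's busy, and swap code without restarting anything.

    # A wished-for interface, faked on one machine. All names are invented.
    from concurrent.futures import ThreadPoolExecutor

    class Cluster:
        def __init__(self, workers=8):
            # Stand-in for "all the idle cores in my datacenter".
            self._pool = ThreadPoolExecutor(max_workers=workers)
            self._handlers = {}

        def register(self, name, handler):
            self._handlers[name] = handler

        def submit(self, name, request):
            # Run the *current* code for `name` on an available worker.
            # The executor queues the request if every worker is busy.
            return self._pool.submit(lambda: self._handlers[name](request))

        def replace_code(self, name, new_handler):
            # Future requests see the new code; in-flight ones finish on the old.
            self._handlers[name] = new_handler

    cluster = Cluster()
    cluster.register("hello", lambda req: "v1: " + req)
    print(cluster.submit("hello", "ping").result())   # v1: ping
    cluster.replace_code("hello", lambda req: "v2: " + req)
    print(cluster.submit("hello", "ping").result())   # v2: ping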

There are a lot of moving parts here, and messing with the internals of running programs sounds scary and complicated. But the hard part isn't building something which solves all these problems - Google, Amazon and Hadoop have all managed it. The hard part will be finding the right new process abstraction. We need something simple and small and fast. I want something as clean as a process, but which I can turn on and off, or run on any computer. Modern datacenters leave an unbelievable amount of performance on the table because it takes so much work to move past the 1 application = 1 computer limitation.

Maybe what I want is the Erlang VM - or some modern version of it. I'm not really sure yet. But if Alan Kay's research team can write a complete 2D rasteriser in 435 lines of code, I have hope that we can scale our services with something simpler than Hadoop and its 2.1 million lines.

I don't quite know what a process replacement should look like, but we need to think about it. The idea that a process is a black box of memory and code which runs on a single computer until it crashes isn't good enough anymore. I think we can do a lot better.