We have around 18 different microservices, each with its own configuration and repo. Setting them all up, keeping them up to date, and managing them locally is a pain (and also turns my laptop into an inferno). Our staging environment is on Runnable, which spins up all 18 of these in an isolated stack so I do not have to worry about configs or management.
Our stack does use swarm under the hood.
So far we have not seen any major issues with scheduling across 100 servers. However, we have seen issues with the Swarm event stream disconnecting. Our workaround was to connect to the event stream of the Docker engines directly and use the `since` parameter.
We currently use weave net to handle docker networking and have not had major issues with it.
Our stack is composed of 15 stateless microservices, all built with Node.js.
MongoDB, Redis, and Neo4j are used for persistence, caching, and dependency management.
The fundamental piece of our system is RabbitMQ, which is used as our event queue.
The architecture is designed around events. When an application dies or a commit is made, an event is propagated through the system.
We also use a lot of open source software.
Docker Machine + Swarm is used to schedule and run applications.
Docker Registry is used to store images.
We use Weaveworks' Weave for inter-container communication.
We are starting an engineering blog as well; stay tuned for deep dives into our architecture!
http://blog.runnable.com/
Interesting. I haven't worked much with Node.js (haven't really gotten to it), but I have with the rest of the stack. I'm into messaging and networking. Maybe drop me a line?
Obvious beginner question here: what use cases would this work well for? I ran the code you linked and vaguely understand what's happening. Could this, say, be used to distribute database requests to another server?