How do we host our internal tools?
AWS EC2 is such a beautiful tool. Imagine setting up a new instance with ease, running any service you need on it and exposing it to the world, or hosting internal services and exposing them to your coworkers via VPN without any hassle; new public Elastic IPs on demand, instances starting in two minutes, database instances spawned and ready to use with a few commands, automated backups and updates... yes, heaven.
However, when the company grows and the internal tools start multiplying like mushrooms after the rain, you have to choose between a few servers hosting many services each, or hosting every service on an instance as small as possible. Either way, you end up with boxes that are impossible to maintain, full of different and incompatible requirements (library versions, languages, dependencies, patches, databases, and so on...), or with a huge bill at the end of the month. Yes, AWS is not a cheap service. As Bill Gates said: 'I didn't get rich by writing a lot of checks.'
Also, when you start checking the resource usage of your cluster of servers, you notice the CPU load is not even close to one percent, the hard drives are mostly empty, and RAM usage is in most cases really low. At this point you start asking yourself: why are we paying all this money to AWS?
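If you want to run the same spot check on your own boxes, a few standard commands are enough (a quick sketch; exact output and flags vary slightly between distributions):

```shell
# Load average over the last 1/5/15 minutes -- on an underused box
# these numbers hover near zero
uptime

# Disk usage per filesystem, human-readable
df -h

# RAM and swap usage in megabytes (procps, present on most Linux distros)
free -m
```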
Where does this take us? Can we put all the services together in one box and isolate them from each other somehow? YES, but how?
Well, to be honest, we are still implementing it; but so far the results have exceeded all our expectations!
When I started to manage our datacenter we only had one EC2 box hosting our corporate website and a bunch of other boxes for our clients. Over three years the number of servers grew and shrank many times, because each new service was first tested on its own box; once it was stable and ready, it was migrated to one of our permanent boxes. As you can imagine, this process was not always as quick as we wanted... At some point we ended up with ten boxes running a growing list of services: the corporate site already mentioned, our continuous integration service, code review tools, monitoring systems, trackers, build machines, databases, automated cron tasks, just to name a few. Today we only have two boxes running permanently for our internal stuff. One of them runs a service that has not been migrated yet, since its Docker version needs more testing, while the other one handles the rest of the services.
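Packing many services into one box this way looks roughly like the sketch below, using docker-compose. The service names and images here are illustrative, not our actual stack:

```yaml
# docker-compose.yml -- one host, one container per internal service,
# each with its own isolated dependencies
version: '2'
services:
  website:              # corporate site
    image: nginx:stable
    ports:
      - "80:80"
  ci:                   # continuous integration
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
  db:                   # shared database for the internal tools
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up -d` then brings the whole set up on one instance, and each container keeps its own library versions, languages, and patches without stepping on its neighbors.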
How big is that box, and what is its resource usage? Well, it's not that bad. We use a t2.large, which is overkill, but we are still adding stuff to it.
The CPU usage is really low, except for one resource-hungry task that runs each time we send a project to build. This task is the only reason we don't try a smaller box: I can't afford slow response times from the other services while it is running.
Memory usage sits at about 65% most of the time... which is the only thing that makes me wonder whether we may need to move to an r3.large at some point, or leave AWS.
To preserve the data in the container volumes, we have an Elastic Block Store volume attached to the instance, and all the Docker volumes are mounted so that their data is saved on this big volume.
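The wiring for this is straightforward. A sketch, assuming the EBS volume shows up as /dev/xvdf and gets mounted at /data (both names are ours; adjust them to your setup):

```shell
# One-time: format the EBS volume and mount it (requires root)
mkfs -t ext4 /dev/xvdf
mkdir -p /data
mount /dev/xvdf /data

# Make the mount survive reboots; nofail lets the instance boot
# even if the volume is missing
echo '/dev/xvdf /data ext4 defaults,nofail 0 2' >> /etc/fstab

# Bind each container's volume to a directory under the EBS mount,
# so the data lives on /data rather than the instance's root disk
docker run -d --name postgres \
  -v /data/postgres:/var/lib/postgresql/data \
  postgres:9.5
```

With this layout, replacing or resizing the instance doesn't touch the data: detach the EBS volume, attach it to the new box, and start the containers again.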
After the migration, for the first time I feel I'm not wasting money and computing power on low-usage applications. Maintenance across the different environments dropped significantly, since I only have to address one at a time. Before you start your own journey, just remember one thing: even if you can isolate environments in each container... it's a good idea to have a separate host for all the testing.