This interview was done for our Microservices for Startups ebook. Ben is the co-founder of Honeybadger, which handles production monitoring for web developers and offers zero-instrumentation, 360-degree coverage of errors, outages, and service degradation.
For context, how big is your engineering team? Are you using microservices and can you give a general overview of how you’re using them?
Our engineering team is the whole company — three people. We are using a couple of microservices. One processes all the incoming API traffic for errors that are being reported for our customers’ applications. The other handles receiving logplex traffic from Heroku applications, recording errors that are reported from Heroku’s platform.
Did you start with a monolith and later adopt microservices? If so, what was the motivation to adopt microservices?
We started with a Rails monolith, and we adopted microservices at the point that the inbound API traffic started to overwhelm our app. It became apparent that we needed to scale them differently, and that we could benefit from having a lighter-weight app handling the API traffic. Splitting out that microservice wasn’t too complicated, as we had already isolated the API code in the monolith by using job queues.
How did you approach the topic of microservices as a team/engineering organization?
There wasn’t much need for alignment — we had a clear sense that simply moving a piece of functionality from our monolith to a separate app that had a single responsibility qualified as implementing a microservice.
How much freedom is there on technology choices? Did you all agree on sticking with one stack, or is there flexibility to try something new?
We gave ourselves 100% freedom in considering technology choices. We did spikes on two or three technologies that we opted not to use in the end, primarily due to not wanting to complicate our deployment story. In other words, while we considered introducing new languages and new approaches into our tech stack when creating our microservices, and we actually did deploy a production microservice on a different stack at one point, we decided to stick with technology that we knew in order to not add complexity to our ops stack. We periodically revisit that decision to see if potential performance or reliability benefits would be gained by adopting a new technology, but so far we haven’t made a change.
Have you broken a monolithic application into smaller microservices? If so, can you take us through that process?
We looked at what pieces of functionality were logically separate from others. In our case, it was clear that data ingestion could and should be completely separate from our customers’ interaction with their data stored in our database. At that point it was a straightforward matter of removing the code that handled the API endpoints from our monolith and putting that code into a separate Rack app. We also moved the relevant tests from one app to the other. One key decision that we made at the outset was that the new app should have no access to the primary database. It would need to be able to operate completely independently, which would allow our API to continue to receive data even if our main database was down or our main service was unavailable. We rely heavily on Redis, not only for storage for the job queue, but also as a caching layer for the data from the main database that the microservices need to do their work.
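The shape of that ingestion service can be sketched as a tiny Rack app that validates an incoming payload and hands it straight to a queue, with no connection to the primary database. This is a hypothetical illustration, not Honeybadger's actual code: the `/v1/notices` path, the `IngestApp` name, and the queue object (which in production would be a Redis-backed adapter) are all assumptions.

```ruby
require "json"

# Minimal ingestion-only Rack app sketch (all names hypothetical).
# It validates the payload and pushes it onto a queue object -- in
# production this would be a Redis list -- without ever touching the
# primary database, so ingestion keeps working even if the main app
# or database is down.
class IngestApp
  def initialize(queue)
    @queue = queue # anything responding to #push, e.g. a Redis adapter
  end

  def call(env)
    unless env["REQUEST_METHOD"] == "POST" && env["PATH_INFO"] == "/v1/notices"
      return [404, {"content-type" => "text/plain"}, ["Not Found"]]
    end

    body = env["rack.input"].read
    JSON.parse(body)  # reject malformed JSON up front
    @queue.push(body) # durable hand-off; background workers drain the queue
    [201, {"content-type" => "application/json"}, ['{"ok":true}']]
  rescue JSON::ParserError
    [422, {"content-type" => "text/plain"}, ["Invalid JSON"]]
  end
end
```

The key design point is the direction of the dependency: the microservice only ever writes to the queue, so it has no reason to know the primary database exists.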
How have microservices impacted your development process? Your ops and deployment processes?
Our development process hasn’t been impacted much. We do have some additional context switching costs when moving from one codebase to another, but our monolith gets updated far more often than our microservices do, so even that is minimized. I guess the lesson learned is that it helps to set boundaries around a microservice according to changes in business logic. If you have one area of your code that almost never changes, but another area that changes regularly, it may make sense to use that as an indication of what can be split off into a microservice. Ideally you’d be able to create your microservice and then be able to run it without much change for an extended period of time.
When we split off our microservices we evaluated other programming languages (Elixir and Go instead of Ruby) and deployment strategies (Docker or blue/green EC2 instances instead of Capistrano). We even went to production with Elixir and Docker for a while. In the end, though, we decided against introducing additional complexity in our ops and deployment processes, and we ended up deploying a Rack app with Capistrano. We did, however, segregate the microservices into separate autoscaling groups with their own load balancers, and the Ansible roles we use to configure the instances differ, so we still added some operational burden even while using the same stack.
How have microservices impacted the way you approach testing?
We haven’t seen an impact to our approach to testing. Each microservice has tests, just like our monolith does, but we don’t have cross-service tests, since they don’t talk to each other directly. Since data flows only in one direction, and since we use a message bus for that data, we just test our microservices in isolation.
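Because data flows in only one direction through the message bus, each side can be tested in isolation: a test enqueues a message and asserts on what the consumer does with it, with no other service running. The sketch below illustrates that pattern; the `NoticeWorker` name, the message shape, and the in-memory queue and store are all hypothetical stand-ins, not Honeybadger's actual code.

```ruby
require "json"

# Hypothetical queue consumer: drains one JSON message from the queue
# and records the fields it cares about. In tests, the queue is just
# an Array standing in for the real message bus.
class NoticeWorker
  def initialize(queue, store)
    @queue = queue
    @store = store
  end

  # Processes a single message; returns false when the queue is empty.
  def work_one
    raw = @queue.shift or return false
    notice = JSON.parse(raw)
    @store << { "class" => notice["class"], "message" => notice["message"] }
    true
  end
end
```

The test never needs the producer side at all: it fabricates the same message the producer would have enqueued, which is exactly what one-directional data flow buys you.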
How have microservices impacted security and controlling access to data?
Security and data access haven’t been issues for us since our microservices only ingest data and are disconnected from the rest of our system.
Thanks again to Ben for his time and input! This interview was done for our Microservices for Startups ebook. Be sure to check it out for practical advice on microservices.