This session is about taking agile methods all the way to production. At Betable, we've settled into a delivery process that lets us deploy whenever we want while retaining high confidence in our service. That confidence is not to be confused with cockiness: it is built by processes and tools.
Automated testing guides our development process and gates the road to production. Every engineer plays an active role in QA as opposed to having a dedicated QA team; we'll talk briefly about the advantages and disadvantages of this decision. We'll discuss the particular way we organize our code into libraries and services to maximize testability and minimize coupling between both systems and the engineers that build them. We'll discuss mocking (and try to refrain from getting religious about it), where we use it, where we don't, and why.
A continuous integration server is responsible for running tests automatically in response to each push. That's hardly novel; it's what happens after the tests pass that gives us our confidence. We'll discuss how we used to package software for deployment, how we do it now, why we changed, and why it works so well. I'll talk about how this process helps us keep dependencies few and simple and why this is so important to us.
We'll then turn our focus to Betable's staging and production environments, where the effort we put into packaging and testing pays dividends. I'll compare several strategies for process supervision and restarting during deploys and attempt to judge them by their impact on availability at these critical inflection points. We'll talk about the tradeoffs between retrying failed requests and the potential for queries of death when we do so.
In staging and production we trade testing tools for verbose logging and detailed metrics. Much is said elsewhere about these tools, so I will focus on how they feed back into our process. Metrics are our proof that our confidence is not in vain. Logs are our visibility into our running systems.