1) One code repository per application for multiple deployments
A codebase is a versioned code repository (thanks to tools like Git, Mercurial, or SVN). From a codebase, you can build immutable releases that can be deployed in different execution environments. A deployment is a running instance of the application.
There are two common anti-patterns. First, if a single codebase contains multiple applications, it should be split so that each application gets its own codebase. Second, if several applications need to share code, that shared code should be factored into a library (with its own codebase) that the applications then declare as a dependency.
2) Dependency isolation and declaration
An application must declare all of its dependencies, exhaustively and explicitly, in a manifest (a requirements.txt file for Python, a pom.xml for Maven, …), and isolate them at runtime so that it never relies implicitly on packages installed system-wide. A new developer should be able to check out the codebase and run the application with nothing more than the language runtime and a dependency manager installed.
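As an illustration, explicit declaration lets a process fail fast at startup if a dependency is missing or if it is not running in an isolated environment. A minimal Python sketch; the `REQUIRED` list is hypothetical, and in practice it would be parsed from the manifest:

```python
# A sketch, not a real dependency manager: check isolation and declared
# dependencies at startup. REQUIRED is a hypothetical, illustrative list.
import sys
from importlib import metadata

def in_isolated_env() -> bool:
    # inside a virtualenv, sys.prefix differs from sys.base_prefix
    return sys.prefix != sys.base_prefix

def missing_dependencies(required):
    # return the declared packages that cannot be resolved in this environment
    missing = []
    for name in required:
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing
```

A startup script could call `missing_dependencies` and refuse to boot if the result is non-empty, instead of failing later with an obscure import error.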
3) Store the configuration in the environment
The idea is to separate the code from the configuration. The configuration covers everything that can vary from one deployment to another: the URL of an SMTP server, a database endpoint, the credentials to connect to an API like Twitter or Google Maps. It should not be confused with the application’s internal settings, which remain unchanged from one deployment to another. To be able to deploy an application to several environments, you cannot define the configuration in the code. We sometimes tend to put all this in an XML file, but this limits flexibility: the file ships with the code, so every deployment ends up with the same configuration.
Moreover, sensitive data such as credentials should never appear in plain text in the code or in a configuration file, mainly for security reasons. By the way, an interesting test to evaluate whether your configuration is properly externalized is to imagine making your code public tomorrow. Would it reveal sensitive information?
The way to go is to put this configuration in environment variables, which are injected at runtime. Tools like HashiCorp Vault are designed to inject sensitive data at runtime.
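For example, configuration can be read from the environment once at startup; a minimal Python sketch, where the variable names (`SMTP_URL`, `DB_DSN`, `LOG_LEVEL`) are assumptions for illustration:

```python
import os

class Config:
    """Configuration pulled from the environment, never from the code."""
    def __init__(self, env=os.environ):
        # required settings: fail fast at startup if one is missing
        self.smtp_url = env["SMTP_URL"]
        self.db_dsn = env["DB_DSN"]
        # optional setting with a safe default
        self.log_level = env.get("LOG_LEVEL", "INFO")

# in a real deployment these values would be injected by the platform
cfg = Config({"SMTP_URL": "smtp://mail.example.com:587",
              "DB_DSN": "postgres://db.example.com/app"})
```

Failing fast on a missing required variable surfaces misconfiguration at deploy time rather than deep inside a request handler.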
4) External services as attachable and detachable resources
A Cloud application usually depends on external services (databases, file storage, an SMTP server, …) and connects to them via resources, usually a URL plus credentials for authentication. We saw above that these resources should not be hardcoded but instead set through external configuration. The code can therefore assume that the environment’s configuration will allow it to connect to any external service.
The idea is to be able to detach and attach an external resource at any time. If one of the external services fails, we simply modify the resource in the Cloud environment. You don’t need to touch the code to swap one external service for another.
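To illustrate, a backing resource can be described entirely by a URL held in configuration, so detaching one service and attaching another is just a configuration change. A minimal Python sketch; the `postgres://` URL shape is a common convention, used here as an assumption:

```python
from urllib.parse import urlparse

def parse_resource(url):
    # split a resource URL into the pieces a driver needs to connect
    parts = urlparse(url)
    return {"scheme": parts.scheme,
            "host": parts.hostname,
            "port": parts.port,
            "user": parts.username,
            "password": parts.password,
            "database": parts.path.lstrip("/")}

# swapping the managed database for another one is a new URL, no code change
res = parse_resource("postgres://app:secret@db.example.com:5432/orders")
```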
5) Assembly, publication, and execution
This principle supports the strict separation of the build and execution stages. From a codebase, we produce an artifact (a build, for example, a JAR file for Java code), to which we add external configuration elements to produce an immutable release. This release is intended to be deployed in the execution environment (dev, staging, production, …). Finally, the execution phase makes the application work in the execution environment by launching one or several processes.
Separating the stages is, above all, a way to maximize delivery capacity and spare teams the anxiety of production releases, thanks to the confidence a CI/CD (Continuous Integration + Continuous Delivery and Deployment) process provides. It is also useful in production, where deployment tools allow rollback operations to return to a previous release in case of problems. Implementing a CI/CD pipeline greatly facilitates this entire process, and it is therefore important to version and archive each release.
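The three stages can be sketched as functions, each consuming the previous stage’s output; the names and structures below are illustrative, not a real deployment tool:

```python
import time

def build(commit):
    # build: turn a codebase commit into an immutable artifact
    return {"artifact": f"app-{commit}.jar"}

def release(artifact, config):
    # release: combine the artifact with environment config under a unique id
    return {**artifact, "config": config,
            "release_id": f"r-{int(time.time())}"}

def run(rel):
    # run: launch the application's processes from a given release
    return f"running {rel['artifact']} as {rel['release_id']}"
```

The key property is one-way flow: a release is never edited in place; a config change produces a new release, which is what makes rollback trivial.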
6) Stateless processes
Each process is stateless: it makes no assumptions about the contents of memory before processing a request. The idea is that your application should not maintain state between requests. Cloud means scaling, with processes multiplied to support the load, so with multiple processes running the same task, you cannot assume that the same process will handle a future request.
If one of these processes needs to store data, it must do so via an external service. Indeed, the data produced by the processes is often volatile (cached or written to disk temporarily). If a transaction spans three requests, you should assume that three different processes may handle them, and that this must not be a problem.
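A minimal sketch of a stateless handler: the session store is injected from outside, so any process can serve any request. Here a plain dict stands in for an external service such as Redis (an assumption for illustration):

```python
def handle_request(store, session_id, item):
    # the handler keeps nothing between calls; all state lives in the store
    cart = store.get(session_id, [])
    cart = cart + [item]       # no mutation of process-local state
    store[session_id] = cart   # persisted in the external backing store
    return cart

store = {}  # stands in for an external key-value service (e.g., Redis)
handle_request(store, "s1", "book")
handle_request(store, "s1", "pen")  # could just as well be another process
```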
7) Use ports to expose services
The idea is that a 12-factor application is self-sufficient and does not rely on a web server injected at runtime (an Apache server, Tomcat, …). It is the application itself that binds to a port and exposes its services, usually over HTTP. And it is the Cloud provider that should take care of port assignment and the mapping to the outside, in particular to manage routing efficiently.
Your application is composed of one or more services, and each of them is accessible from the outside and by other services via a URL and a port. A service can become an external service for another application.
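A minimal, standard-library sketch of self-contained port binding. `PORT` is a common platform convention (an assumption here), and the default of 0 simply lets the OS pick a free port when nothing is configured:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the application serves HTTP itself; no external web server injected
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

port = int(os.environ.get("PORT", "0"))  # 0: let the OS pick a free port
server = HTTPServer(("0.0.0.0", port), Handler)
# server.serve_forever()  # commented out so this sketch does not block
```

In a real deployment the platform injects `PORT` and maps it to public routing; the application never hardcodes its address.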
8) Concurrency and independent processes
To take advantage of the elasticity of the Cloud, your application must be able to decompose into a set of independent processes, some of which can run concurrently (or rather, cooperatively) with the others. Scaling horizontally then lets your application support a larger load by adding processes (workers) for the tasks that require heavy computation.
If your application includes two business services, Users and Orders, and the Orders service is called 10 times more often than Users, separating the two into distinct processes (e.g., two microservices) allows you to deploy one Users process and, why not, up to ten Orders processes. The latter are concurrent because they run on the same infrastructure (e.g., a Kubernetes cluster); nevertheless, they cooperate, because each one can pick up tasks when the others cannot.
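A minimal sketch of scaling out with worker processes; the order-handling task is a placeholder:

```python
from multiprocessing import Pool

def handle_order(order_id):
    # each worker processes an order independently, sharing nothing in-process
    return f"order {order_id} processed"

if __name__ == "__main__":
    # scale horizontally by raising the number of worker processes
    with Pool(processes=4) as pool:
        results = pool.map(handle_order, range(10))
        print(len(results), "orders handled")
```

Because the workers share no in-process state (factor 6), adding or removing them is safe, which is exactly what makes this kind of horizontal scaling possible.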
9) Quick starts and graceful stops
In the Cloud, processes are perceived as volatile and disposable: they can be started or stopped at any time. Processes must therefore be ready to use quickly: if they are launched for scalability purposes, it is important that they become operational fast, to avoid saturating the running instances (or a denial of service if none are running). In addition, developers should plan for graceful shutdowns, i.e., for the moment the application is disconnected or crashes: it is best to intercept these signals and return the work in progress to a job queue, so as not to corrupt any data.
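A minimal sketch of a graceful shutdown: on SIGTERM the process stops taking new work and hands unfinished jobs back to a queue. Here a plain list stands in for a real job queue (an assumption for illustration):

```python
import signal

shutting_down = False

def on_sigterm(signum, frame):
    global shutting_down
    shutting_down = True  # finish or re-queue current work, then exit

signal.signal(signal.SIGTERM, on_sigterm)

def work_loop(jobs, requeue):
    for job in jobs:
        if shutting_down:
            requeue.append(job)  # return the job instead of dropping it
            continue
        # ... process the job ...

# simulate the platform sending SIGTERM to this process
signal.raise_signal(signal.SIGTERM)
pending = []
work_loop(["job-1", "job-2"], pending)  # both jobs go back to the queue
```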
10) Less gap between development and production
The goal is to avoid gaps between the development context (where code is written and deployed locally) and the production context (where the application is deployed to users). These disparities appear when the people working in each context do not collaborate effectively or do not use the same tools (for example, when a developer’s local database version differs from the one used in production).
12-factor applications are designed for continuous deployment, hence minimizing the gap between development and production. The main guidelines are to ensure that a code change can be quickly sent to production, that the people involved in development are also involved in deployment, and finally, that the tools used in both contexts are as close as possible. This is very much in line with DevOps principles.
11) Handle logs as flows
Logs are streams of events that trace the behavior of an application; an event is usually a line of raw text with a timestamp. These logs are traditionally written to files on the hard disk, and the code that manages them is embedded in the application, which is thus responsible for both producing and storing the logs. The purpose of logs is to be processed and analyzed a posteriori by an external service. However, writing to disk makes little sense in the Cloud, where this resource is ephemeral: we have no idea where our applications are running, since they can be launched on instances created dynamically by the Cloud provider.
The idea here is to relieve the application of the whole log storage phase. The application must generate events on its standard output, and that’s all. It is up to the execution environment to collect, assemble and store these logs. One can imagine using external services such as Logstash to route log streams from different applications to different storage systems for further processing (think about the ELK stack). And finally, this allows for refocusing the application code on the core business.
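A minimal standard-library sketch: the application emits each event as one line on its standard output and manages no log files; collection, routing, and storage are left entirely to the platform:

```python
import logging
import sys

handler = logging.StreamHandler(sys.stdout)  # events go to stdout, not a file
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order received")  # one event = one line on the stream
```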
12) Administration processes
Maintenance operations such as database migrations or update scripts are sometimes necessary in production environments. These administration processes should be one-off operations, executed in the same execution environment as the application. The code implementing these operations should be shipped with the application code to avoid synchronization problems.
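A minimal sketch of a one-off admin process shipped with the app and sharing its configuration loading; `migrate` and `load_config` are illustrative names, not a real framework:

```python
import os

def load_config(env=os.environ):
    # the exact same configuration path the long-running app uses
    return {"db_dsn": env.get("DB_DSN", "sqlite:///local.db")}

def migrate(config):
    # one-off operation, run in the same execution environment as the app
    return f"migrated schema on {config['db_dsn']}"

if __name__ == "__main__":
    # e.g., launched once per release against the same environment
    print(migrate(load_config()))
```

Because the script ships in the codebase and reuses the app’s configuration, it can never drift out of sync with the code or target the wrong database.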
The two questions that come to mind at this point are:
- Do all of these principles have to be followed for your application to be shaped for the cloud?
- Do all these principles reach a consensus?
Well, to answer quickly, I would say no and no. But that’s the easy way out, and I don’t really like that. In fact, one can imagine that many existing Cloud applications do not respect these principles and still manage to run. But is that really the only criterion that matters? These 12 points are, above all, here to help you start your thought process: see them as an exercise where we take a step back from our Cloud application and try to understand why we do not fully respect this or that element. There is a lot of content on the Web that discusses the twelve-factor app, proposing different visions and critiques to go further. It’s always interesting to contrast these points of view!
Beyond the twelve-factor app