This is the third part of a multi-part article on the Terraform E+A pattern. If you're just joining us, I suggest starting from the beginning to understand the goals of this pattern and the terminology we're using to describe it.

In the previous part we created two environments that provide infrastructure that will be shared by all of our applications. In this part we will write Terraform configurations to deploy the applications themselves, with a separate deployment for each environment.

Why Use Terraform for App Deployment?

Before we continue, it's worth digressing a little into why (and whether!) to use Terraform for deploying applications at all. Since Terraform's early days there has been, among many users, an idea that Terraform's purpose is to orchestrate the creation of only the shared infrastructure, and that application deployment should instead be done directly with different tools such as Kubernetes or Nomad.

It's important to note that this is not an either/or proposition. Although Terraform's bread and butter is deploying low-level infrastructure on cloud platforms (these were the first resources implemented, and remain the most heavily used), fundamentally Terraform is just a tool for wiring things together, and its platform support grows broader with each release. Terraform can, for example, be used to deploy jobs into Kubernetes and Nomad clusters, as an alternative to using the native tools for these platforms directly.

So now that we've established that we can use Terraform for app deployment, why should we? The major reason is that your application probably does not live isolated in its own world: all but the simplest applications have dependencies on other applications and on shared infrastructure. By describing these relationships via Terraform you can help future maintainers understand your system architecture and more easily implement architecture changes over time.

An additional emerging reason is the rise of mixed architectures that blend traditional deployment via virtual machines or containers with higher-level abstractions such as AWS Lambda and Amazon API Gateway. Terraform makes it straightforward to blend these technologies to use the best tools for each job, rather than being constrained such that applications must always consist only of jobs in a particular container scheduler.

With all of that said, for the remainder of this article I will stick to basic EC2 compute infrastructure, because I expect it's the most familiar to readers and thus won't distract as much from the overall approach. The "immutable EC2 servers" approach illustrated here is falling out of favor due to how long it takes to deploy new code, but as usual the pattern is concerned with how you connect things together rather than exactly which technologies you use, so feel free to substitute Heroku apps, AWS Lambda functions, or any other application hosting technology you like.

Per-application Terraform Config

Having set up our environments, our next task is to configure each of our applications to deploy into them. Whereas we created a separate configuration for each of the environments themselves, for applications we will use only a single configuration that is deployed once for each environment.

This different tradeoff at the application layer is deliberate: it encourages applications to deploy as similarly as possible in each environment, pushing any necessary differences down into the environment configuration, which is presumed to change less often. The smaller the deviation between environments, the more useful any pre-production environments will be.

An important concept within an application deployment is the idea of a deployment artifact. This is a packaged representation of the code for a particular version of the application, and its form will depend on your choice of deployment technology. For our simple example using EC2 instances, our artifacts will be an Amazon Machine Image (AMI) per application. If you are using container-based technology, you might instead produce a container image and upload it to a repository. If you're using AWS Lambda, your artifact will be a zip file on Amazon S3 containing the application code. How an artifact gets built and what form it takes is beyond the scope of this pattern, and you should feel free to use whatever build tools are idiomatic for your target platform.

I recommend keeping each application's build scripts and Terraform configuration within that application's own code repository. This makes it easier to evolve the build and deploy pipeline along with the code. Where you keep it in the repository is up to you, but for our purposes here we will create "build" and "deploy" directories in the root of each application repository containing the build scripts and Terraform deploy configuration respectively.

The "store" application

Here's how the directory structure might look for the "store" application that provides our system's backend API, which we'll assume is a Node.js application:

app.js
package.json
build/
  build.sh
deploy/
  vars.tf
  storage.tf
  server.tf
  public.tf

We're going to focus on the contents of the deploy directory here, assuming that the build.sh script does whatever is appropriate in order to produce an AMI that is configured to join the Consul cluster and run the application on boot.

Our vars.tf in this case deals with two concerns: specifying the artifact(s) to use for this deployment, and collecting necessary configuration data from the configuration store (Consul in our example) so we can successfully participate in the environment. Here's how it might look for this application:
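
This is only a sketch: the key names under config/ (env_name, aws_region, app_subnet_ids, elb_subnet_ids) are assumptions about what the environment configurations from part two publish into Consul, so adjust them to match whatever your environments actually write.

variable "environment_api_addr" {
  description = "host:port address of the Consul API for the target environment"
}

variable "server_ami_id" {
  description = "ID of the AMI artifact produced by the most recent build run"
}

# Everything environment-specific comes from the target environment's Consul
# cluster, so selecting an environment is just a matter of pointing this
# provider at the right cluster.
provider "consul" {
  address = "${var.environment_api_addr}"
}

data "consul_keys" "environment" {
  key {
    name = "env_name"
    path = "config/env_name"
  }
  key {
    name = "aws_region"
    path = "config/aws_region"
  }
  key {
    name = "app_subnet_ids"
    path = "config/app_subnet_ids"
  }
  key {
    name = "elb_subnet_ids"
    path = "config/elb_subnet_ids"
  }
}

provider "aws" {
  region = "${data.consul_keys.environment.var.aws_region}"
}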

Since this application is an API for a data store, it'll need somewhere to store the data. For the sake of this example we'll assume data lives in an Amazon DynamoDB table, so our storage.tf file looks like this:
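
The table schema below is just a placeholder; the parts that matter for the pattern are the environment-derived name and the consul_keys resource that publishes it.

resource "aws_dynamodb_table" "main" {
  name           = "store-${data.consul_keys.environment.var.env_name}"
  hash_key       = "id"
  read_capacity  = 5
  write_capacity = 5

  attribute {
    name = "id"
    type = "S"
  }
}

# Publish the generated table name so the application servers can find it
# at runtime by reading Consul.
resource "consul_keys" "private" {
  key {
    path  = "private/store/dynamodb_table_name"
    value = "${aws_dynamodb_table.main.name}"
  }
}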

Here we create our DynamoDB table and publish its name in a new part of our configuration store. The private/ prefix is where we'll publish values that are used internally within each application. Only the application's own servers will access this table, so it's considered private. The "store" part of the private/store/ prefix is the name of this application; keeping each application's keys separate will help keep our configuration store organized and prevent unintentional conflicts.

It's worth noting that we don't necessarily need to write private values into Consul if they will only be used within Terraform. In this case we're writing the value because the application's code will read it from Consul at runtime, independently of Terraform, but it can also be useful to publish things here for human reference when exploring or debugging the system, highlighting the most useful information and avoiding the need to dig into all the raw details in the Terraform state.

Now we finally get to deploying the application code itself. Again I will reinforce that using AWS instances for deployment here is just an example, and this approach would be equally applicable to any other technology that can run application code. With that said, here's our server.tf:
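
Again, treat this as a sketch: security groups, health checks, IAM instance profiles and the like are omitted, and the instance type, scaling sizes and port numbers are arbitrary placeholders. It also assumes the environment publishes its subnet ids as comma-separated strings.

resource "aws_launch_configuration" "store" {
  name_prefix   = "store-${data.consul_keys.environment.var.env_name}-"
  image_id      = "${var.server_ami_id}"
  instance_type = "t2.small"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "store" {
  name                 = "store-${data.consul_keys.environment.var.env_name}"
  launch_configuration = "${aws_launch_configuration.store.name}"
  min_size             = 2
  max_size             = 4

  # The environment publishes its subnet ids, so the servers automatically
  # land in the right network for the target environment.
  vpc_zone_identifier = ["${split(",", data.consul_keys.environment.var.app_subnet_ids)}"]
  load_balancers      = ["${aws_elb.store.name}"]
}

resource "aws_elb" "store" {
  name    = "store-${data.consul_keys.environment.var.env_name}"
  subnets = ["${split(",", data.consul_keys.environment.var.elb_subnet_ids)}"]

  listener {
    instance_port     = 8080
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}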

Here we've established a set of servers running our code (using the EC2 auto-scaling feature) and a load balancer in front of them that gives us a fixed hostname where we can access the application. We use the environment configuration information to automatically create the resources in the appropriate subnets for the target environment, and we name the resources to include the environment name so they can be easily distinguished when viewing them outside of Terraform.

Our final task for this application is to announce its location to the other apps that will depend on it. Again we do this in Consul, via public.tf:
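
The key path and URL scheme are illustrative; the important part is that the published value is derived from the load balancer we just created.

resource "consul_keys" "public" {
  key {
    path  = "public/store/base_url"
    value = "http://${aws_elb.store.dns_name}/"
  }
}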

The public/ prefix in Consul is used to represent app-specific values that are intended for other applications to consume. Again the "store" part of public/store/ is the application's name, keeping each application's public settings separate from others.

Because we're going to deploy a single configuration multiple times, duplicating the resources in each environment, we can use Terraform's "State Environments" feature to create multiple distinct states for this configuration. We'll start by establishing a state for the QA environment, as shown below.

terraform env new QA

(This requires that you be using a Terraform backend that supports this feature which, at the time of writing, not all do. This situation should improve in future Terraform versions.)

Since this configuration has some variables, we'll need to provide some additional values when we ask Terraform to plan: the ID of the artifact produced by the most recent build run, and the address of the Consul cluster for the environment we're deploying into:

terraform plan \
    -var="environment_api_addr=10.1.2.32:8500" \
    -var="server_ami_id=ami-abc123" \
    -out=tfplan

The address 10.1.2.32 is standing in for a server where the Consul API for the QA environment can be reached. This address is what determines that the application will deploy into the QA environment rather than the production environment.

The rest of the lifecycle proceeds as normal. Once terraform apply has completed successfully the application should be running at a hostname that can be discovered from the Consul store.
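
In this case that just means handing the saved plan file to terraform apply:

terraform apply tfplan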

Once the deployment to QA is working as expected, we'll want to deploy to production too. To do this we'll need to create and switch to a separate state environment:

terraform env new PROD

We can then repeat the same lifecycle using the Consul API address from the production environment:

terraform plan \
    -var="environment_api_addr=10.1.3.12:8500" \
    -var="server_ami_id=ami-abc123" \
    -out=tfplan

Notice that the environment_api_addr value is now changed, with 10.1.3.12 standing in for a Consul server in production.

The same configuration is now deployed twice, with its entire infrastructure stack duplicated in each environment. Each will have its own separate pool of servers running inside the environment's network, and its own load balancer through which they can be accessed. The public/store prefix in each respective Consul store allows us to find the API load balancer for each environment.

The "editor" application

As discussed in part one, the "editor" part of the system is what authors and editors use to produce content in our hypothetical content management system. This application is one of two clients for the "store" API we deployed in the previous section.

This time our imaginary application will be an entirely browser-side application served as static files from Amazon S3's static website service, using the "store" API as its backend.

The directory structure for this one is quite similar to "store":

package.json
htdocs/
  app.js
  style.css
  index.html
build/
  build.sh
deploy/
  vars.tf
  app.tf
  public.tf

Once again we'll gloss over all of the hypothetical application code, since implementing a CMS UI is far out of scope. However, for this application we'll assume that the build.sh script produces just a directory containing index.html, app.min.js and style.min.css files derived from the files in htdocs. Our different choice of delivery infrastructure leads to a different type of artifact, but the principle of building an artifact for each application version remains the same.

Looking again into the deploy directory, our vars.tf is similar to that of the previous application:
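
As before this is a sketch; in particular the build_dir variable name and the exact Consul key paths are assumptions made for illustration.

variable "environment_api_addr" {
  description = "host:port address of the Consul API for the target environment"
}

variable "build_dir" {
  description = "Path to the local directory produced by build.sh"
}

provider "consul" {
  address = "${var.environment_api_addr}"
}

data "consul_keys" "environment" {
  key {
    name = "env_name"
    path = "config/env_name"
  }
  key {
    name = "aws_region"
    path = "config/aws_region"
  }
}

# Settings published by the "store" application's own deployment
data "consul_keys" "store" {
  key {
    name = "base_url"
    path = "public/store/base_url"
  }
}

# Per-environment settings placed in Consul directly by an operator
data "consul_keys" "editor" {
  key {
    name = "hostname"
    path = "config/editor/hostname"
  }
}

provider "aws" {
  region = "${data.consul_keys.environment.var.aws_region}"
}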

There are two new features in this configuration compared to the last.

First, we retrieve the information that was published by the "store" application as part of its deployment. This follows the same principle as retrieving the main environment configuration, and allows us to automatically discover the URL for the store API for the target environment.

Secondly, we retrieve the key config/editor/hostname, which we presume was put there directly by a human operator (or, alternatively, by some script that populates configuration into Consul). This allows us to vary the configuration slightly between environments, so that (for example) the production environment can use the hostname editor.example.com while our QA environment uses editor.qa.example.com.

Next we need to deploy the application itself. This looks a bit different than the previous example since we're deploying to S3, but the pattern still holds: the application's Terraform configuration is responsible for creating all of the infrastructure that is specific to that application, which this time is an S3 bucket and its contents:
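
The following sketch assumes the bucket is named after the hostname we retrieved (as S3 website hosting behind a custom domain requires), that DNS for that hostname is managed elsewhere, and that the store API's URL is handed to the browser code via a generated config.js file; your delivery details may well differ.

resource "aws_s3_bucket" "editor" {
  # S3 static website hosting behind a custom domain requires the bucket
  # name to match the hostname.
  bucket = "${data.consul_keys.editor.var.hostname}"
  acl    = "public-read"

  website {
    index_document = "index.html"
  }
}

# A small generated file tells the browser-side code where to find the
# "store" API in this environment.
resource "aws_s3_bucket_object" "config" {
  bucket       = "${aws_s3_bucket.editor.bucket}"
  key          = "config.js"
  content      = "var STORE_BASE_URL = \"${data.consul_keys.store.var.base_url}\";"
  content_type = "application/javascript"
  acl          = "public-read"
}

resource "aws_s3_bucket_object" "index" {
  bucket       = "${aws_s3_bucket.editor.bucket}"
  key          = "index.html"
  source       = "${var.build_dir}/index.html"
  content_type = "text/html"
  acl          = "public-read"
  etag         = "${md5(file("${var.build_dir}/index.html"))}"
}

resource "aws_s3_bucket_object" "app_js" {
  bucket       = "${aws_s3_bucket.editor.bucket}"
  key          = "app.min.js"
  source       = "${var.build_dir}/app.min.js"
  content_type = "application/javascript"
  acl          = "public-read"
  etag         = "${md5(file("${var.build_dir}/app.min.js"))}"
}

# ...plus an equivalent object for style.min.css and any other built files.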

Finally, we'll publish to Consul the URL at which the application is running. In this case there aren't yet any automated consumers of this information, but it is still helpful to humans that want to quickly find the application. This is once again done in public.tf:
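
Following the same convention as the store application, this sketch publishes under a public/editor/ prefix.

resource "consul_keys" "public" {
  key {
    path  = "public/editor/base_url"
    value = "http://${data.consul_keys.editor.var.hostname}/"
  }
}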

The two examples we've seen so far demonstrate the general structure for application configs. They have three main parts:

  • Retrieve configuration data, which might either be direct application configuration or just discovery of settings published by other system components.

  • Create the application's delivery infrastructure, wiring it up as necessary to other components using the configuration data.

  • Publish the public-facing endpoints to the resulting infrastructure for discovery by other applications and for easy reference by human operators.

The deployment process for this application follows the same steps as for the previous, aside from the artifact variable now being a directory on local disk. Once again we would deploy it separately to each environment, using the same Terraform configuration for both but varying based on what we discover in the environment's configuration store.

With two apps deployed, we have just one left to complete, which we will do (after a minor digression) in part four!