This is the fourth part of a multi-part article on the Terraform E+A pattern. If you're just joining us, I suggest starting from the beginning to understand the goals of this pattern and the terminology we're using to describe it.

In the previous part we deployed two of our three applications into the two environments that we created in part two. In this part we're first going to discuss a way to shrink the per-application configuration boilerplate and then use it to configure our third and final application.

Reducing Application Config Boilerplate

In our configurations for the "store" and "editor" applications, we ended up duplicating some boilerplate code to retrieve information about the shared environment infrastructure:
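
(Roughly like the following sketch; the variable name and the Consul key paths shown here are illustrative assumptions rather than the exact ones used earlier in this series.)

# Sketch of the boilerplate repeated in each application's vars.tf.
# Key names and paths are assumptions for illustration only.
variable "environment_name" {}

data "consul_keys" "env" {
  key {
    name = "aws_region"
    path = "env/${var.environment_name}/aws-region"
  }
  key {
    name = "vpc_id"
    path = "env/${var.environment_name}/vpc-id"
  }
  key {
    name = "subnet_ids"
    path = "env/${var.environment_name}/subnet-ids"
  }
  key {
    name = "consul_server_addrs"
    path = "env/${var.environment_name}/consul-server-addrs"
  }
}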

This works, but it's not ideal. This violation of the "don't repeat yourself" (DRY) principle means that each application depends directly on how this information is represented in the configuration store, so if those details change in future the same update must be made across many separate codebases.

Why might the details change? A pretty extreme example would be switching away from using Consul altogether: the AWS-related information here would remain relevant, but would need to be retrieved from some other data store.

We can use a shared Terraform module to hide these details and expose a well-defined interface to all of the application configurations. To do this, we would establish a new codebase, which I will call join-environment, containing a single file join.tf that holds the above configuration along with some additional variable and output declarations, as follows:
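
(A sketch of what join.tf might contain; the output names and Consul key paths are illustrative choices, not a fixed interface.)

# join.tf (sketch) -- reads environment settings from Consul and re-exports
# them as module outputs. Key paths and output names are assumptions.
variable "environment_name" {}

data "consul_keys" "env" {
  key {
    name = "aws_region"
    path = "env/${var.environment_name}/aws-region"
  }
  key {
    name = "vpc_id"
    path = "env/${var.environment_name}/vpc-id"
  }
  key {
    name = "subnet_ids"
    path = "env/${var.environment_name}/subnet-ids"
  }
  key {
    name = "consul_server_addrs"
    path = "env/${var.environment_name}/consul-server-addrs"
  }
}

output "aws_region" {
  value = "${data.consul_keys.env.var.aws_region}"
}

output "aws_vpc_id" {
  value = "${data.consul_keys.env.var.vpc_id}"
}

output "aws_subnet_ids" {
  value = "${data.consul_keys.env.var.subnet_ids}"
}

output "consul_server_addrs" {
  value = "${data.consul_keys.env.var.consul_server_addrs}"
}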

Here we're retrieving the data exactly the same way as before, but we're additionally exposing it via module outputs. In this case the mapping between the outputs and the Consul variables is straightforward, but you can make it as elaborate as you like. For example, it might be preferable to use the split function to return the Consul server addresses and subnet ids as lists rather than as space-separated strings, sparing every user of this data from splitting it themselves.
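
As a sketch of that variation (assuming the same illustrative key names as above), the subnet ids output could be reshaped inside the module so that callers never see the space-separated form:

output "aws_subnet_ids" {
  # split converts the space-separated string stored in Consul into a list
  value = "${split(" ", data.consul_keys.env.var.subnet_ids)}"
}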

Deploying the "renderer" Application

As we discussed in part one, the "renderer" application is the implementation of the public-facing portion of our example content management system where end-users can browse and read the content created by the site's editorial team.

For simplicity's sake we'll assume that the renderer is built in a very similar way to the "store" application, and thus has a very similar directory structure within its codebase:

app.js
package.json
build/
  build.sh
deploy/
  vars.tf
  server.tf
  public.tf

Once again we will gloss over the details of how the application works and how an artifact is built by build/build.sh, but we'll assume that this script produces an Amazon EC2 machine image (AMI) that we can deploy.

Now that we've written our join-environment module, the vars.tf for this application looks a little different than what we saw for the other applications:
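
(A sketch again: the module source address and the store key path below are placeholders for wherever those things live in your own setup.)

variable "environment_name" {}

module "environment" {
  # Placeholder source address; point this at the real join-environment codebase.
  source           = "git::https://example.com/join-environment.git"
  environment_name = "${var.environment_name}"
}

data "consul_keys" "store" {
  # Hypothetical key published by the "store" application for others to consume.
  key {
    name = "base_url"
    path = "app/store/${var.environment_name}/base-url"
  }
}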

In this new, sleeker version of vars.tf we've replaced the data "consul_keys" "env" block with an instance of the join-environment module. We no longer need to enumerate the locations of all of the configuration keys we want to use. Instead, the join-environment module exposes a set of outputs that explicitly specify which configuration settings are available, so we can reference them via the more straightforward syntax ${module.environment.aws_region}.

In this example we still have the inline data "consul_keys" "store" block for retrieving the configuration published by the "store" application. Since there's only one key here it would likely be overkill to create a separate module to abstract it, but it is certainly possible for each application to publish, within its own codebase, a Terraform module that exposes the application's public resources, providing similar benefits to those of the join-environment module.
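
For example, the "store" codebase could publish a small module of its own along these lines; the module contents and key path here are hypothetical:

# Hypothetical module published inside the "store" codebase, wrapping the
# Consul keys that "store" exposes for other applications to consume.
variable "environment_name" {}

data "consul_keys" "store" {
  key {
    name = "base_url"
    path = "app/store/${var.environment_name}/base-url"
  }
}

output "base_url" {
  value = "${data.consul_keys.store.var.base_url}"
}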

The remaining Terraform configuration files for this application are largely identical to those of the "store" application, apart from the references to the data.consul_keys.env data source, so we won't repeat them here. Of course, in practice, if you've gone to the trouble of creating a join-environment module you'll want to use it in all applications, so you'd likely go back and edit the other two applications to use this module in the same way.
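
Just to illustrate the kind of reference change involved, without repeating whole files: where the "store" configuration reads from the data source directly, the renderer reads from the module output instead, as in this sketch (attribute names match the earlier sketches, which are themselves assumptions):

provider "aws" {
  # Previously: region = "${data.consul_keys.env.var.aws_region}"
  region = "${module.environment.aws_region}"
}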

We're all done!

With our third and final application deployed, we've seen what it might look like to build out a system of three applications across two environments within the E+A pattern.

In the next part we'll recap what we've learned and discuss some desirable team dynamics that can result from using this pattern.