This is the fourth part of a multi-part article on the Terraform E+A pattern. If you're just joining us, I suggest starting from the beginning to understand the goals of this pattern and the terminology we're using to describe it.

In the previous part we deployed two of our three applications into the two environments that we created in part two. In this part we're first going to discuss a way to shrink the per-application configuration boilerplate and then use it to configure our third and final application.

Reducing Application Config Boilerplate

In our configurations for the "store" and "editor" applications, we ended up duplicating some boilerplate code to retrieve information about the shared environment infrastructure:

### Retrieve environment configuration
# We need to obtain the configuration for the target environment so we
# can get the ids for the shared AWS resources we'll use.

provider "consul" {
  address = "${var.environment_api_addr}"
}

data "consul_keys" "env" {
  key {
    name = "name"
    path = "environment/name"
  }
  key {
    name = "aws_region"
    path = "environment/aws_region"
  }
  key {
    name = "aws_vpc_id"
    path = "environment/aws_vpc_id"
  }
  key {
    name = "aws_subnet_ids"
    path = "environment/aws_subnet_ids"
  }
  key {
    name = "consul_server_addrs"
    path = "environment/consul_server_addrs"
  }
}

This works, but it's not ideal. This violation of the "don't repeat yourself" (DRY) principle means that each application depends directly on the details of how this information is represented in the configuration store, so the same boilerplate must be updated across many separate codebases should those details change in future.

Why might the details change? A pretty extreme example would be switching away from using Consul altogether: the AWS-related information here remains relevant, but would be retrieved from some other data store.

We can use a shared Terraform module to hide these details and expose a well-defined interface to all of the application configurations. To do this, we establish a new codebase, which I will call join-environment, containing a single file join.tf with the above configuration along with some additional variable and output declarations, as follows:

variable "environment_api_addr" {}

provider "consul" {
  address = "${var.environment_api_addr}"
}

data "consul_keys" "env" {
  key {
    name = "name"
    path = "environment/name"
  }
  key {
    name = "aws_region"
    path = "environment/aws_region"
  }
  key {
    name = "aws_vpc_id"
    path = "environment/aws_vpc_id"
  }
  key {
    name = "aws_subnet_ids"
    path = "environment/aws_subnet_ids"
  }
  key {
    name = "consul_server_addrs"
    path = "environment/consul_server_addrs"
  }
}

output "name" {
  value = "${data.consul_keys.env.var["name"]}"
}
output "aws_region" {
  value = "${data.consul_keys.env.var["aws_region"]}"
}
output "aws_vpc_id" {
  value = "${data.consul_keys.env.var["aws_vpc_id"]}"
}
output "aws_subnet_ids" {
  value = "${data.consul_keys.env.var["aws_subnet_ids"]}"
}
output "consul_server_addrs" {
  value = "${data.consul_keys.env.var["consul_server_addrs"]}"
}

Here we're retrieving the data exactly as before, but we're additionally exposing it via module outputs. In this case the mapping between the outputs and the Consul keys is straightforward, but you can make it as elaborate as you like. For example, it might be preferable to use the split function to return the Consul server addresses and subnet ids as lists rather than as delimited strings, thus avoiding the need for every consumer of this data to split it themselves.
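As a sketch of that list-returning variant (assuming the values are stored in Consul as space-separated strings; adjust the separator to match however your environment configuration actually writes them), the last two outputs might become:

```hcl
# Hypothetical list-valued outputs: split the space-separated strings
# stored in Consul so that callers receive ready-to-use lists.
output "aws_subnet_ids" {
  value = "${split(" ", data.consul_keys.env.var["aws_subnet_ids"])}"
}
output "consul_server_addrs" {
  value = "${split(" ", data.consul_keys.env.var["consul_server_addrs"])}"
}
```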

Deploying the "renderer" Application

As we discussed in part one, the "renderer" application is the implementation of the public-facing portion of our example content management system where end-users can browse and read the content created by the site's editorial team.

For simplicity's sake we'll assume that the renderer is built in a very similar way to the "store" application, and thus has a very similar directory structure within its codebase:

app.js
package.json
build/
  build.sh
deploy/
  vars.tf
  server.tf
  public.tf

Once again we will gloss over the details of how the application works and how an artifact is built by build/build.sh, but we'll assume that this script produces an AWS EC2 machine image (AMI) that we can deploy.

Now that we've written our join-environment module, the vars.tf for this application looks a little different than what we saw for the other applications:

### VARIABLES
# To deploy we need to know two things: which artifact(s) are we deploying
# and which environment are we deploying to?
#
# For this example the environment is specified as the address where its
# Consul API is available, since that's the information we need to find
# the remaining configuration.
#
# The artifact itself is an AMI, assumed to be in the same region where
# this environment's infrastructure is deployed.
variable "environment_api_addr" {}
variable "server_ami_id" {}

provider "consul" {
  address = "${var.environment_api_addr}"
}

### Retrieve environment configuration
# We need to obtain the configuration for the target environment so we
# can get the ids for the shared AWS resources we'll use.

module "environment" {
  source = "github.com/examplecorp/join-environment"

  environment_api_addr = "${var.environment_api_addr}"
}

### Retrieve "store" application configuration

data "consul_keys" "store" {
  key {
    name = "api_base_url"
    path = "public/store/api_base_url"
  }
}

### Remaining provider configuration

provider "aws" {
  region = "${module.environment.aws_region}"
}

In this new, sleeker version of vars.tf we've replaced the data "consul_keys" "env" block with an instance of the join-environment module. We no longer need to enumerate the locations of all of the configuration keys we want to use. Instead, the join-environment module exposes a set of outputs that explicitly declare which configuration settings are available, so we can reference them via the more straightforward syntax ${module.environment.aws_region}.
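Since the module now lives in a separate codebase, it can be useful to pin each application to a specific revision using the ?ref query parameter that Terraform supports on git-based module sources, so that future changes to the module's interface are adopted deliberately rather than implicitly. A sketch, with a hypothetical tag name:

```hcl
module "environment" {
  # Pin to a tagged release of the join-environment module so that
  # each application upgrades to interface changes explicitly.
  source = "github.com/examplecorp/join-environment?ref=v1.0.0"

  environment_api_addr = "${var.environment_api_addr}"
}
```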

In this example we still have the inline data "consul_keys" "store" block for retrieving configuration published by the "store" application. Since there's only one key here it would likely be overkill to create a Terraform module to abstract it, but each application could certainly publish, within its own codebase, a Terraform module exposing its public resources, yielding similar benefits to the join-environment module.
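To sketch what that might look like (this module is hypothetical; the store codebase as described so far doesn't include it), the store application could keep a small "public interface" module alongside its other code:

```hcl
# Hypothetical module the "store" application could publish from its
# own codebase, wrapping the Consul keys it exposes to other apps.
variable "environment_api_addr" {}

provider "consul" {
  address = "${var.environment_api_addr}"
}

data "consul_keys" "store" {
  key {
    name = "api_base_url"
    path = "public/store/api_base_url"
  }
}

output "api_base_url" {
  value = "${data.consul_keys.store.var["api_base_url"]}"
}
```

Consumers like the renderer would then reference ${module.store.api_base_url} rather than knowing the Consul key path directly.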

The remaining Terraform configuration files for this application are largely identical to those of the "store" application, apart from the references to the data.consul_keys.env resource, so we won't repeat them here. Of course, in practice, if you have gone to the trouble of creating a join-environment module you will want to use it in all applications, so you'd likely go back and edit the other two applications to use this module in a similar way.
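Retrofitting the other applications is mostly a mechanical substitution of module outputs for data source references. A hypothetical excerpt (the resource shown is illustrative, not taken from the earlier configurations):

```hcl
# Before: direct dependency on the Consul key layout
#   vpc_id = "${data.consul_keys.env.var["aws_vpc_id"]}"
# After: reference the join-environment module's output instead
resource "aws_security_group" "server" {
  name   = "renderer-server"
  vpc_id = "${module.environment.aws_vpc_id}"
}
```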

We're all done!

With the deployment of our third and final application, we've seen what it might look like to deploy a system consisting of three applications across two environments within the E+A pattern.

In the next part we'll recap what we've learned and discuss some desirable team dynamics that can result from using this pattern.