This is a supplemental part in a series of articles on the Terraform E+A pattern. If you've landed here, it's recommended to read from the beginning to understand the purpose and context.

In this extra article we'll discuss some further optional extensions to the E+A pattern that may be useful either as more concrete examples of its application or to provide additional capabilities.

The Environment Domain Pattern

The E+A pattern is built on the concept of an environment as a container where applications can be deployed in a way that allows them to discover each other and communicate.

A great, low-ceremony way to allow one system to find another is to publish service addresses in DNS. If you're using HashiCorp Consul then you can get this largely for free using its built-in DNS server, but even if you're not you can use one of the several DNS service providers in Terraform to publish information in DNS.

A good pattern here is to establish a separate DNS zone for each environment. These might be subdomains of your application or corporate domain, or something entirely internal. For example, we might use the aws_route53_zone resource to establish a DNS zone for our QA environment:

resource "aws_route53_zone" "env" {
  # Hypothetical zone name for the QA environment; in practice this
  # would be a subdomain of your application or corporate domain.
  name = "qa.example.com"
}

We can publish the domain name and zone_id of this resource into our shared data store along with all of the other shared environment infrastructure. Then each application can create its own records, named after its codebase, within that domain. For example, in our store application:

resource "aws_route53_record" "main" {
  zone_id = "${module.environment.route53_zone_id}"
  name    = "store.${module.environment.domain_name}"
  type    = "A"

  alias {
    name                   = "${aws_elb.server.dns_name}"
    zone_id                = "${aws_elb.server.zone_id}"
    evaluate_target_health = false
  }
}

After being deployed to both of our example environments the application can then be found at a predictable address in each environment's zone: store. followed by that environment's domain name.
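Publishing the zone details into the shared data store can be sketched with the Consul provider's consul_keys resource, assuming the Consul-based store from the earlier articles in this series; the key paths here are illustrative, not prescribed:

```hcl
# Runs in the environment configuration, alongside the zone resource.
# The "env/qa/..." key paths are hypothetical; use whatever layout the
# rest of your configurations expect.
resource "consul_keys" "env_dns" {
  key {
    name  = "domain_name"
    path  = "env/qa/domain_name"
    value = "${aws_route53_zone.env.name}"
  }

  key {
    name  = "route53_zone_id"
    path  = "env/qa/route53_zone_id"
    value = "${aws_route53_zone.env.zone_id}"
  }
}
```

Each application's join-environment module can then read these keys back with the corresponding consul_keys data source.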

Through systematic assignment of environment domain names, and of the hostnames within each of these zones, we can also clean up a wart in the mechanism by which applications join the environment. In our earlier examples the application configs had a variable environment_api_addr which defined the address at which a Consul server could be reached in the target environment. This means that in order to deploy an application we must first look up the IP address of one of the Consul servers. If we instead publish the Consul server addresses in DNS at consul.${module.environment.domain_name} then we can derive what we need to know given just the environment's name:

variable "environment_name" {
  description = "Name of the environment to deploy the application into"
}

module "environment" {
  # Hypothetical path to the shared join-environment module
  source = "../modules/join-environment"

  environment_name = "${var.environment_name}"
}

provider "consul" {
  address = "${module.environment.consul_address}"
}

We can then target a particular environment just by naming it on the command line:

terraform plan -var="environment_name=QA"

The join-environment module can now infer the Consul address as (for example) "consul.${lower(var.environment_name)}" and thus completely encapsulate all of the details of how the environment resources are located given an environment name.
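A minimal sketch of how the join-environment module might derive these values, assuming example.com as the parent domain (that suffix is an assumption for illustration, not part of the pattern itself):

```hcl
variable "environment_name" {
  description = "Name of the environment to deploy the application into"
}

# Derive the environment's DNS names purely from its name, so callers
# need to supply nothing else. The example.com suffix is illustrative.
output "domain_name" {
  value = "${lower(var.environment_name)}.example.com"
}

output "consul_address" {
  value = "consul.${lower(var.environment_name)}.example.com"
}
```

With this in place, the provider configuration shown above needs only the environment name to reach the right Consul cluster.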

Tag-based Discovery Pattern

In the primary examples in this series we used Consul as an example data store that can be used to pass information from one Terraform configuration to another. Another way to achieve this is to use tagging features built in to the target platform, treating the resources themselves as the data store via Terraform data sources.

At present the AWS provider has the most comprehensive support for this both due to AWS's consistent support of tags across most of its services and due to the growing set of Terraform data sources that can retrieve resources using these tags.

For example, with appropriate tagging on the network resources created within the environment configurations, our join-environment module can find the VPC id without the need for a secondary data store:

variable "environment_name" {}

data "aws_vpc" "env" {
  tags {
    Environment = "${var.environment_name}"
  }
}

output "aws_vpc_id" {
  value = "${data.aws_vpc.env.id}"
}
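For this to work, the environment configuration must apply the matching tag when it creates the network. A minimal sketch, with an illustrative address range:

```hcl
# In the environment configuration: tag the VPC so that the
# join-environment module's aws_vpc data source can find it.
resource "aws_vpc" "main" {
  cidr_block = "10.1.0.0/16" # illustrative address range

  tags {
    Environment = "${var.environment_name}"
  }
}
```

The same Environment tag can be applied consistently across all taggable resources in the environment, making each of them discoverable by the same filter.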

This pattern is not universally applicable at present due to some limitations of the available data sources. In particular there is not a general mechanism for retrieving multiple resources that match a given set of filters, so we can't currently use the above pattern to find all of the subnets of a given VPC. This will hopefully improve in future versions of Terraform as the data source capabilities get stronger.

There is also the tradeoff here that while this avoids running a separate data store it effectively distributes the data widely across the various services in use, making it hard to get a holistic view of all of the data and make use of it via non-service-specific tools such as consul-template.

As the set of data sources grows across all providers, and as Terraform core gets more robust capabilities for data sources that return multiple results, this pattern will become more applicable across different backend services. However, a general data store such as Consul remains the most universally-applicable approach.