Creating a Multi-Cloud Terraform Environment with Remote State Backends and AWS S3

Illustration of a stylized character with pointy ears and a playful expression, seated and holding a magic wand, surrounded by a swirling cosmic background with stars and sparkles.

Intro

In this lesson, we are going to learn how to use Terraform in a multi-cloud environment.

We will also learn how to connect one Terraform environment to another by using remote state management.

But first, let's figure out what a multi-cloud environment is. It is also sometimes referred to as a hybrid-cloud environment.

As we discussed in the first lesson, modern infrastructure has gone far beyond simple servers.

Every major cloud provider offers dozens of services: database as a service, load balancer as a service, analytics as a service, and so on.

But not every cloud provider has all the services you need.

Also, similar services from two different cloud providers can vary in performance and features.

At some point, you can decide to use Active Directory service from Azure, object storage service from Google Cloud Platform and serverless functions from Amazon Web Services.

But managing even one cloud can be a mess. Add a couple more on top of that, and you could be heading for disaster.

This is where Terraform is especially useful. It has dozens of providers, and you can use as many of them as you want within the same Terraform environment.

Resources from one cloud provider can have dependencies with resources from another provider.

The dependency graph we discussed in the second part of the course does not care how many clouds you need. Fundamental Terraform concepts work regardless of the specifics of a particular provider.

The first provider we've used is Packet Cloud. In this video, we are adding a second one: Amazon Web Services. We are going to use AWS Route53, Amazon's DNS service.

I've mentioned that we will also learn how to connect one Terraform environment to another.

When I say "Terraform environment" I mean all resources managed by a single set of templates, resulting in a single state file.

Coding

We will start by creating the second Terraform environment.

I've prepared most of the code in advance.

In this file, you can see that we are creating a Route53 hosted zone for the labs.mkdev.me subdomain and an NS record. The NS record references the base hosted zone, which is pulled in by the data resource at the beginning of the file.

Screenshot of a code editor displaying Terraform configuration for AWS Route 53 resources with a DNS zone and name servers setup.
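The configuration on screen roughly corresponds to the following sketch. The resource names and the base zone lookup are assumptions reconstructed from the description above, not a verbatim copy of the file:

```hcl
# Assumed reconstruction of the file shown in the screenshot.
# Pull in the existing base hosted zone.
data "aws_route53_zone" "base" {
  name = "mkdev.me."
}

# The new hosted zone for the subdomain.
resource "aws_route53_zone" "labs" {
  name = "labs.mkdev.me"
}

# Delegate the subdomain by creating an NS record in the base zone.
resource "aws_route53_record" "labs_ns" {
  zone_id = data.aws_route53_zone.base.zone_id
  name    = "labs.mkdev.me"
  type    = "NS"
  ttl     = "300"
  records = aws_route53_zone.labs.name_servers
}
```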

Notice that I am not configuring AWS provider here. Terraform is smart enough to automatically discover default locations of AWS credentials, which I already have set on my laptop.
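By default, Terraform's AWS provider walks the standard credential chain: environment variables, the shared credentials file (~/.aws/credentials), and so on. If you prefer to be explicit, you can add a provider block; the profile name below is an illustrative assumption, not something from the lesson:

```hcl
# Optional: explicit provider configuration instead of relying on
# the default credential discovery. The profile name is hypothetical.
provider "aws" {
  region  = "eu-central-1"
  profile = "mkdev"
}
```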

There are just two things I need to add to this template.

The first one is a Terraform state backend.

Previously, we used the default local file backend, which means that our state file was stored on the same machine where we ran Terraform. It's very easy to lose a local file, so at the very least we should push it to a git repository. But there are better options: remote state backends.

Terraform supports many backend types, including a number of databases and object storages. We will use AWS S3 as a cheap and reliable option. I've already created the S3 bucket in advance. Keep in mind that it's best practice to enable versioning for S3 buckets that store Terraform state files.
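If the bucket itself were managed by Terraform, enabling versioning could look like the sketch below. This is not part of the lesson's templates (the author created the bucket in advance), and it uses the inline versioning block from the AWS provider versions current at the time:

```hcl
# Sketch only: a versioned S3 bucket for Terraform state files.
# Versioning lets you recover earlier revisions of the state.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "mkdev-terraform"

  versioning {
    enabled = true
  }
}
```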

Inside the backend section I need to specify the bucket name, the key (which is more or less a file name) and an AWS region. Other backend types have different configuration options.

terraform {
  backend "s3" {
    bucket = "mkdev-terraform"
    region = "eu-central-1"
    key    = "globals.tfstate"
  }
}

The final change I need to make here is to add an output with the hosted zone id in it. We will see why in a bit.

output "zone_id" {
  value = aws_route53_zone.labs.zone_id
}

Let me apply this template to provision all the resources.

A computer terminal screen showing the execution of Terraform commands for provisioning AWS resources with output indicating successful creation.

Let me also run terraform fmt to properly format the template. Check out our Tips and Tricks Videos to learn more about this command.

If I check the local directory I won't find the state file. But if I run terraform state list, I can still see all the resources. Instead of working with a local file, Terraform gets the state from the S3 bucket.

A screenshot of a computer terminal with text indicating commands related to Terraform, an infrastructure as code tool, primarily showing interactions with AWS Route53 service.

Now let's get back to our original template, the one that creates a server on the Packet Cloud.

What I want to do is create a new DNS record for the server.

To do this, I am adding a new resource aws_route53_record.

resource "aws_route53_record" "dns" {
  zone_id = ""
  name    = "mkdev-${var.environment}.labs.mkdev.me"
  type    = "A"
  ttl     = "300"
  records = [packet_device.test.access_public_ipv4]
}

I am going to set some meaningful DNS name and reference the public IP of the Packet server, essentially adding a dependency between two resources from two different cloud providers.

To finish creating the DNS record, I need to specify the hosted zone id. The hosted zone id is stored in a different state file, managed by a different Terraform template. But it's not a problem, because I can use the terraform_remote_state data resource.

This resource is able to find the remote state and make resource attributes and outputs of the remote state available inside the template we are working with.

data "terraform_remote_state" "globals" {
  backend = "s3"
  config = {
    bucket = "mkdev-terraform"
    region = "eu-central-1"
    key = "globals.tfstate"
  }
}

I need to specify the s3 backend, the bucket name, the object key and the AWS region - this configuration is identical to the backend configuration we set before.

Now let's reference the output of remote state inside the route53 record resource.

data.terraform_remote_state.globals.outputs.zone_id
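Putting the pieces together, the record resource with the remote state reference in place looks like this:

```hcl
resource "aws_route53_record" "dns" {
  # The hosted zone id comes from the other environment's state file.
  zone_id = data.terraform_remote_state.globals.outputs.zone_id
  name    = "mkdev-${var.environment}.labs.mkdev.me"
  type    = "A"
  ttl     = "300"
  records = [packet_device.test.access_public_ipv4]
}
```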

Finally, let's try to apply the template.

Screenshot of a computer terminal with error messages related to Terraform, indicating issues with plugin requirements and provider initialization for AWS.

Oops, something is missing. Terraform needs to be re-initialized so that it pulls the AWS provider for this environment. It automatically determines that the AWS provider is needed just by checking our resources.
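The fix is a single command; a sketch of what happens, with behavior summarized rather than quoted verbatim:

```
terraform init    # downloads the AWS provider plugin alongside the Packet one
terraform apply   # can now plan and create the Route53 record
```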

Let's try to apply the template again. I will just keep typing the region interactively for now. Notice the zone id here, pulled from the remote state file.

Terminal screen showing the execution of a Terraform apply command with progress logs indicating the creation of resources and a final output with a public IP address.

It worked just fine! Let's check if the DNS record is in place.

A screenshot of a computer terminal with code output showing AWS resource creation statuses and the result of a DNS lookup using the 'dig' command.

We have a server inside Packet Cloud and a DNS record for this server inside AWS Route53. Not only have we prepared a multi-cloud Terraform template, we've also learned how to connect multiple Terraform environments together.

That's it for this article. In the next lesson, we will learn how to use Terraform modules.



Here's the same article in video form for your convenience: