Deploy AWS VPC Site with Terraform

Objective

This guide provides instructions on how to create a site using Terraform and deploy it to Amazon Web Services (AWS). For more information on sites, see F5 Distributed Cloud Site. This procedure offers two primary deployment methods. The first is the standard option, which uses F5 as the Terraform provider. The second uses AWS as the Terraform provider instead of F5; it offers more deployment customization and more control over the objects orchestrated within the AWS VPC, which lets you address more diverse use cases.

For more information about F5 as a provider, see Terraform.

You can deploy an F5® Distributed Cloud Services Site using one of the following methods:


Deploy Site Using F5 Distributed Cloud Services as Provider

Prerequisites for F5 as Provider

  • A Distributed Cloud Services Account. If you do not have an account, see Create an Account.

  • Resources required per node: Minimum 4 vCPUs and 14 GB RAM.

  • If an Internet Gateway (IGW) is attached to the VPC, at least one route in any of the VPC's route tables must point to the IGW.

  • Subnets that will be used for Site Local Outside (SLO), Site Local Inside (SLI), and workloads must not have a pre-existing route table association.

  • Internet Control Message Protocol (ICMP) must be open between the CE nodes on the Site Local Outside (SLO) interfaces. This is required for intra-cluster communication checks.

  • UDP port 6080 must be open between all nodes of the site.

Deploy to AWS

Step 1: Confirm Terraform is installed.

In a terminal, enter terraform version. If you need to install Terraform, follow the instructions in the official guide.
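For example, a quick check might look like the following. The version shown is only illustrative; any release that satisfies the required_version constraint in the template (>= 0.13.1) will work:

terraform version
# Example output (your version and platform will differ):
# Terraform v1.5.7
# on linux_amd64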

Step 2: Create API credentials file.

Log in to Console, create an API P12 certificate file, and then download it. Use the instructions at Credentials for more help.

Step 3: Create a deployment directory.

Create a new directory on your system to place files for deployment.

Step 4: Create the deployment file.
  • Create a file named main.tf and place it in the newly created directory.

  • Copy and paste the following information into the file:

terraform {
  required_version = ">= 0.13.1"

  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
  }
}

variable "site_name" {}

variable "aws_access_key" {}

variable "b64_aws_secret_key" {}

variable "aws_region" {
  default = "us-east-2"
}

variable "aws_vpc_cidr" {
  default = "192.168.0.0/20"
}

variable "aws_az" {
  default = "us-east-2a"
}

variable "outside_subnet_cidr_block" {
  default = "192.168.0.0/25"
}

resource "volterra_cloud_credentials" "aws_cred" {
  name      = format("%s-cred", var.site_name)
  namespace = "system"
  aws_secret_key {
    access_key = var.aws_access_key
    secret_key {
      clear_secret_info {
        url = format("string:///%s", var.b64_aws_secret_key)
      }
    }
  }
}

resource "volterra_aws_vpc_site" "site" {
  name       = var.site_name
  namespace  = "system"
  aws_region = var.aws_region
  ssh_key    = "ssh-rsa XXXX"
  aws_cred {
    name      = volterra_cloud_credentials.aws_cred.name
    namespace = "system"
  }
  instance_type = "t3.xlarge"
  vpc {
    new_vpc {
      name_tag     = var.site_name
      primary_ipv4 = var.aws_vpc_cidr
    }
  }
  ingress_gw {
    aws_certified_hw = "aws-byol-voltmesh"
    az_nodes {
      aws_az_name = var.aws_az
      disk_size   = 20
      local_subnet {
        subnet_param {
          ipv4 = var.outside_subnet_cidr_block
        }
      }
    }
  }
  logs_streaming_disabled = true
  no_worker_nodes         = true
}

resource "volterra_tf_params_action" "apply_aws_vpc" {
  site_name        = volterra_aws_vpc_site.site.name
  site_kind        = "aws_vpc_site"
  action           = "apply"
  wait_for_action  = true
  ignore_on_update = true
}
        
  • Open the file and configure any necessary fields. The example above is for an ingress gateway site. You can change the parameters for your particular setup.

  • Save the changes and then close the file.

Step 5: Create file for variables.
  • In the same directory, create another file for variables and name it terraform.tfvars.

  • Create and assign the following variables:

    • For your site name, type a name within double quotes: site_name = "<site-name>"

    • For the AWS region, type the name within double quotes: aws_region = "<region>"

    • For the AWS availability zone, type the name within double quotes: aws_az = "<az-name>"

site_name = "<site-name>"
aws_region = "<region>"
aws_az = "<az-name>"
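
For example, a filled-in terraform.tfvars might look like the following. The values here are illustrative only; the region and availability zone match the defaults in main.tf, and the site name is a placeholder that you should replace with your own:

site_name = "aws-tf-demo-site"
aws_region = "us-east-2"
aws_az = "us-east-2a"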
        
Step 6: Create and export variables for credentials and secret keys.
  • In the terminal, create and export the following variables:

    • Create this variable and assign it your API credentials password: export VES_P12_PASSWORD=<credential password>

    • Create this variable and assign it the path to the API credential file previously created and downloaded from Console: export VOLT_API_P12_FILE=<path to your local p12 file>

    • Create this variable and assign it the URL for your tenant. For example: export VOLT_API_URL=https://example.console.ves.volterra.io/api

    • Create this variable and assign it your AWS secret key that has been encoded with Base64: export TF_VAR_b64_aws_secret_key=<base64 encoded value>

    • Create this variable and assign it your AWS access key: export TF_VAR_aws_access_key=<access key>

Note: You can also create and save these variables in the terraform.tfvars file. However, this may pose a security risk. Use caution when working with your credentials and secret keys.

export VES_P12_PASSWORD=<credential password>
export VOLT_API_P12_FILE=<path to your local p12 file>
export VOLT_API_URL=https://example.console.ves.volterra.io/api
export TF_VAR_b64_aws_secret_key=<base64 encoded value>
export TF_VAR_aws_access_key=<access key>
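
If your AWS secret key is not already Base64-encoded, you can encode it while exporting the variable. This is a minimal sketch; the placeholder must be replaced with your actual secret key, and printf is used instead of echo to avoid including a trailing newline in the encoded value:

# Encode the AWS secret key and export it in one step (placeholder shown)
export TF_VAR_b64_aws_secret_key=$(printf '%s' '<aws-secret-key>' | base64)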
        
Step 7: Initiate Terraform process.

Enter terraform init.

Step 8: Apply Terraform process.
  • Enter terraform apply.

  • If prompted for the access key and the Base64-encoded secret key, enter both.

  • Enter yes to confirm. This may take a few minutes to complete. After the process is complete, the output will state Apply complete!.

  • In Console, navigate to the list of sites and confirm the site was applied.

Destroy Site

Perform the following procedure to destroy the site using Terraform:

  • Enter terraform destroy.

  • If prompted for the access key and the Base64-encoded secret key, enter both.

  • Enter yes to confirm. This may take a few minutes to complete. After the process is complete, the output will state Destroy complete!.

  • In Console, navigate to the list of sites and confirm the site was destroyed.


Deploy Site Using Manual Mode with AWS as Terraform Provider

The F5 Distributed Cloud Services automation feature provides great value by performing life cycle management of the customer edge (CE) instances and orchestrating other cloud resources, making it simple to connect and secure workloads across regions and clouds without deep knowledge of public cloud networking constructs. However, many brownfield environments require the CEs to be deployed in a highly customized environment that may be managed by other CI/CD automation tools. In addition, some enterprises must refrain from configuring cloud credentials in their F5 Distributed Cloud Services Console because of company security regulations.

These use cases require customers to deploy the CE manually or with an automation tool of their choice. Distributed Cloud Services supports manual CE deployments and provides public Terraform templates that use the public cloud’s Terraform provider to deploy the CE instances and configure related network objects. This gives you the flexibility to customize the templates to suit your requirements in a brownfield or greenfield environment, and it lets you keep the cloud credentials on your local machine.

Community Support for Terraform

Terraform templates and examples hosted on F5 DevCentral help customers get started quickly and easily, without needing to know the various configurations required to deploy the CE on different public clouds.

The code in these repositories is community-supported and is not supported under the F5 Networks enterprise license.

Important: For support with Terraform templates, open a GitHub issue in the GitHub repository that you used to deploy your site.

For troubleshooting issues when deploying with AWS as a Terraform provider, see the Troubleshooting Manual Site Deployment Registration Issues guide.

Any issues encountered in the code will be addressed on a best-effort basis by the F5 Distributed Cloud Services solutions team, adhering to the community support SLAs.

Customers can work with the account team to get new topologies verified and added to the Terraform templates.

Prerequisites for AWS as Provider

  • A Distributed Cloud Services Account. If you do not have an account, see Create an Account.

  • Resources required per node: Minimum 4 vCPUs and 14 GB RAM.

  • If an Internet Gateway (IGW) is attached to the VPC, at least one route in any of the VPC's route tables must point to the IGW.

  • Internet Control Message Protocol (ICMP) must be open between the CE nodes on the Site Local Outside (SLO) interfaces. This is required for intra-cluster communication checks.

  • UDP port 6080 must be open between all nodes of the site (see the example security group rules after this list).

  • For the full list of prerequisites, see Requirements.
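
If you manage the security groups yourself instead of letting the template create them, the ICMP and UDP 6080 requirements above can be expressed with the AWS Terraform provider roughly as follows. This is a hedged sketch, not part of the published templates: aws_security_group.slo is a placeholder for whatever security group is attached to the CE SLO interfaces in your environment.

# Allow all ICMP between CE nodes that share the SLO security group
resource "aws_security_group_rule" "slo_icmp" {
  type              = "ingress"
  protocol          = "icmp"
  from_port         = -1
  to_port           = -1
  self              = true
  security_group_id = aws_security_group.slo.id
}

# Allow UDP port 6080 between all nodes of the site
resource "aws_security_group_rule" "slo_udp_6080" {
  type              = "ingress"
  protocol          = "udp"
  from_port         = 6080
  to_port           = 6080
  self              = true
  security_group_id = aws_security_group.slo.id
}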

Procedure

Use the following link to access the Terraform templates GitHub repository for the respective public cloud provider:

Note: The Terraform templates use the Terraform provider for the public cloud where the CE is to be deployed. They do not use the Distributed Cloud Services Terraform provider to create the CE. The Distributed Cloud Services Terraform provider is used only to create a secure mesh site and to accept the registration for the CE.

Each repository has a README page that lists the verified topologies and a link to the example template directory for the public cloud provider.

Each example directory also has a README file explaining the environment variables and user/environment-specific files required by the template. Before applying the Terraform template, you must also update the terraform.tfvars file in the directory with details specific to your use case/environment.

The table below provides some common variables and their expected values.

| Variable | Value | Details |
| --- | --- | --- |
| f5xc_tenant | Tenant ID | Copy the tenant ID from Administration > Tenant Settings > Tenant Overview > Tenant Information > Tenant ID in the F5XC Console. For example: acmecorp-pxnxjsph |
| f5xc_api_url | https://<tenant-domain>/api | For example: https://acmecorp.console.ves.volterra.io/api |
| f5xc_namespace | system | Do not change this. |
| f5xc_api_p12_file | Path to the API certificate file. | If the API certificate file is in the example directory, you can enter just the file name, as Terraform searches the current directory by default. |
| ssh_public_key_file | Path to the SSH public key file. | Keep the corresponding private key, as it is required to log in to the CE using SSH if you need to debug any issues. |
| f5xc_ce_gateway_type | ingress_egress_gateway, ingress_gateway, or voltstack_gateway | ingress_egress_gateway: 2-NIC Mesh CE. ingress_gateway: 1-NIC Mesh CE. voltstack_gateway: 1-NIC AppStack CE. |
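
As an illustration, a terraform.tfvars for one of these example templates might look like the following. This is a hedged sketch: the exact variable set depends on the template you choose, and the certificate file name, SSH key path, and gateway type shown here are placeholders. The tenant ID and API URL reuse the example values from the table above:

f5xc_tenant          = "acmecorp-pxnxjsph"
f5xc_api_url         = "https://acmecorp.console.ves.volterra.io/api"
f5xc_namespace       = "system"
f5xc_api_p12_file    = "api-creds.p12"
ssh_public_key_file  = "~/.ssh/id_rsa.pub"
f5xc_ce_gateway_type = "ingress_egress_gateway"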

To create a service credential, see the Service Credentials guide.

Note: All manually created CE sites must be registered as a secure mesh site. This can be done by creating a secure mesh site with the same name as configured on the CE before accepting the registration. See Create Secure Mesh Site for more details. The Terraform templates described above automatically create a secure mesh site for the CE.


Concepts