Create AWS Site

Objective

This guide provides instructions on how to create a site using F5® Distributed Cloud Console (Console) and deploy it to Amazon Web Services (AWS). For more information on sites, see F5 Distributed Cloud Site.

You can deploy an F5® Distributed Cloud Services Site to AWS using one of the following methods:

  • Deploy using Console: Create the AWS VPC site object in Console and let Distributed Cloud Services deploy it automatically (see Deploy Using Console).

  • Deploy using Terraform: Create and apply the site using the volterra Terraform provider (see Deploy Site Using Terraform).

Using the instructions provided in this guide, you can deploy an ingress gateway site or an ingress/egress gateway site. For more information, see Network Topology of a Site.


Design

An AWS Virtual Private Cloud (VPC) site automates the deployment of sites in AWS. As part of the AWS VPC site configuration, you can indicate that a new VPC, subnets, and route tables need to be created, or you can specify existing VPC and subnet information. If you specify existing VPC and subnet information, creation of the VPC and subnet resources is skipped.

Note: By default, a Site deployed in AWS supports Amazon Elastic Block Store (EBS). You can configure storage within the Site creation form or using a Fleet. This document provides steps to configure storage using the Site configuration form. See Configure Storage in Fleet document for more information on using the Fleet method.

AWS VPC Site Deployment Types

A site can be deployed in three different modes:

  1. Ingress Gateway (One Interface): In this deployment mode, the site is attached to a single VPC and a single subnet. It can provide discovery of services and endpoints reachable from this subnet to any other site configured in the customer tenant.

  2. Ingress/Egress Gateway (Two Interfaces): In this deployment mode, the site is attached to a single VPC with at least two interfaces on different subnets. One subnet is labeled as Outside, and the other as Inside. In this mode, the site provides security and connectivity for VMs and subnets that use the site's inside interface as their default gateway.

  3. F5® Distributed Cloud App Stack Cluster (App Stack) (One Interface): The F5® Distributed Cloud Mesh (Mesh) deployment and configuration of this site is identical to Ingress Gateway (One Interface). The difference is that the Certified Hardware Type is aws-byol-voltstack-combo, which configures and deploys an instance type that allows the site to run Kubernetes Pods and VMs deployed using Virtual K8s (vK8s).

Ingress Gateway (One Interface)

In this deployment mode, Mesh needs one interface attached. Services running on the node connect to the internet using this interface. This interface is also used to discover other services and virtual machines and expose them to other sites in the same tenant. For example, in the figure below, TCP or HTTP services on the DevOps or Dev EC2 instances can be discovered and exposed remotely via reverse proxy.

As shown in the figure below, the interface is on the Outside subnet, which is associated with the VPC main route table whose default route points to the internet gateway. That is how traffic coming from the outside interface can reach the internet, along with traffic from other subnets associated with this route table. The other subnets (for example, Dev and DevOps) are also associated with the VPC main route table, which means that any newly created subnet in this VPC is automatically associated with it.

Figure: AWS VPC Site Deployment - Ingress Gateway (One Interface)
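
The following minimal Terraform sketch illustrates this arrangement using standard AWS provider resources. The resource names and CIDRs are hypothetical; when you select New VPC Parameters, the site orchestration creates the equivalent resources for you.

# Hypothetical illustration of the one-interface routing arrangement.
# The AWS VPC site orchestration creates equivalent resources automatically.
resource "aws_vpc" "example" {
  cidr_block = "192.168.0.0/20"
}

resource "aws_internet_gateway" "example" {
  vpc_id = aws_vpc.example.id
}

# Outside subnet used by the site's single interface.
resource "aws_subnet" "outside" {
  vpc_id     = aws_vpc.example.id
  cidr_block = "192.168.0.0/25"
}

# The VPC main route table has a default route to the internet gateway, so the
# outside subnet (and any subnet not explicitly associated with another route
# table, such as Dev and DevOps) can reach the internet.
resource "aws_default_route_table" "main" {
  default_route_table_id = aws_vpc.example.default_route_table_id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.example.id
  }
}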

Ingress/Egress Gateway (Two Interfaces)

In this deployment scenario the Mesh nodes need two interfaces attached. The first interface is the outside interface through which services running on the node can connect to the internet. The second interface is the inside interface which will become the default gateway IP address for all the application workloads and services present in the private subnets.

As shown in the figure below, the outside interface is on the outside subnet, which is associated with the outside subnet route table whose default route points to the internet gateway. That is how traffic coming from the outside interface can reach the internet. The inside subnets are associated with the inside subnet route table, which is also the main route table for this VPC. This means that any newly created subnet in this VPC is automatically associated with the inside subnet route table. This private subnet route table has a default route pointing to the inside IP address of the Mesh node (192.168.32.186).

Figure: AWS VPC Site Deployment - Ingress/Egress Gateway (Two Interfaces) - Single AZ
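
The following minimal Terraform sketch shows the private-side routing described above, using standard AWS provider resources with hypothetical IDs. The site orchestration programs this route automatically; it is shown here only to illustrate the design.

# Hypothetical illustration of the inside (main) route table arrangement.
resource "aws_route_table" "inside" {
  vpc_id = "vpc-0123456789abcdef0" # hypothetical VPC ID
}

# Default route for private subnets points to the site node's inside elastic
# network interface (for example, the node at 192.168.32.186 in the figure).
resource "aws_route" "inside_default" {
  route_table_id         = aws_route_table.inside.id
  destination_cidr_block = "0.0.0.0/0"
  network_interface_id   = "eni-0123456789abcdef0" # hypothetical inside ENI ID
}

# Private workload subnets are associated with the inside route table.
resource "aws_route_table_association" "workload" {
  subnet_id      = "subnet-0123456789abcdef0" # hypothetical workload subnet ID
  route_table_id = aws_route_table.inside.id
}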

Once the Mesh site comes online, the inside network of the node is connected to the outside network through a forward proxy, with SNAT enabled on the outside interface. As a result, all traffic arriving on the inside interface is forwarded to the internet through the forward proxy, with SNAT applied on the outside interface. All workloads on private subnets can then reach the internet through the Mesh site.

App Stack Cluster (One Interface)

This scenario is identical to Ingress Gateway (One Interface) in terms of how the site networking and forwarding/security is configured. In addition, the App Stack (the Distributed Application Management Platform) is also made available.

In this deployment scenario, the Mesh needs one interface attached. Services running on the node connect to the internet using this interface. Also, this interface is used to discover other services and virtual machines, and expose them to other sites in the same tenant. For example, in the below figure, TCP or HTTP services on the DevOps or Dev EC2 instances can be discovered and exposed via reverse proxy remotely.

If a vK8s cluster is configured, applications can be deployed onto this site's App Stack offering. The services/pods of the site's App Stack can be exposed to other services and VMs reachable via the VPC routing table, or made available externally via an Elastic IP (EIP) or the Application Delivery Network (ADN).

As shown in the figure below, the interface is on the Outside subnet, which is associated with the VPC main route table whose default route points to the internet gateway. That is how traffic coming from the outside interface can reach the internet, along with traffic from other subnets associated with this route table. The other subnets (for example, Dev and DevOps) are also associated with the VPC main route table, which means that any newly created subnet in this VPC is automatically associated with it.

Figure: AWS VPC Site Deployment - App Stack Cluster (One Interface)

Network Policies

The site can be your ingress/egress security policy enforcement point, as all traffic coming from private subnets flows through the Distributed Cloud Services site. Traffic that does not match a type defined in a network policy is denied by default.

You can specify the endpoint or subnet using the network policy. You can define an egress policy by adding egress rules, from the endpoint's point of view, that allow or deny specific traffic patterns based on intent. You can also add ingress rules to allow or deny traffic coming towards the endpoint.

Forward Proxy Policy

Using a forward proxy policy, you can specify allowed/denied TLS domains or HTTP URLs. The traffic from workloads on private subnets towards the Internet via the site is allowed or denied accordingly.

More details on how to configure these policies are provided later in this document.

AWS Direct Connect Orchestration

Direct Connect enables you to connect your on-premises data centers to a VPC in which the Distributed Cloud Services sites are hosted. Distributed Cloud Services automatically discovers the on-premises data center routes advertised by on-premises routers connected to AWS routers via Direct Connect. These routes are learned on the inside network of the site. There are two supported modes of Direct Connect private Virtual Interface (VIF).

Note: The prerequisite is that the Direct Connect connection is managed by the user.

Standard VIF: In this mode, site orchestration creates the Direct Connect gateway (DCGW) and Virtual Private Gateway (VGW). Ensure that you connect one or multiple VIFs to the DCGW.

Hosted VIF: In this mode, site orchestration accepts the configured list of VIFs delegated from the Direct Connect connection owner account to the hosted VIF acceptor account. You can set a list of VIF IDs to be accepted. The site orchestration then creates the DCGW, VGW, and connects the VIFs to the DCGW.

Direct Connect orchestration works only with the Ingress/Egress Gateway (Two Interfaces) option for VPC site deployment.


Prerequisites

The following prerequisites apply:

  • A Distributed Cloud Services Account. If you do not have an account, see Create an Account.

  • An Amazon Web Services (AWS) Account. See Required Access Policies for permissions needed to deploy AWS VPC site.

  • Resources required per node: Minimum 4 vCPUs and 14 GB RAM.

  • There should be no pre-existing Site Local Outside, Site Local Inside, or Workload subnet associations when attaching an existing VPC.

  • If an Internet Gateway (IGW) is attached to the VPC, at least one route in any of the VPC's route tables should point to the IGW.

  • UDP port 6080 needs to be opened between all the nodes of the site.
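
If you manage the security groups for the site's interfaces yourself (for example, in an existing VPC), the node-to-node requirement above can be expressed as a self-referencing security group rule. The following minimal Terraform sketch uses hypothetical names and IDs; the automated deployment normally provisions equivalent rules for you.

# Hypothetical security group allowing UDP 6080 between all site nodes that
# share this security group (self-referencing ingress rule).
resource "aws_security_group" "site_nodes" {
  name   = "ce-site-node-to-node"  # hypothetical name
  vpc_id = "vpc-0123456789abcdef0" # hypothetical VPC ID

  ingress {
    description = "Node-to-node traffic on UDP 6080"
    from_port   = 6080
    to_port     = 6080
    protocol    = "udp"
    self        = true
  }
}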


Deploy Using Console

The following video shows the AWS VPC site object creation and deployment workflow using Console:

AWS VPC site object creation and deployment includes the following:

  • Create AWS VPC Object: Create the VPC site object in Console using the guided wizard.

  • Deploy Site: Deploy the site configured in the VPC object using the automated method.

Create AWS VPC Site Object

Sites can be viewed and managed in multiple services: Multi-Cloud Network Connect, Distributed Apps, and Multi-Cloud App Connect.

This example shows Sites for AWS setup in Multi-Cloud Network Connect.

Step 1: Log into Console, start AWS VPC site object creation.
  • Open Console and click Multi-Cloud Network Connect.

Note: The homepage is role-based, and your homepage may look different due to your role customization. Select the All Services drop-down menu to discover all options. To customize settings, select Administration > Personal Management > My Account > Edit work domain & skills, check the Advanced box, check the desired Work Domain boxes, and then select Save changes.

Figure: Console Homepage

Note: Confirm that the Namespace selector in the upper-left corner is set to the correct namespace. This selector is not available in all services.

  • Click Manage > Site Management > AWS VPC Sites.

Note: If these options are not visible, select the Show link next to Advanced nav options in the bottom-left corner. If needed, select Hide to exit the advanced navigation options mode.

  • Click Add AWS VPC Site.

Figure: Site Management AWS

  • Enter a Name, and enter Labels and a Description as needed.

Figure: AWS Site Set Up

Step 2: Configure VPC and site settings.
  • In the Site Type Selection section, perform the following:
Step 2.1: Set region and configure VPC.
  • Select a region from the AWS Region drop-down menu.

  • From the VPC menu, select an option:

    • New VPC Parameters: The Autogenerate VPC Name option is selected by default.

    • Existing VPC ID: Enter existing VPC ID in Existing VPC ID box.

Note: If you are using an existing VPC, ensure that the enable_dns_hostnames attribute is enabled on that VPC (a minimal Terraform sketch follows this step).

  • Enter the CIDR in the Primary IPv4 CIDR block field.
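
If the existing VPC is itself managed with Terraform, the DNS hostnames requirement from the note above can be set as shown in this minimal sketch (the resource name and CIDR are hypothetical):

# Hypothetical existing VPC managed outside the site orchestration.
# enable_dns_hostnames must be true before the AWS VPC site is attached.
resource "aws_vpc" "existing" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}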

Figure: VPC and Node Type Configuration

Step 2.2: Set the node configuration.

From the Select Ingress Gateway or Ingress/Egress Gateway menu, select an option.

Configure Ingress Gateway.

For the Ingress Gateway (One Interface) option:

  • Click Configure.

  • Click Add Item.

  • Select an option from the AWS AZ Name menu that matches the configured AWS Region.

  • Select New Subnet or Existing Subnet ID from the Subnet for local Interface menu.

  • Enter subnet address in IPv4 Subnet, or subnet ID in Existing Subnet ID.

  • Confirm subnet is part of the CIDR block set in the previous step.

  • Toggle Show Advanced Fields in the Allowed VIP Port Configuration section and configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site.

  • From the Select Which Ports will be Allowed menu, select an option:

    • Allow HTTP Port: Allows only port 80.

    • Allow HTTPS Port: Allows only port 443.

    • Allow HTTP & HTTPS Port: Allows only ports 80 and 443. This is populated by default.

    • Ports Allowed on Public: Allows specifying custom ports or port ranges. Enter port or port range in the Ports Allowed on Public field.

  • From the Select Performance Mode menu, select an option:

    • L7 Enhanced: This option optimizes the site for Layer 7 traffic processing.

    • L3 Mode Enhanced Performance: This option optimizes the site for Layer 3 traffic processing.

Note: The AWS Certified Hardware is set to aws-byol-voltmesh by default. You can add more than one node using the Add item option.

Configure Ingress/Egress Gateway.

For the Ingress/Egress Gateway (Two Interfaces) option:

  • Click Configure to open the two-interface node configuration.

  • Click Add Item.

  • Select an option from the AWS AZ Name menu that matches the configured AWS Region.

  • From the Workload Subnet menu, select an option:

    • New Subnet: Enter a subnet in the IPv4 Subnet field.

    • Existing Subnet ID: Enter a subnet in the Existing Subnet ID field.

Note: Workload subnet is the network where your application workloads are hosted. For successful routing toward applications running in workload subnet, an inside static route to the workload subnet CIDR needs to be added on the respective site object.

  • From the Subnet for Outside Interface menu, select an option:

    • New Subnet: Enter a subnet in the IPv4 Subnet field.

    • Existing Subnet ID: Enter a subnet in the Existing Subnet ID field.

  • Click Add Item.

  • In the Site Network Firewall section, optionally select Active Firewall Policies from the Manage Firewall Policy menu.

  • Select an existing firewall policy, or select Create new Firewall Policy to create and apply a firewall policy.

  • After creating the policy, click Continue to apply.

  • From the Manage Forward Proxy menu, select an option:

    • Disable Forward Proxy

    • Enable Forward Proxy with Allow All Policy

    • Enable Forward Proxy and Manage Policies: Select an existing forward proxy policy, or select Create new forward proxy policy to create and apply a forward proxy policy.

  • After creating the policy, click Apply.

Figure: Network Firewall Configuration for Node

  • Enable Show Advanced Fields in the Advanced Options section.

  • Select Connect Global Networks from the Select Global Networks to Connect menu, and perform the following:

    • Click Add Item.

    • From the Select Network Connection Type menu, select an option.

    • From the Global Virtual Network menu, select an option.

    • Click Add Item.

  • From the Select DC Cluster Group menu, select an option to set your site in a DC cluster group:

    • Not a Member of DC Cluster Group: Default option.

    • Member of DC Cluster Group via Outside Network: Select the DC cluster group from the Member of DC Cluster Group via Outside Network menu to connect your site using an outside network.

    • Member of DC Cluster Group via Inside Network: Select the DC cluster group from the Member of DC Cluster Group via Inside Network menu to connect your site using an inside network.

Note: For more information, see the Configure DC Cluster Group guide.

  • Select Manage Static routes from the Manage Static Routes for Inside Network menu. Click Add Item in the List of Static Routes subsection. Perform one of the following steps:

    • Select Simple Static Route and then enter a static route in the Simple Static Route field.

    • Select Custom Static Route and then click Configure. Perform the following steps:

      • In the Subnets section, click Add Item. Select IPv4 or IPv6 option from the Version menu. Enter a prefix and a prefix length for your subnet. You can use the Add item option to set more subnets.

      • In the Nexthop section, select a next-hop type from the Type menu. Select IPv4 or IPv6 from the Version menu in the Address subsection. Enter an IP address. From the Network Interface menu, select an option.

      • In the Static Route Labels section, select Add label and follow the prompt.

      • In the Attributes section, select supported attributes from the Attributes menu. You can select more than one option.

      • Click Apply.

  • Select Manage Static routes from the Manage Static Routes for Outside Network menu. Click Add Item. Follow the same procedure as described above for managing static routes for the inside network.

  • Set Allowed VIP Port Configuration for Outside Network and Allowed VIP Port Configuration for Inside Network. This is required for the load balancer to distribute traffic among all nodes in a multi-node site.

  • In the Allowed VIP Port Configuration for Outside Network section, perform the following:

    • Select an option from the Select Which Ports will be Allowed menu:

    • Allow HTTP Port: Allows only port 80.

    • Allow HTTPS Port: Allows only port 443.

    • Allow HTTP & HTTPS Port: Allows only ports 80 and 443. This is populated by default.

    • Ports Allowed on Public: Allows specifying custom ports or port ranges. Enter port or port range in the Ports Allowed on Public field.

  • In the Allowed VIP Port Configuration for Inside Network section, perform the same procedure as that of the outside network above.

  • From the Select Performance Mode menu, select an option:

    • L7 Enhanced: This option optimizes the site for Layer 7 traffic processing.

    • L3 Mode Enhanced Performance: This option optimizes the site for Layer 3 traffic processing.

  • Click Apply.

Note: The AWS Certified Hardware is set to aws-byol-multi-nic-voltmesh by default. You can add more than one node using the Add item option.

Configure App Stack Cluster (One Interface).

For the App Stack Cluster (One Interface) option:

  • Click Configure to open the configuration form.

  • In the App Stack Cluster (One Interface) Nodes in AZ section, click Add Item. Perform the following:

    • Select an option from the AWS AZ Name menu that matches the configured AWS Region.

    • Select New Subnet or Existing Subnet ID from the Subnet for local Interface menu.

    • Enter a subnet address in IPv4 Subnet or subnet ID in Existing Subnet ID.

    • Click Add Item.

  • In the Site Network Firewall section:

    • Optionally select Active Firewall Policies from the Manage Firewall Policy menu. Select an existing firewall policy, or select Create new Firewall Policy to create and apply a firewall policy.

    • Optionally select Enable Forward Proxy with Allow All Policy or Enable Forward Proxy and Manage Policies from the Manage Forward Proxy menu. For the latter option, select an existing forward proxy policy, or select Create new forward proxy policy to create and apply a forward proxy policy.

  • In the Advanced Options section, enable the Show Advanced Fields option.

  • Select Connect Global Networks from the Select Global Networks to Connect menu, and perform the following:

    • Click Add Item.

    • From the Select Network Connection Type menu, select a connection type.

    • From the Global Virtual Network menu, select a global network from the list of networks displayed.

    • To create a new global network, select Create new virtual network from the Global Virtual Network menu:

      • Complete the form information.

      • Click Continue.

      • Click Add Item.

    • Select Manage Static routes from the Manage Static Routes for Site Local Network menu.

    • Click Add Item and perform one of the following steps:

      • From the Static Route Config Mode menu, select Simple Static Route. Enter a static route in Simple Static Route field.

      • From the Static Route Config Mode menu, select Custom Static Route. Click Configure. Perform the following steps:

        • In Subnets section, click Add Item. Select IPv4 Subnet or IPv6 Subnet from the Version menu.

        • Enter a prefix and prefix length for your subnet.

        • Use the Add Item option to set more subnets.

      • In Nexthop section, select a next-hop type from the Type menu.

      • Select IPv4 Address or IPv6 Address from the Version menu.

      • Enter an IP address.

      • From the Network Interface menu, select a network interface or select Create new network interface to create and apply a new network interface.

      • In the Attributes section, select supported attributes from the Attributes menu. You can select more than one option.

      • Click Apply to add the custom route.

      • Click Add Item.

      • Click Apply.

  • From the Select DC Cluster Group menu, select an option to set your site in a DC cluster group:

    • Not a Member: Default option.

    • Member of DC Cluster Group: Select the DC cluster group from the Member of DC Cluster Group menu to connect your site using an outside network.

  • From the Allowed VIP Port Configuration menu, configure the VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site.

  • From the Site Local K8s API access menu, select an option for API access. For instructions on K8s cluster creation, see Create K8s Cluster.

Note: The Distributed Cloud Platform supports both mutating and validating webhooks for managed K8s. Webhook support can be enabled in the K8s configuration (Manage > Manage K8s > K8s Clusters). For more information, see Create K8s Cluster in the Advanced K8s cluster security settings section.

  • In the Storage Configuration section, enable the Show Advanced Fields option.

  • From the Select Configuration for Storage Classes menu, select Add Custom Storage Class.

  • Click Add Item.

  • In the Storage Class Name field, enter a name for the storage class as it will appear in Kubernetes.

  • Optionally, enable the Default Storage Class option to make this new storage class the default class for all clusters.

  • In the Storage Device section:

    • In the Replication field, enter a number to set the replication factor for the persistent volume (PV).

    • In the Storage Size field, set the storage size in gigabytes (GB) for each node.

  • Click Add Item.

  • Click Apply.

Note: The AWS Certified Hardware is set to aws-byol-voltstack-combo by default. You can add more than one node using the Add Item option.

Step 2.3: Set the Internet VIP choice.
  • Use the Internet VIP choice drop-down menu to either Enable Internet VIP (create an Internet VIP) or Disable Internet VIP (do not create one; this is the default).

Figure: VPC Internet VIP Configuration

Note: You must enable Internet VIP on the AWS cloud site if you want clients to access the load balancer VIP directly from the internet. F5XC orchestrates an AWS internet-facing Network Load Balancer (NLB) so that traffic is distributed equally to all CE nodes in the site.

You will also need to create an HTTP load balancer with a custom VIP advertisement. Use Site or Virtual Site advertising with Site Network set to either Outside Network with internet VIP or Inside and Outside Network with internet VIP. For more information, see HTTP Load Balancer.

Step 2.4: Set the deployment type.
  • From the Automatic Deployment drop-down menu, select Automatic Deployment:

    • Select an existing AWS credentials object, or click Create new Cloud Credential to load the credential creation form.
  • To create new credentials:

    • Enter Name, Labels, and Description as needed.

    • From the Select Cloud Credential Type menu, select AWS Programmatic Access Credentials.

    • Enter AWS access ID in the Access Key ID field.

    • Click Configure in the Secret Access Key field.

    • From the Secret Info menu:

      • Blindfold Secret: Enter the secret in the Type box.

      • Clear Secret: Enter the secret in the Clear Secret box in either Text or Base64 (binary) format.

      • Click Apply.

    • Click Continue to add the new credentials.

Note: Refer to the Cloud Credentials guide for more information. Ensure that the AWS credentials are applied with required access policies per the Policy Requirements document.

Figure: Deployment Configuration

Step 3: Set the site node parameters.
  • In the Site Node Parameters section, enable the Show Advanced Fields option. Optionally, add a geographic address and enter the latitude and longitude values.

  • From the AWS Instance Type for Node menu, select an option.

  • Optionally, enter your SSH key in the Public SSH key box.

Figure: Site Node Parameters Configuration

Step 4: Configure advanced configuration options.
  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • From the Logs Streaming menu, select an option.

  • From the Select F5XC Software Version menu, select an option.

  • From the Select Operating System Version menu, select an option.

  • From the Desired Worker Nodes Selection menu, select an option:

    • Enter the number of worker nodes in the Desired Worker Nodes Per AZ field. The number you set here is created per availability zone in which you created nodes. For example, if you configure nodes in three availability zones and set Desired Worker Nodes Per AZ to 3, then 3 worker nodes are created per availability zone, and the total number of worker nodes for this AWS VPC site is 9.

Figure: AWS VPC Advanced Configuration

  • From the Direct Connect Choice drop-down menu, select an option:

    • Disable Direct Connect: Default option.

    • Enable Direct Connect: Select to configure for the AWS site.

  • Click Configure.

  • From the VIF Configuration drop-down menu, select an option for the Virtual Interface (VIF):

    • Hosted VIF mode: With this mode, F5 provisions an AWS Direct Connect Gateway and a Virtual Private Gateway. The hosted VIFs you provide are automatically accepted and associated, and BGP peering is set up.

    • Standard VIF mode: With this mode, F5 provisions an AWS Direct Connect Gateway and a Virtual Private Gateway and sets up BGP peering; you associate the VIFs with the Direct Connect Gateway yourself.

  • For the Hosted VIF mode option:

    • Click Add item.

    • Enter a list of VIF IDs.

    • Click Apply.

  • For the Standard VIF mode option:

    • Click Apply.
Step 5: Configure blocked services.

You can have your site block services, like Web, DNS, and SSH.

  • In the Select to Configure Blocked Services section, select Custom Blocked Services Configuration from the Select to Configure Blocked Services menu.

  • Click Add Item.

Figure: Add Blocked Service

  • From the Blocked Services Value Type menu, select the service to block:

    • Web UI port

    • DNS port

    • SSH port

  • From the Network Type menu, select the type of network in which this service is blocked from your site.

  • Click Add Item.

Figure: Select Blocked Service

Step 6: Complete the AWS VPC site object creation.

Click Save and Exit to complete creating the AWS VPC site.

The Status box for the VPC site object displays Generated.

Figure: AWS VPC Object Generated


Deploy Site

Creating the AWS VPC site object in Console generates the Terraform parameters.

Note: Site upgrades may take up to 10 minutes per site node. Once the site upgrade has completed, you must apply the Terraform parameters to the site via the Actions column on the cloud site management page.

  • Navigate to the created AWS VPC object using the Manage > Site Management > AWS VPC Sites option.

  • Find your AWS VPC object and click Apply in the Actions column.

Figure: AWS VPC Object Apply

The Status field for the AWS VPC object changes to Apply Planning.

Note: Optionally, you can perform Terraform plan activity before the deployment. Find your AWS VPC site object and select ... > Plan (Optional) to start the action of Terraform plan. This creates the execution plan for Terraform.

  • Wait for the apply process to complete and the status to change to Applied.

Figure: AWS VPC Object Applied

  • To check the status for the apply action, click ... > Terraform Parameters for your AWS VPC site object, and select the Apply Status tab.

  • To locate your site, click Sites > Sites List.

  • Verify status is Online. It takes a few minutes for the site to deploy and status to change to Online.

Note: You can log into your node’s command-line interface (CLI) via SSH with username centos and your private key.

Figure: Site Status Online

Note: For ingress/App Stack sites: When you update worker nodes for a site object, scaling happens automatically. For ingress/egress sites: When you update worker nodes for a site object, the Terraform Apply button is enabled. Click Apply.


Delete VPC Site

Perform the following to delete the site:

  • Navigate to Manage > Site Management > AWS VPC Sites for the AWS VPC site object.

  • Locate your AWS VPC object.

  • Select ... > Delete.

  • Click Delete in pop-up confirmation window.

Note: Deleting the VPC site object deletes the sites and nodes from the VPC and deletes the VPC. If the delete operation does not remove the object and returns an error, check the error in the status, fix it, and re-attempt the delete operation. If the problem persists, contact technical support. You can check the status using the ... > Terraform Parameters > Apply status option.


Deploy Site Using Terraform

This chapter provides instructions on how to create a single-node or multi-node site on Amazon Elastic Compute Cloud (EC2) using a custom Amazon Machine Image (AMI) with Terraform.

Perform the following procedure to deploy a site using Terraform:

Step 1: Confirm Terraform is installed.

In a terminal, enter terraform version. If you need to install Terraform, follow the official installation guide.

Step 2: Create API credentials file.

Log into Console, create an API certificate (.p12) file, and then download it. Use the instructions at Credentials for more help.

Step 3: Create a new directory.

Create a new directory on your system to place the files for deployment.

Step 4: Create the deployment file.
  • Create a file named main.tf and place it in the newly created directory.

  • Copy and paste the following information into the file:

terraform {
  required_version = ">= 0.13.1"

  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
  }
}

variable "site_name" {}

variable "aws_access_key" {}

variable "b64_aws_secret_key" {}

variable "aws_region" {
  default = "us-east-2"
}

variable "aws_vpc_cidr" {
  default = "192.168.0.0/20"
}

variable "aws_az" {
  default = "us-east-2a"
}

variable "outside_subnet_cidr_block" {
  default = "192.168.0.0/25"
}

resource "volterra_cloud_credentials" "aws_cred" {
  name      = format("%s-cred", var.site_name)
  namespace = "system"
  aws_secret_key {
    access_key = var.aws_access_key
    secret_key {
      clear_secret_info {
        url = format("string:///%s", var.b64_aws_secret_key)
      }
    }
  }
}

resource "volterra_aws_vpc_site" "site" {
  name       = var.site_name
  namespace  = "system"
  aws_region = var.aws_region
  aws_cred {
    name      = volterra_cloud_credentials.aws_cred.name
    namespace = "system"
  }
  instance_type = "t3.xlarge"
  vpc {
    new_vpc {
      name_tag     = var.site_name
      primary_ipv4 = var.aws_vpc_cidr
    }
  }
  ingress_gw {
    aws_certified_hw = "aws-byol-voltmesh"
    az_nodes {
      aws_az_name = var.aws_az
      disk_size   = 20
      local_subnet {
        subnet_param {
          ipv4 = var.outside_subnet_cidr_block
        }
      }
    }
  }
  logs_streaming_disabled = true
  no_worker_nodes         = true
}

resource "volterra_tf_params_action" "apply_aws_vpc" {
  site_name        = volterra_aws_vpc_site.site.name
  site_kind        = "aws_vpc_site"
  action           = "apply"
  wait_for_action  = true
  ignore_on_update = true
}
  • Open the file and configure any necessary fields. The example above is for an ingress gateway site; you can change the parameters for your particular setup (a sketch of a two-interface ingress/egress gateway variant follows this step).

  • Save the changes and then close the file.
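
The following sketch shows how the ingress_gw block could be replaced for a two-interface (ingress/egress gateway) site. It is based on the volterra provider's documented schema; the subnet CIDRs, disk size, and the no_* toggles shown are assumptions to verify against the provider reference for your provider version before use.

# Sketch only: replaces the ingress_gw block in main.tf for a two-interface site.
ingress_egress_gw {
  aws_certified_hw = "aws-byol-multi-nic-voltmesh"
  az_nodes {
    aws_az_name = var.aws_az
    disk_size   = 80 # assumed disk size; adjust as needed
    inside_subnet {
      subnet_param {
        ipv4 = "192.168.1.0/25"
      }
    }
    outside_subnet {
      subnet_param {
        ipv4 = var.outside_subnet_cidr_block
      }
    }
    workload_subnet {
      subnet_param {
        ipv4 = "192.168.2.0/25"
      }
    }
  }
  # Disable optional features not configured in this sketch.
  no_dc_cluster_group      = true
  no_forward_proxy         = true
  no_global_network        = true
  no_inside_static_routes  = true
  no_network_policy        = true
  no_outside_static_routes = true
}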

Step 5: Create file for variables.
  • In the same directory, create another file for variables and name it terraform.tfvars.

  • Create and assign the following variables:

    • For your site name, type a name within double quotes: site_name = "<site-name>"

    • For the AWS region, type the name within double quotes: aws_region = "<region>"

    • For the AWS region subtype, type the name within double quotes: aws_az = "<region-subtype>"

site_name = "<site-name>"
aws_region = "<region>"
aws_az = "<region-subtype>"
Step 6: Create and export variables for credentials and secret keys.
  • In the terminal, create and export the following variables:

    • Create this variable and assign it your API credentials password: export VES_P12_PASSWORD=<credential password>

    • Create this variable and assign it the path to the API credential file previously created and downloaded from Console: export VOLT_API_P12_FILE=<path to your local p12 file>

    • Create this variable and assign it the URL for your tenant. For example: export VOLT_API_URL=https://example.console.ves.volterra.io/api

    • Create this variable and assign it your AWS secret key that has been encoded with Base64: export TF_VAR_b64_aws_secret_key=<base64 encoded value>

    • Create this variable and assign it your AWS access key: export TF_VAR_aws_access_key=<access key>

Note: You can also create and save these variables in the terraform.tfvars file. However, this may pose a security risk. Use caution when working with your credentials and secret keys.

export VES_P12_PASSWORD=<credential password>
export VOLT_API_P12_FILE=<path to your local p12 file>
export VOLT_API_URL=https://example.console.ves.volterra.io/api
export TF_VAR_b64_aws_secret_key=<base64 encoded value>
export TF_VAR_aws_access_key=<access key>
Step 7: Initialize Terraform.

Enter terraform init.

Step 8: Apply the Terraform configuration.
  • Enter terraform apply.

  • If prompted for the access key and secret key encoded in Base64, enter both.

  • Enter yes to confirm. This may take a few minutes to complete. After the process is complete, the output will state Apply complete!.

  • In Console, navigate to the list of sites and confirm the site was applied.


Destroy Site

Perform the following procedure to destroy the site using Terraform:

  • Enter terraform destroy.

  • If prompted for the access key and secret key encoded in Base64, enter both.

  • Enter yes to confirm. This may take a few minutes to complete. After the process is complete, the output will state Destroy complete!.

  • In Console, navigate to the list of sites and confirm the site was destroyed.

