Create AWS Site

Objective

This guide provides instructions on how to create and deploy a Virtual Private Cloud (VPC) site to Amazon Web Services (AWS).

You can deploy a VPC site if the workload you want to secure, connect, or load balance is present in, or will be deployed in, the same VPC. You may also choose to deploy a VPC site if you want to use the F5® managed Kubernetes offering by creating an App Stack site.

If you want to secure, connect, or load balance workloads across multiple VPCs in an AWS region from a single service VPC, see Create AWS Site with TGW site.

You can deploy an AWS VPC site using one of the methods described in this guide.


Overview

F5® Distributed Cloud supports AWS VPC site creation in both greenfield and brownfield environments. In a greenfield environment, it can automate the creation of a VPC, subnets, route tables, security groups, and other networking features. In a brownfield environment, you can choose existing resources from your VPC or choose to create new resources using the site creation wizard.

Clustering of Customer Edge Nodes

The AWS VPC site can be deployed as a single-node or three-node site. These nodes host the control processes, data plane processes, and the Internet Protocol Security (IPsec) connections to the Regional Edges (REs). A three-node site is recommended for production deployments because it provides high availability. Worker nodes can also be deployed after the site is created to add capacity for L7 features, such as load balancing, web app and API protection (WAAP), and more.

Note: Worker nodes are only supported for three-node sites.

AWS VPC Site Deployment Modes

A site can be deployed in three different modes, as explained below. For a generic network topology of a site, see Network Topology of a Site.

Ingress Gateway (One Interface): This mode can discover services and endpoints reachable from this subnet and deliver them to any other site in the customer tenant. The site can provide TCP or HTTP load balancing and L7 security services for the apps that it discovers on the VPC. This mode can be used for application delivery or IP address overlap use cases. Because this mode functions only as an ingress gateway, it cannot provide L3 connectivity to networks on other sites in the tenant. This mode can also provide Secure Kubernetes Gateway (SKG) functionality to Kubernetes clusters deployed on the same VPC.

The CE node is attached to a single subnet, called Site Local Outside (SLO), per Availability Zone (AZ) in a VPC. For a three-node cluster, the nodes are distributed across three AZs. The VPC route table associated with the SLO subnet must point the default route to the Internet Gateway (IGW). This ensures that the CE can reach the public Internet and establish the IPsec connections to the REs. The default configuration uses an Internet Gateway, but a NAT Gateway and a Virtual Private Gateway are also supported as egress gateway options.

A one-interface, three-node deployment diagram is shown for simplicity.

Figure: Ingress Gateway (One Interface) for Three-Node AWS VPC Site

Workloads can be deployed on a private subnet whose default route points to a NAT Gateway, or on a public subnet whose default route points to the Internet Gateway. These subnets and routes are not auto-created.
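Whether a workload subnet is public or private is determined by its route table's default route. As a quick illustration, the following Python sketch classifies a subnet from the dict shape returned by boto3's EC2 describe_route_tables call; the sample data is hypothetical.

```python
# Sketch: classify a subnet as public or private from its route table,
# using the dict shape returned by boto3's EC2 describe_route_tables.
# The sample data below is illustrative, not taken from a real account.

def classify_subnet(route_table: dict) -> str:
    """Return 'public' if the default route targets an Internet Gateway,
    'private' if it targets a NAT Gateway, else 'unknown'."""
    for route in route_table.get("Routes", []):
        if route.get("DestinationCidrBlock") != "0.0.0.0/0":
            continue  # skip local and non-default routes
        if route.get("GatewayId", "").startswith("igw-"):
            return "public"
        if route.get("NatGatewayId", "").startswith("nat-"):
            return "private"
    return "unknown"

public_rt = {"Routes": [{"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc"}]}
private_rt = {"Routes": [{"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0def"}]}

print(classify_subnet(public_rt))   # public
print(classify_subnet(private_rt))  # private
```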

Ingress/Egress Gateway (Two Interfaces): In this mode, the CE functions as the default gateway for the VPC. It can provide egress connectivity to the public Internet and, if a global network is configured, L3 connectivity from the instances and subnets on the VPC to subnets on other sites. It can also provide load balancing, Secure Kubernetes Gateway (SKG), and the other L7 security features available in ingress gateway mode.

A two-interface, three-node deployment diagram is shown for simplicity.

Figure: Ingress/Egress Gateway (Two Interfaces) for Three-Node AWS VPC Site

In this mode, the CE node is attached to at least two interfaces on different subnets per Availability Zone (AZ) in the VPC. One subnet is labeled Site Local Outside (SLO) and the other Site Local Inside (SLI). The automation also creates a workload subnet, which can be used to deploy application instances. The VPC SLO route table points to the IGW as the default gateway (a NAT Gateway and a Virtual Private Gateway are also supported). This provides Internet connectivity to the CE nodes. The SLI and workload route tables in the VPC point the default route to the SLI interface of the CE in each AZ.
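The per-AZ subnet layout described above can be sketched with the standard library's ipaddress module. The /24 subnet size and the allocation order below are illustrative assumptions, not what the site automation actually allocates:

```python
# Sketch: carve per-AZ SLO, SLI, and workload subnets out of a VPC CIDR
# for a three-node, two-interface site. The /24 size and the ordering are
# assumptions for illustration only.
import ipaddress

def plan_subnets(vpc_cidr: str, azs: list[str]) -> dict:
    """Assign three /24 subnets (SLO, SLI, workload) to each AZ, in order."""
    pool = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=24)
    return {az: {role: str(next(pool)) for role in ("slo", "sli", "workload")}
            for az in azs}

plan = plan_subnets("10.0.0.0/16", ["us-east-1a", "us-east-1b", "us-east-1c"])
print(plan["us-east-1a"]["slo"])       # 10.0.0.0/24
print(plan["us-east-1c"]["workload"])  # 10.0.8.0/24
```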

F5® Distributed Cloud App Stack Cluster (App Stack) (One Interface): Use this deployment mode if you want to use the site as an F5-managed Kubernetes site to host your applications. This site mode also provides a Secure Kubernetes Gateway (SKG) for the applications deployed on it by discovering the services and providing load balancing and L7 security features such as WAAP, Bot Detection, DDoS protection, and more.

A one-interface, three-node deployment diagram is shown for simplicity.

Figure: App Stack Cluster (One Interface) for Three-Node AWS VPC Site

The deployment and configuration of this site mode are identical to Ingress Gateway (One Interface). The difference is that this deployment uses the certified hardware type aws-byol-voltstack-combo, which configures and deploys an instance type that allows Kubernetes pods or VMs to be deployed on the site using the Kubernetes API (via managed K8s or virtual K8s endpoints).

Private Connectivity

Private connectivity enables you to privately connect your on-premises data centers to the VPC in which the Distributed Cloud Services sites are hosted, so that traffic does not flow over the public Internet. You can also configure connectivity to REs and site registration to go over the private connection.

Note: The private connectivity option is only supported for the Ingress/Egress Gateway (Two Interfaces) option for a VPC site.

There are two private connectivity options available: AWS Direct Connect (Legacy) and CloudLink.

AWS Direct Connect (Legacy): Distributed Cloud Services can orchestrate AWS Direct Connect to a VPC site. The automation orchestrates the creation of the Virtual Private Gateway (VGW) and the Direct Connect Gateway (DCGW) in addition to the regular site creation. The prerequisite is that the Direct Connect connection is created and managed by the user.

The on-premises data center routes are advertised by the on-premises routers connected to AWS routers via Direct Connect. These routes are propagated to the VGW by the Direct Connect Gateway (DCGW). The VGW configures these routes in the VPC route table, from which they are learned on the inside network of the site.

There are two supported modes of Direct Connect private Virtual Interface (VIF):

  • Standard VIF: In this mode, the whole Direct Connect connection is used for the site. After site creation, you must associate the VIF on the DCGW using AWS Console and configure BGP peering.

  • Hosted VIF: In this mode, site orchestration accepts the configured list of VIFs delegated from the Direct Connect connection owner account to the hosted VIF acceptor account. You can set a list of VIF IDs to be accepted. The site orchestration automates the association of the VIFs to the DCGW.

CloudLink: A CloudLink allows Distributed Cloud Services to orchestrate an already provisioned direct connection, establish a multi-cloud networking fabric, and then connect, deliver, secure, and operate networks and apps across hybrid environments. For more information, see CloudLink.


Site Status Descriptions

These descriptions provide information for the various stages of site deployment in Distributed Cloud Console. They also help you troubleshoot errors that may occur during the deployment and registration stages.

PLANNING: Site resources are being planned for creation.

PLAN_INIT_ERRORED: Planning of site resources failed at init stage.

PLAN_ERRORED: Planning of site failed with errors.

PLAN_QUEUED: Planning of site resources queued to be implemented.

APPLIED: Site resources are created, and site is waiting to come online.

APPLY_ERRORED: Creation of site resources failed with errors.

APPLY_INIT_ERRORED: Creation of site resources failed with errors at initial stage.

APPLYING: Site creation is in progress.

APPLY_PLANNING: Site resources are being planned.

APPLY_PLAN_ERRORED: Planning of site failed with errors.

APPLY_QUEUED: Creation of site resources queued to be implemented.

DESTROYED: Site resources are destroyed and site is OFFLINE.

DESTROY_ERRORED: Destroying of site resources failed with errors.

DESTROYING: Destroying of site resources in progress.

DESTROY_QUEUED: Destroying of site resources is queued.

GENERATED: Site object created in the Distributed Cloud database as per the configuration.

TIMED_OUT: Creation or destruction of site resources failed with a timeout.

ERRORED: Creation or destruction of site resources failed with errors.

PROVISIONING: Site resources are created and waiting for site to come online.
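A script that polls site status (for example, via the Console API) can group these values into coarse phases. The grouping below is a sketch inferred from the descriptions above, not an official mapping:

```python
# Sketch: group the site status values listed above into coarse phases,
# e.g. for a script that polls site status during deployment.
# This grouping is inferred from the descriptions and is not official.
def site_phase(status: str) -> str:
    if status == "TIMED_OUT" or status.endswith("ERRORED"):
        return "error"            # failed; inspect the error details
    if status in ("APPLIED", "PROVISIONING"):
        return "waiting_online"   # resources exist; site not yet online
    if status == "DESTROYED":
        return "offline"
    return "in_progress"          # planning/applying/destroying/queued

print(site_phase("APPLY_ERRORED"))  # error
print(site_phase("APPLIED"))        # waiting_online
print(site_phase("PLANNING"))       # in_progress
```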


Prerequisites

The following prerequisites apply:

General

  • A Distributed Cloud Services Account. If you do not have an account, see Create an Account.

  • An AWS Account. See Required Access Policies for permissions needed to deploy site. To create a cloud credentials object, see Cloud Credentials.

  • Resources required per node:

    • vCPUs: Minimum 4 vCPUs.
    • Memory: 14 GB RAM.
    • Disk storage:
      • Minimum 45 GB for Mesh site.
      • Minimum 100 GB for App Stack site.
  • Instance type with Intel x86-based processor. ARM and Mac instances are not supported. Recommended instance types are:

    • t3.xlarge (4 vCPU, 16 GB RAM)

    • t3.2xlarge (8 vCPU, 32 GB RAM)

    • m5.4xlarge (16 vCPU, 64 GB RAM)

  • Internet Control Message Protocol (ICMP) must be open between the CE nodes on the Site Local Outside (SLO) interfaces. This is needed for intra-cluster communication checks.
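The minimums above can be checked programmatically before picking an instance type. The spec table in this sketch is hand-written from the recommendations above; verify actual values against AWS documentation:

```python
# Sketch: check an instance type's specs against the per-node minimums
# listed above (4 vCPUs, 14 GB RAM; 45 GB disk for Mesh, 100 GB for
# App Stack). INSTANCE_SPECS is hand-written for illustration.
MINIMUMS = {"vcpus": 4, "ram_gb": 14}
DISK_GB = {"mesh": 45, "app_stack": 100}
INSTANCE_SPECS = {
    "t3.xlarge": {"vcpus": 4, "ram_gb": 16},
    "t3.2xlarge": {"vcpus": 8, "ram_gb": 32},
    "m5.4xlarge": {"vcpus": 16, "ram_gb": 64},
    "t3.large": {"vcpus": 2, "ram_gb": 8},  # below minimum, for contrast
}

def meets_minimums(instance_type: str, disk_gb: int, site_kind: str = "mesh") -> bool:
    spec = INSTANCE_SPECS[instance_type]
    return (spec["vcpus"] >= MINIMUMS["vcpus"]
            and spec["ram_gb"] >= MINIMUMS["ram_gb"]
            and disk_gb >= DISK_GB[site_kind])

print(meets_minimums("t3.xlarge", 80))               # True
print(meets_minimums("t3.large", 80))                # False (2 vCPUs)
print(meets_minimums("t3.xlarge", 80, "app_stack"))  # False (needs 100 GB)
```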

Existing VPC

You need the VPC ID, subnet IDs, and AZs to be used for the deployment.

The existing subnets selected for Site Local Outside, Site Local Inside, and Workload subnets must not have an explicit association with any route tables. New route tables will be created and associated with these subnets. The deployment will fail if the subnets have existing custom route table associations.
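You can check for explicit route table associations before deploying. The following sketch works on the dict shape returned by boto3's EC2 describe_route_tables; the sample data is hypothetical:

```python
# Sketch: flag candidate subnets that already have an explicit route table
# association, using the dict shape of boto3's EC2 describe_route_tables
# response. Sample data is illustrative only.
def explicitly_associated(route_tables: list[dict], subnet_ids: set[str]) -> set[str]:
    """Return the subset of subnet_ids with an explicit route table association."""
    hits = set()
    for rt in route_tables:
        for assoc in rt.get("Associations", []):
            # Main-route-table associations have no SubnetId and don't count.
            if not assoc.get("Main") and assoc.get("SubnetId") in subnet_ids:
                hits.add(assoc["SubnetId"])
    return hits

tables = [
    {"Associations": [{"Main": True}]},
    {"Associations": [{"Main": False, "SubnetId": "subnet-aaa"}]},
]
print(explicitly_associated(tables, {"subnet-aaa", "subnet-bbb"}))  # {'subnet-aaa'}
```

Any subnet this returns would need its route table association removed (or a different subnet chosen) before site deployment.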

Manually Created Site

The configurations below are created automatically by Distributed Cloud Services. For sites created manually using Terraform, you must meet the following conditions for the site to deploy correctly:

  • The security group on the SLO interface must allow outgoing traffic to the Internet.

  • UDP port 6080 must be open between all nodes of the site for the inter-node tunnel. See the Firewall or Proxy Reference for Network Cloud guide for the complete list of IPs, ports, and URLs that a CE needs to connect with.

Private Connectivity

  • Direct Connect connection.

  • VIF IDs if you are using hosted VIF mode.


Deploy Using Console

The following video shows the AWS VPC site object creation and deployment workflow using Console:

AWS VPC site creation using Console is a two-step process:

  1. Create a VPC site object on Console, where you provide the necessary configurations using a guided wizard.

  2. Deploy site, where you initiate the automation which deploys the site resources on AWS.

Create AWS VPC Site Object

The wizard to create the AWS VPC site object guides you through the steps for required configuration.

Step 1: Start site object creation.
  • Log into Console.

  • From the Console homepage, select Multi-Cloud Network Connect.

Figure: Console Homepage
  • Click Manage > Site Management > AWS VPC Sites.

  • Click Add AWS VPC Site.

  • Enter a Name, and add Labels and a Description as needed.
Figure: AWS Site Set Up
Step 2: Select cloud credentials.

Refer to the Cloud Credentials guide for more information. Ensure that the AWS credentials are applied with required access policies per the Policy Requirements document.

  • From the Cloud Credentials menu in the Site Type Selection section, select an existing AWS credentials object, or click Add Item to load the form.
Figure: Deployment Configuration
  • To create new credentials:

    • Enter Name, Labels, and Description as needed.

    • From the Select Cloud Credential Type menu, select AWS Programmatic Access Credentials.

    • Enter the AWS access key ID in the Access Key ID field.

    • Click Configure in the Secret Access Key field.

    • From the Secret Type menu:

      • Blindfold Secret: Enter secret in the Type box.

      • Clear Secret: Enter secret in Clear Secret box in either Text or Base64 formats.

      • Click Apply.

    • Click Continue to add the new credentials.

Step 3: Configure AWS region and VPC parameters.
  • From the AWS Region drop-down menu, select a region.

  • From the VPC menu, select an option:

    • New VPC Parameters: The Autogenerate VPC Name option is selected by default.

    • Existing VPC ID: Enter existing VPC ID in Existing VPC ID box. If you are using an existing VPC, ensure that you enable the Enable DNS hostnames checkbox in AWS Management Console (under Edit VPC settings).

Note: If you are deploying a new AWS VPC site into an existing VPC, the deployment will fail if the AWS subnet has the EC2 hostname type set to the resource name. In AWS Management Console, ensure that the hostname is in ip-* format in the subnet settings.

  • In the Primary IPv4 CIDR block field, enter the CIDR subnet with slash notation.
Figure: VPC and Node Type Configuration
Step 4: Set and configure VPC interface.
  • From the Select Ingress Gateway or Ingress/Egress Gateway menu, select an option:

    • Ingress Gateway (One Interface)

    • Ingress/Egress Gateway (Two Interface)

    • App Stack Cluster (One Interface)

Ingress Gateway (one interface)

For the Ingress Gateway (One Interface) option:

  • Click Configure.

  • Click Add Item.

Note: Either a single master node site or a multi-node site with three (3) master nodes is supported. Therefore, if you are adding more than one node, ensure that there are three (3) master nodes for your site. Use Add Item to add more master nodes.

  • From the AWS AZ Name menu, click See Suggestions to select an option that matches the configured AWS Region.

  • From the Subnet for local Interface menu, select New Subnet or Existing Subnet ID.

Note: New subnet creation is not supported for a brownfield deployment with an existing VPC selected for the site. In this case, you must provide an existing subnet ID.

  • Enter a subnet address in the IPv4 Subnet field, or a subnet ID in the Existing Subnet ID field.

  • Confirm that the subnet is part of the CIDR block set in the previous step.

  • Click Apply.

  • From the Allowed VIP Port Configuration menu, configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site. See the following options:

    • Disable Allowed VIP Port: Ports 80 and 443 will not be allowed.

    • Allow HTTP Port: Allows only port 80.

    • Allow HTTPS Port: Allows only port 443.

    • Allow HTTP & HTTPS Port: Allows only ports 80 and 443. This is the default option.

    • Ports Allowed on Public: Allows specifying custom ports or port ranges. Enter a port or port range in the Port Ranges field.

  • In the Advanced Options section, enable the Show Advanced Fields option.

  • From the Performance Mode menu, select an option:

    • L7 Enhanced: This option optimizes the site for Layer 7 traffic processing.

    • L3 Mode Enhanced Performance: This option optimizes the site for Layer 3 traffic processing. Choose this option only if the site is used for L3 connectivity and not for any L7 features. Select whether to use this feature with or without jumbo frames.

Note: The L3 Mode Enhanced Performance feature works on CE sites with a minimum of 5 cores and a minimum of 3 GB memory.

Jumbo frames (Ethernet frames with a larger payload than the Ethernet standard maximum transmission unit of 1,500 bytes) are supported for L3 Mode Enhanced Performance.

If L3 Mode Enhanced Performance is not enabled on all CE sites in a Site Mesh Group, the MTU configured on the site-to-site tunnel interfaces will not be consistent. Therefore, it is recommended to enable L3-focused performance mode on all sites participating in a Site Mesh Group.

  • Click Apply.
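The requirement that each node subnet fall within the VPC CIDR block can be verified with the standard library's ipaddress module, for example:

```python
# Sketch: confirm a node's subnet lies within the VPC primary CIDR
# entered in the previous step, using Python's ipaddress module.
import ipaddress

def subnet_in_vpc(subnet_cidr: str, vpc_cidr: str) -> bool:
    """True if subnet_cidr is contained in (or equal to) vpc_cidr."""
    return ipaddress.ip_network(subnet_cidr).subnet_of(ipaddress.ip_network(vpc_cidr))

print(subnet_in_vpc("10.0.1.0/24", "10.0.0.0/16"))     # True
print(subnet_in_vpc("192.168.1.0/24", "10.0.0.0/16"))  # False
```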
Ingress/Egress Gateway (two interfaces)

For the Ingress/Egress Gateway (Two Interface) option:

  • Click Configure.

  • Click Add Item.

Note: Either a single master node site or a multi-node site with three (3) master nodes is supported. Therefore, if you are adding more than one node, ensure that there are three (3) master nodes for your site. Use Add Item to add more master nodes.

  • From the AWS AZ Name menu, select an option that matches the configured AWS Region.

  • From the Workload Subnet menu, select an option:

    • New Subnet: Enter a subnet in the IPv4 Subnet field.

    • Existing Subnet ID: Enter a subnet in the Existing Subnet ID field.

Note: Workload subnet is the network where your application workloads are hosted. For successful routing toward applications running in workload subnet, an inside static route to the workload subnet CIDR needs to be added on the respective site object.

  • From the Subnet for Outside Interface menu, select an option:

    • New Subnet: Enter a subnet in the IPv4 Subnet field.

    • Existing Subnet ID: Enter a subnet in the Existing Subnet ID field.

  • From the Subnet for Inside Interface menu, specify the subnet for the node on the inside interface. You can select Autogenerate Subnet for automatic assignment or Specify Subnet for a custom IPv4 subnet.

Note: New subnet creation is not supported for a brownfield deployment with an existing VPC selected for the site. In this case, you must provide an existing subnet ID.

  • Click Apply.

  • In the Site Network Firewall section:

    • Optionally, add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies from the Manage Firewall Policy menu. Select an existing firewall policy, select Add Item to create and apply a new policy, or select Configure for the enhanced version.

Note: See the Create Firewall Policy guide for more information.

  • From the Manage Forward Proxy menu, select an option to use the site as a forward proxy for outgoing requests. This configuration allows you to filter Internet-bound outgoing traffic from the site:

    • Disable Forward Proxy if you want clients on-site to directly connect to services on the Internet. All outbound traffic is allowed.

    • Enable Forward Proxy with Allow All Policy if you want all outbound traffic to be allowed and need the CE to proxy the outbound connections and SNAT over the SLO IP.

    • Enable Forward Proxy and Manage Policies if you want the CE to proxy traffic for selected TLS domains or HTTP URLs: Select an existing forward proxy policy or select Add Item to create and apply a forward proxy policy.

Note: See the Forward Proxy Policies guide for more information.

Figure: Network Firewall Configuration for Node
  • Enable Show Advanced Fields in the Advanced Options section.

  • Select Connect Global Networks from the Select Global Networks to Connect menu, and perform the following:

    • Click Add Item.

    • From the Select Network Connection Type menu, select an option:

      • Select Direct Site Local Inside to a Global Network to allow subnets on the site’s VPC and subnets on other sites on the same global network to be routable to each other without NAT.

      • Select Direct Site Local Outside to a Global Network to allow subnets on the site’s VPC to route to subnets on other sites without NAT, while preventing other sites from having a route to any of the current site’s subnets other than the outside subnet.

    • From the Global Virtual Network menu, select an existing global network or create a new network using Add Item. See the Virtual Networks guide for more information.

    • Click Apply.

  • From the Site Mesh Group Connection Type menu, select an option:

    • Select Site Mesh Group Connection via Public IP if other sites in SMG are accessible only over the Internet.

    • Select Site Mesh Group Connection via Private IP if other sites in SMG are accessible over private connectivity.

  • From the Select DC Cluster Group menu, select an option to set the site in a DC cluster group:

    • Not a Member of DC Cluster Group: Default option.

    • Member of DC Cluster Group via Outside Network: Select this option if other sites are reachable via SLO interface.

    • Member of DC Cluster Group via Inside Network: Select this option if other sites are reachable via SLI interface.

Note: For more information, see the Configure DC Cluster Group guide.

  • From the Manage Static Routes for Inside Network menu, select Manage Static Routes. Click Add Item in the List of Static Routes subsection. Perform one of the following steps:

    • Select Simple Static Route and then enter a static route in the Simple Static Route field. Specify the destination in a.b.c.d/m format. The route is always added on SLI, and the ARP for the destination must resolve from the CE’s SLI.

    • Select Custom Static Route and then click Configure. Perform the following steps:

      • In the Subnets section, click Add Item. Select IPv4 or IPv6 option from the Version menu. Enter a prefix and a prefix length for the subnet. Click Apply. You can use the Add Item option to set more subnets.

      • In the Nexthop section, select a next-hop type from the Type menu. Select IPv4 or IPv6 from the Version menu in the Address subsection. Enter an IP address. From the Network Interface menu, select an option.

      • From the Static Route Labels field, select supported labels using Add Label. You can select more than one from this list.

      • From the Attributes menu, select supported attributes. You can select more than one from this list.

      • Click Apply to add the custom route.

      • Click Apply.

  • Select Manage Static Routes from the Manage Static Routes for Outside Network menu, and click Add Item. Follow the same procedure as for managing the static routes for the inside network.

  • From the Allowed VIP Port Configuration for Outside Network menu, configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site. See the following options:

    • Disable Allowed VIP Port: Ports 80 and 443 will not be allowed.

    • Allow HTTP Port: Allows only port 80.

    • Allow HTTPS Port: Allows only port 443.

    • Allow HTTP & HTTPS Port: Allows only ports 80 and 443. This is the default option.

    • Ports Allowed on Public: Allows specifying custom ports or port ranges. Enter a port or port range in the Port Ranges field.

  • From the Allowed VIP Port Configuration for Inside Network menu, configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site. See the following options:

    • Disable Allowed VIP Port: Ports 80 and 443 will not be allowed.

    • Allow HTTP Port: Allows only port 80.

    • Allow HTTPS Port: Allows only port 443.

    • Allow HTTP & HTTPS Port: Allows only ports 80 and 443. This is the default option.

    • Ports Allowed on Public: Allows specifying custom ports or port ranges. Enter a port or port range in the Port Ranges field.

  • From the Performance Mode menu, select an option:

    • L7 Enhanced: This option optimizes the site for Layer 7 traffic processing.

    • L3 Mode Enhanced Performance: This option optimizes the site for Layer 3 traffic processing. Choose this option only if the site is used for L3 connectivity and not for any L7 features. Select whether to use this feature with or without jumbo frames.

Note: The L3 Mode Enhanced Performance feature works on CE sites with a minimum of 5 cores and a minimum of 3 GB memory.

Jumbo frames (Ethernet frames with a larger payload than the Ethernet standard maximum transmission unit of 1,500 bytes) are supported for L3 Mode Enhanced Performance.

If L3 Mode Enhanced Performance is not enabled on all CE sites in a Site Mesh Group, the MTU configured on the site-to-site tunnel interfaces will not be consistent. Therefore, it is recommended to enable L3-focused performance mode on all sites participating in a Site Mesh Group.

  • Click Apply.
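The Simple Static Route field above expects destinations in a.b.c.d/m format. Here is a small sketch of validating such an entry before submitting it; the rejection of host bits set past the mask is an assumption about the wizard's validation, not confirmed behavior:

```python
# Sketch: validate a Simple Static Route destination in the a.b.c.d/m
# format the wizard expects. Rejecting host bits past the mask
# (e.g. 10.20.0.5/16) is an assumption, not confirmed wizard behavior.
import ipaddress

def valid_static_route(dest: str) -> bool:
    try:
        ipaddress.IPv4Network(dest)  # strict=True rejects e.g. 10.20.0.5/16
        return "/" in dest           # require an explicit /m prefix length
    except ValueError:
        return False

print(valid_static_route("10.20.0.0/16"))  # True
print(valid_static_route("10.20.0.5/16"))  # False (host bits set)
print(valid_static_route("10.20.0.0"))     # False (missing /m)
```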
App Stack Cluster (one interface)

For the App Stack Cluster (One Interface) option:

  • Click Configure.

  • In the App Stack Cluster (One Interface) Nodes in AZ section, click Add Item. Perform the following:

Note: Either a single master node site or a multi-node site with three (3) master nodes is supported. Therefore, if you are adding more than one node, ensure that there are three (3) master nodes for your site. Use Add Item to add more master nodes.

  • From the AWS AZ Name menu, select an option that matches the configured AWS Region.

  • Select New Subnet or Existing Subnet ID from the Subnet for local Interface menu.

Note: New subnet creation is not supported for a brownfield deployment with an existing VPC selected for the site. In this case, you must provide an existing subnet ID.

  • Enter a subnet address in IPv4 Subnet or subnet ID in Existing Subnet ID.

  • Click Apply.

  • In the Site Network Firewall section:

    • Optionally, add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies from the Manage Firewall Policy menu. Select an existing firewall policy, select Add Item to create and apply a new policy, or select Configure for the enhanced version.

Note: See the Create Firewall Policy guide for more information.

  • Optionally, select Enable Forward Proxy with Allow All Policy or Enable Forward Proxy and Manage Policies from the Manage Forward Proxy menu. For the latter option, select an existing forward proxy policy, or select Add Item to create and apply a forward proxy policy.

Note: See the Forward Proxy Policies guide for more information.

  • In the Storage Configuration section, from the Select Configuration for Storage Classes menu, select Add Custom Storage Class.

  • Click Add Item.

  • In the Storage Class Name field, enter a name for the storage class as it will appear in Kubernetes.

  • Optionally, enable the Default Storage Class option to make this new storage class the default class for all clusters.

Note: By default, a site deployed in AWS supports Amazon Elastic Block Store (EBS).

  • Click Apply.

  • In the Advanced Options section, select Connect Global Networks from the Select Global Networks to Connect menu, and perform the following:

    • Click Add Item.

    • From the Select Network Connection Type menu, select an option:

      • Select Direct Site Local Inside to a Global Network to allow subnets on the site’s VPC and subnets on other sites on the same global network to be routable to each other without NAT.

      • Select Direct Site Local Outside to a Global Network to allow subnets on the site’s VPC to route to subnets on other sites without NAT, while preventing other sites from having a route to any of the current site’s subnets other than the outside subnet.

    • From the Global Virtual Network menu, select an existing global network or create a new network using Add Item. See the Virtual Networks guide for more information.

    • Click Apply.

    • To create a new global network, click Add Item from the Global Virtual Network menu:

      • Complete the form information.

      • Click Continue.

      • Click Apply.

    • From the Manage Static Routes for Site Local Network menu, select Manage Static routes. Click Add Item and perform one of the following steps:

      • From the Static Route Config Mode menu, select Simple Static Route. Enter a static route in Simple Static Route field. Specify the destination in a.b.c.d/m format. The route is always added on SLI, and the ARP for the destination must resolve from the CE’s SLI.

      • From the Static Route Config Mode menu, select Custom Static Route. Click Configure. Perform the following steps:

        • In Subnets section, click Add Item. Select IPv4 Subnet or IPv6 Subnet from the Version menu.

        • Enter a prefix and prefix length for your subnet.

        • Click Apply.

        • Use the Add Item option to set more subnets.

      • In Nexthop section, select a next-hop type from the Type menu.

      • Select IPv4 Address or IPv6 Address from the Version menu.

      • Enter an IP address.

      • From the Network Interface menu, select a network interface or select Add Item to create and apply a new network interface.

      • From the Static Route Labels field, select supported labels using Add Label. You can select more than one from this list.

      • From the Attributes menu, select supported attributes. You can select more than one from this list.

      • Click Apply to add the custom route.

      • Click Apply.

  • From the Site Mesh Group Connection Type menu, select an option:

    • Select Site Mesh Group Connection via Public IP if other sites in SMG are accessible only over the Internet.

    • Select Site Mesh Group Connection via Private IP if other sites in SMG are accessible over private connectivity.

  • From the Select DC Cluster Group menu, select an option to set the site in a DC cluster group:

    • Not a Member of DC Cluster Group: Default option.

    • Member of DC Cluster Group: Select the DC cluster group from the Member of DC Cluster Group via Outside Network menu to connect the site using an outside network.

Note: For more information, see the Configure DC Cluster Group guide.

  • From the Allowed VIP Port Configuration menu, configure the VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site. See the following options:

    • Disable Allowed VIP Port: Ports 80 and 443 will not be allowed.

    • Allow HTTP Port: Allows only port 80.

    • Allow HTTPS Port: Allows only port 443.

    • Allow HTTP & HTTPS Port: Allows only ports 80 and 443. This is the default option.

    • Ports Allowed on Public: Allows specifying custom ports or port ranges. Enter a port or port range in the Port Ranges field.

  • From the Site Local K8s API access menu, select an option for API access:

    • Select Disable Site Local K8s API access if you do not want to access the site’s K8s API directly and instead use the site in virtual Kubernetes (vk8s), accessing it via the vk8s API only.

    • Select Enable Site Local K8s API access if you want to configure the site for managed K8s and want to access the site’s API endpoint directly.

    • From the Enable Site Local K8s API access menu, select an existing managed K8s cluster name or create a new one by clicking on Add Item. For instructions on K8s cluster creation, see Create K8s Cluster.

  • Click Apply.

Note: The Distributed Cloud Platform supports both mutating and validating webhooks for managed K8s. Webhook support can be enabled in the K8s configuration (Manage > Manage K8s > K8s Clusters). For more information, see Create K8s Cluster in the Advanced K8s cluster security settings section.
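The Ports Allowed on Public option mentioned above accepts custom ports or port ranges. The comma-and-dash syntax in this sketch is an assumption about the field's format, shown only to illustrate how such an entry expands into a port set:

```python
# Sketch: expand a custom "Ports Allowed on Public" entry into a port set.
# The "80,443,9000-9002" comma-and-dash syntax is an assumption about the
# field's format, used here for illustration only.
def expand_ports(spec: str) -> set[int]:
    ports: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(p) for p in part.split("-"))
            ports.update(range(lo, hi + 1))  # inclusive range
        else:
            ports.add(int(part))
    return ports

print(sorted(expand_ports("80,443,9000-9002")))  # [80, 443, 9000, 9001, 9002]
```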

Step 5: Set the Internet VIP choice.
  • Use the Advertise VIPs to Internet on Site drop-down menu to create a virtual IP address (VIP). See the following options:

    • Disable VIP Advertisement to Internet on Site is selected by default. It disables public VIP creation directly on the site. Instead, the default behavior is to use REs to publish to the Internet.

    • Select Enable VIP Advertisement to Internet on Site if you want to enable creation of public VIP on the CE site for a load balancer.

Figure: Internet VIP Configuration

Note: You must enable Internet VIP on the AWS cloud site if you want clients to access the load balancer VIP directly from the Internet. Distributed Cloud Console will orchestrate an AWS Internet-facing NLB, which distributes traffic equally across all CE nodes on the site.

You will also need to create an HTTP load balancer with a custom VIP advertisement. Use Site or Virtual Site advertising with Site Network set to either Outside Network with internet VIP or Inside and Outside Network with internet VIP. For more information, see HTTP Load Balancer.

Step 6: Set data egress gateway and security group.
  • From the Cloud Egress Gateway Selection menu, select an option to choose the type of egress gateway on the site’s VPC:

    • Select this option to route site traffic through an Internet Gateway: Default option. Creates an IGW for the site's VPC.

    • Select this option to route site traffic through a Network Address Translation (NAT) Gateway: This option uses an existing NAT gateway for egress. Provide the NAT gateway's ID in the Existing NAT Gateway ID field.

    • Select this option to route site traffic through a Virtual Private Gateway: This option routes egress traffic through a Virtual Private Gateway (VPN). Provide the gateway's ID in the Existing Virtual Private Gateway ID field.

  • From the Security Group menu, select the security group option to attach to the SLO/SLI network interfaces. Choose from the following:

    • Select this option to create and attach F5XC default security group: This option allows automated security group creation for the SLI and SLO interfaces. The auto-created security group allows all inbound and outbound traffic; security is enforced on the CE's data path.

    • Select this option to specify custom security groups for slo and sli interfaces: This option lets you use existing security groups when deploying the site in an existing VPC. You can define security rules for the site in the cloud in addition to the security enforced on the CE's data path. Check the prerequisites for the list of ports and protocols the CE requires access to. You must provide the Outside Security Group ID and Inside Security Group ID.

Step 7: Set the site node parameters.
  • In the Site Node Parameters section, enable the Show Advanced Fields option.

  • From the AWS Instance Type for Node menu, select the instance type according to your CPU and memory requirements. See Common Values.

  • In the Public SSH key box, enter the public key used for SSH purposes.

  • Optionally, configure the Cloud Disk Size (check the prerequisites for minimum size requirements).

  • Optionally, add a geographic address and enter the latitude and longitude values. This information is auto-populated based on the AWS region selected previously, but you can override it. The coordinates allow the site to appear at the proper location on the site map on the dashboard.

Figure: Site Node Parameters
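The Public SSH key field expects an OpenSSH-format public key, the single-line `<type> <base64-blob> [comment]` output of `ssh-keygen`. As an illustrative sanity check of that shape (a sketch, not how Console validates the field):

```python
import base64
import struct

def looks_like_openssh_pubkey(key: str) -> bool:
    """Loose shape check for a single-line OpenSSH public key.

    Illustrative only: checks format, not cryptographic validity.
    """
    fields = key.strip().split()
    if len(fields) < 2:
        return False
    key_type, blob = fields[0], fields[1]
    if key_type not in ("ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"):
        return False
    try:
        decoded = base64.b64decode(blob, validate=True)
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
    # The decoded blob begins with the key type as a length-prefixed string.
    return decoded[4:4 + len(key_type)] == key_type.encode()

# Hypothetical example key, assembled in OpenSSH wire format for illustration.
_t = b"ssh-ed25519"
_blob = struct.pack(">I", len(_t)) + _t + struct.pack(">I", 32) + b"\x00" * 32
example = "ssh-ed25519 " + base64.b64encode(_blob).decode() + " admin@example"
print(looks_like_openssh_pubkey(example))      # True
print(looks_like_openssh_pubkey("not a key"))  # False
```

A key generated with `ssh-keygen -t ed25519` (the `.pub` file contents) passes this check.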
Step 8: Configure the advanced options.
  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • From the Logs Streaming menu, select Enable Logs Streaming to configure the syslog server. Keep Disable Logs Streaming selected if streaming is not required.

  • From the F5XC Software Version menu, keep the default selection of Latest SW Version or select F5XC Software Version to specify an older version number.

  • From the Operating System Version menu, keep the default selection of Latest OS Version or select Operating System Version to specify an older version number.

  • For AWS Tags, use Add Label to link sites together using a label. A maximum of thirty (30) tags are supported per instance.

  • From the Desired Worker Nodes Selection menu, select an option:

    • For the Desired Worker Nodes Per AZ option, enter the number of worker nodes. This number of worker nodes is created in each availability zone in which you created nodes. For example, if you configure three nodes across three availability zones and set Desired Worker Nodes Per AZ to 3, then three worker nodes are created per availability zone, for a total of nine worker nodes for this AWS VPC site.

    • For the Total Number of Worker Nodes for a Site option, enter the total number of worker nodes; automation places the nodes evenly across the availability zones.

    • No Worker Nodes: Default option. No worker nodes are selected.

  • To enable the offline survivability feature for your site:

    • From the Offline Survivability Mode menu, select Enable Offline Survivability Mode if the network connection is expected to have intermittent issues causing the site to be isolated from the RE and GC. This mode allows the site to remain functional for 7 days with loss of connectivity. This action will restart all pods for your site. For more information, see the Manage Site Offline Survivability guide.

Important: The Enable Offline Survivability Mode option must be enabled if the site needs to be a part of a Site Mesh Group, with both control plane and data plane mesh enabled.

Figure: Advanced Configuration
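The worker-node arithmetic above can be sketched as follows. The even-spread helper is an illustrative assumption of how a site total maps onto availability zones; Console performs the actual placement:

```python
def total_workers_per_az(workers_per_az: int, num_azs: int) -> int:
    """Desired Worker Nodes Per AZ: the count applies to every AZ."""
    return workers_per_az * num_azs

def spread_total_across_azs(total: int, num_azs: int) -> list[int]:
    """Total Number of Worker Nodes for a Site: an even spread across AZs.

    Illustrative distribution only; Console handles actual placement.
    """
    base, extra = divmod(total, num_azs)
    return [base + (1 if i < extra else 0) for i in range(num_azs)]

print(total_workers_per_az(3, 3))     # 9, matching the example above
print(spread_total_across_azs(7, 3))  # [3, 2, 2]
```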
Step 8.1: Configure blocked services from site.
  • From the Services to be blocked on site menu, select the services you want blocked or allowed on the CE nodes. This configuration only blocks access to services running on the CE nodes themselves, not to services for which the CE acts as a load balancer or default gateway. See the following options:

    • Block DNS, SSH & WebUI services on Site: Default option.

    • Allow access to DNS, SSH & WebUI services on Site: Select this option to allow incoming traffic to these services.

    • Custom Blocked Services Configuration: Select this option and click Add Item to block specific services. Select the service to block (DNS or SSH) on the SLO or SLI network from the Blocked Services Value Type menu.

Step 9: Optionally, configure private link or Direct Connect.

You can configure these options under the Advanced Configuration section.

  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • From the Private Connectivity To Site drop-down menu, select an option:

    • Disable Private Connectivity: Default option that allows a site to connect to other sites and the RE over the public Internet.

    • Enable Private Connectivity: Enables a private link to your cloud site using CloudLink. For more information, see the CloudLink guide.

    • Enable Direct Connect(Legacy): Click View Configuration to configure AWS Direct Connect for your site.

Direct Connect (Legacy)
  • To view and change the default settings:

    • Click View Configuration.

    • From the AWS Direct Connect VIF Configuration drop-down menu, select an option for the Virtual Interface (VIF):

      • Hosted VIF mode: With this mode, F5 provisions an AWS Direct Connect Gateway and a Virtual Private Gateway. The hosted VIF you provide is automatically associated, and BGP peering is set up.

      • Standard VIF mode: With this mode, F5 provisions an AWS Direct Connect Gateway, a Virtual Private Gateway, and a user-associated VIF, and sets up BGP peering.

    • For the Hosted VIF mode option:

      • Click Add Item.

      • Enter a VIF ID.

      • Select the Region of the VIF.

    • Click Apply.

    • From the Site Registration & Connectivity to RE menu, select how tunnel traffic is carried between the site and the Regional Edge (RE). If you select the AWS option, provide the CloudLink ADN name.

    • From the ASN Configuration menu, select whether to assign a custom autonomous system number (ASN) or use the default option.

    • Click Apply.

Step 10: Complete the site object creation.
New VPC Site

Click Save and Exit to complete creating the site. The Status field for the site object displays Validation in progress. After validation, the field displays Validation Succeeded.

Existing VPC Site

If you used an existing VPC, Console will validate whether certain existing objects are available and valid. This provides current information to help troubleshoot and fix any potential issues without having to wait until the full site deployment process completes.

After you click Save and Exit, the validation process begins and is displayed as Validation in progress.

If the site deployment validation failed, a message with Validation Failed will be displayed. Click on the tooltip to display a popup message with the error.

If the site deployment validation succeeded, a message with Validation Succeeded will be displayed.

Note: The QUEUED state indicates a site status action that is in progress. The site remains in the QUEUED state until the backend service is ready to execute the Apply/Plan/Destroy command; the site status (under the Status column) updates in Console once execution begins. If the site stays in the QUEUED state for more than 15 minutes, the status times out and is set to PROVISION_TIMEOUT.
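The QUEUED behavior in the note above amounts to a bounded polling loop. The following is an illustrative sketch, not Distributed Cloud code; `get_status` is a hypothetical stand-in for however the site status is read:

```python
import time

QUEUE_TIMEOUT_SECONDS = 15 * 60  # QUEUED times out after 15 minutes

def wait_for_execution(get_status, poll_interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll a site-status callable until it leaves the QUEUED state.

    Returns the first non-QUEUED status, or 'PROVISION_TIMEOUT' if the
    site is still QUEUED when the 15-minute window elapses.
    """
    deadline = clock() + QUEUE_TIMEOUT_SECONDS
    while clock() < deadline:
        status = get_status()
        if status != "QUEUED":
            return status
        sleep(poll_interval)
    return "PROVISION_TIMEOUT"

# Simulated run: the backend picks up the job on the third poll.
responses = iter(["QUEUED", "QUEUED", "APPLYING"])
print(wait_for_execution(lambda: next(responses), sleep=lambda _: None))  # APPLYING
```

The `clock` and `sleep` parameters are injected only so the sketch can be exercised without waiting; in practice you would simply watch the Status column in Console.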


Deploy Site

Creating the AWS VPC site object in Console generates the Terraform parameters.

Step 1: Deploy site.
  • Navigate to the AWS VPC object by clicking Manage > Site Management > AWS VPC Sites.

  • Find your AWS VPC object and click Apply under the Status column. The Status column for the site object changes first to Queued and then to Applying.

Figure: AWS VPC Object Apply

Note: Optionally, you can perform Terraform plan activity before the deployment. Find your AWS VPC site object and select ... > Plan (Optional) to run a Terraform plan, which creates the Terraform execution plan.

  • Wait for the status to change to Applied.
Figure: AWS VPC Object Applied
  • To check the status for the apply action, click ... > Terraform Parameters for site object, and select the Apply Status tab.

  • To debug or to run any terminal commands, use SSH to log in to the node with username cloud-user and your private key.

Note: For ingress/App Stack sites: When you update worker nodes for a site object, scaling happens automatically. For ingress/egress sites: When you update worker nodes for a site object, the Terraform Apply button is enabled. Click Apply.

Step 2: Confirm site deployed and online.
  • Navigate to Multi-Cloud Network Connect > Overview > Sites.

  • Verify status is Online. It takes a few minutes for the site to deploy and status to change to Online.

Figure: Site Status Online

Delete VPC Site

You have two options when deleting a site in Console: delete the site entirely, including all its resources and configuration, or destroy the site's resources while keeping its configuration so it can be re-applied at a later time.

Note: Deleting the VPC object deletes the sites, nodes, the VPC, and other objects created in the cloud for the site. This action also removes the site object from Console and cannot be undone.

Destroying a site deployed on an existing VPC leaves the AWS subnets used for the Site Local Outside, Site Local Inside, and workload subnets without any explicit route associations.

Delete Site Completely
  • Navigate to Manage > Site Management > AWS VPC Sites.

  • Locate the site object.

  • Select ... > Delete.

  • Click Delete in the pop-up confirmation window. If the delete operation does not remove the object and returns an error, check the error in the status, fix it, and retry the delete operation. If the problem persists, contact technical support. You can check the status using the ... > Terraform Parameters > Apply status option.

Delete Site but Maintain Configuration
  • Navigate to Manage > Site Management > AWS VPC Sites.

  • Locate the site object.

  • Click Destroy for your site. Alternatively, click ... > Destroy.

  • In the pop-up window, type DELETE.

  • Click Destroy to confirm the action. On success, the site status shows Destroyed and the Apply button appears in the row for your site; you can use it to create the site again at a later time, if required. If the site object is no longer required, remove it from Console by clicking Delete in the Actions menu for the site.


Deploy Using Terraform

You can deploy an F5 Distributed Cloud CE site using Terraform. See Deploy AWS VPC Site with Terraform guide for detailed steps.


Next Steps

After you have successfully deployed your site, you can choose to upgrade it or create a site mesh group (SMG).

  • To update your site to the latest OS version, click Upgrade under the Software Version tile on the dashboard.

Note: Site upgrades may take up to 10 minutes per site node. Once the site upgrade completes, you must apply the Terraform parameters to the site via the Actions menu on the cloud site management page.

Figure: Site OS Upgrade

Concepts


API References