Create AWS Site

Objective

This guide provides instructions on how to create and deploy a Virtual Private Cloud (VPC) site to Amazon Web Services (AWS).

You can deploy a VPC site if the workload you want to secure, connect, or load balance is present in, or will be deployed in, the same VPC. You may also choose to deploy a VPC site if you want to use the F5® managed Kubernetes offering by creating an App Stack site.

If you want to secure, connect, or load balance workloads across multiple VPCs in an AWS region from a single service VPC, see Create AWS Site with TGW site.

You can deploy an AWS VPC site using one of the following methods:


Overview

F5® Distributed Cloud supports AWS VPC site creation in both greenfield and brownfield environments. In a greenfield environment, it can automate the creation of a VPC, subnets, route tables, security groups, and other networking features. In a brownfield environment, you can choose existing resources from your VPC or choose to create new resources using the site creation wizard.

Clustering of Customer Edge Nodes

The AWS VPC site can be deployed as a single node or as a three-node site. These nodes host the control processes, data plane processes, and the Internet Protocol Security (IPsec) connections to the Regional Edges (REs). For production deployments, a three-node site is recommended as it provides high availability. Additional worker nodes can also be deployed after the site is created to add capacity for L7 features, such as load balancing, web app and API protection (WAAP), and more.

Note: Worker nodes are only supported for three-node sites.

AWS VPC Site Deployment Modes

A site can be deployed in three different modes, as explained below. For a generic network topology of a site, see Network Topology of a Site.

Ingress Gateway (One Interface): This mode can discover services and endpoints reachable from its subnet and deliver them to any other site in the customer tenant. The site can provide TCP or HTTP load balancing and L7 security services for the apps it discovers on the VPC. This mode can be used for application delivery or IP address overlap use cases. Because this mode functions only as an ingress gateway, it cannot provide L3 connectivity to networks on other sites in the tenant. This mode can also provide Secure Kubernetes Gateway (SKG) functionality to Kubernetes clusters deployed in the same VPC.

The CE node is attached to a single subnet, called Site Local Outside (SLO), per Availability Zone (AZ) in the VPC. For a three-node cluster, the nodes are distributed across three AZs. The VPC route table associated with the SLO subnet must point the default route to the Internet Gateway (IGW). This ensures that the CE can reach the public Internet and establish the IPsec connections to the REs. An Internet Gateway is used by default, but NAT Gateway and Virtual Private Gateway are also supported egress gateway options.

The workloads can be deployed on a private subnet with the default route pointing to the NAT Gateway, or on a public subnet with the default route pointing to the Internet Gateway. These subnets and routes are not auto-created.
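For a greenfield deployment, the SLO routing described above can be sketched in Terraform. This is an illustrative sketch only, not what the site automation literally generates; the resource names (`aws_vpc.site_vpc`, `aws_subnet.slo`) are placeholders.

```hcl
# Hypothetical sketch of the SLO route table for a one-interface site.
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.site_vpc.id
}

resource "aws_route_table" "slo" {
  vpc_id = aws_vpc.site_vpc.id

  # Default route to the IGW so the CE can reach the public Internet
  # and establish the IPsec tunnels to the REs.
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "slo" {
  subnet_id      = aws_subnet.slo.id
  route_table_id = aws_route_table.slo.id
}
```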

Ingress/Egress Gateway (Two Interfaces): In this mode, the CE functions as the default gateway for the VPC. It can provide egress connectivity to the public Internet and, if a global network is configured, L3 connectivity from the instances and subnets in the VPC to subnets on other sites. It can also provide load balancing, Secure Kubernetes Gateway (SKG), and other L7 security features, like the ingress gateway mode.

In this mode, the CE node is attached to at least two interfaces on different subnets per Availability Zone (AZ) in the VPC. One subnet is labeled Site Local Outside (SLO), and the other Site Local Inside (SLI). The automation also creates a workload subnet, which can be used to deploy application instances. The VPC SLO route table points to the IGW as the default gateway (NAT Gateway and Virtual Private Gateway are also supported), providing Internet connectivity to the CE nodes. The SLI and workload route tables in the VPC point the default route to the SLI interface of the CE in each AZ.
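The workload-subnet routing in two-interface mode can be sketched as follows. Again, this is a hedged illustration, not the generated automation; `aws_network_interface.ce_sli` stands in for the CE's SLI interface in a given AZ.

```hcl
# Hypothetical sketch: workload route table whose default route points
# at the CE's SLI network interface, making the CE the egress gateway.
resource "aws_route_table" "workload" {
  vpc_id = aws_vpc.site_vpc.id

  route {
    cidr_block           = "0.0.0.0/0"
    network_interface_id = aws_network_interface.ce_sli.id
  }
}

resource "aws_route_table_association" "workload" {
  subnet_id      = aws_subnet.workload.id
  route_table_id = aws_route_table.workload.id
}
```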

F5® Distributed Cloud App Stack Cluster (App Stack) (One Interface): Use this deployment mode if you want to use the site as an F5-managed Kubernetes site to host your applications. This mode also provides a Secure Kubernetes Gateway (SKG) for the applications deployed on it by discovering their services and providing load balancing and L7 security features like WAAP, Bot Detection, DDoS protection, and more.

The deployment and configuration of this site mode are identical to Ingress Gateway (One Interface). The difference is that the certified hardware type is aws-byol-voltstack-combo. This configures and deploys an instance type that allows Kubernetes pods or VMs to be deployed on the site using the Kubernetes API (via managed K8s or virtual K8s endpoints).

Private Connectivity

Private connectivity enables you to privately connect your on-premises data centers to the VPC in which the Distributed Cloud Services sites are hosted, so that traffic does not flow over the public Internet. You can also configure connectivity to the REs and site registration to go over the private connection.

Note: The private connectivity option is only supported for the Ingress/Egress Gateway (Two Interfaces) option for a VPC site.

There are two private connectivity options available: AWS Direct Connect (Legacy) and CloudLink.

AWS Direct Connect (Legacy): Distributed Cloud Services can orchestrate AWS Direct Connect to a VPC site. The automation orchestrates the creation of the Virtual Private Gateway and Direct Connect Gateway (DCGW) in addition to the regular site creation. The prerequisite is that the Direct Connect connection is created and managed by the user.

The on-premises data center routes are advertised by the on-premises routers connected to AWS routers via Direct Connect. These routes are propagated to the VGW by the Direct Connect Gateway (DCGW). The VGW then installs these routes in the VPC route table, from where they are learned on the site's inside network.
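Conceptually, the gateway plumbing the orchestration creates resembles the following Terraform sketch. The site automation provisions these resources for you; the names and ASN shown here are hypothetical.

```hcl
# Illustrative sketch of the Direct Connect gateway topology.
resource "aws_vpn_gateway" "vgw" {
  vpc_id = aws_vpc.site_vpc.id
}

resource "aws_dx_gateway" "dcgw" {
  name            = "site-dcgw"
  amazon_side_asn = "64512"  # placeholder private ASN
}

# Associating the VGW with the DCGW lets on-premises routes propagate
# into the VPC route tables.
resource "aws_dx_gateway_association" "assoc" {
  dx_gateway_id         = aws_dx_gateway.dcgw.id
  associated_gateway_id = aws_vpn_gateway.vgw.id
}
```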

There are two supported modes of Direct Connect private Virtual Interface (VIF):

  • Standard VIF: In this mode, the whole Direct Connect connection is used for the site. After site creation, you must associate the VIF on the DCGW using AWS Console and configure BGP peering.

  • Hosted VIF: In this mode, site orchestration accepts the configured list of VIFs delegated from the Direct Connect connection owner account to the hosted VIF acceptor account. You can set a list of VIF IDs to be accepted. The site orchestration automates the association of the VIFs to the DCGW.

CloudLink: CloudLink allows Distributed Cloud Services to orchestrate an already provisioned direct connection, establish a multi-cloud networking fabric, and then connect, deliver, secure, and operate networks and apps across hybrid environments. For more information, see CloudLink.


Site Status Descriptions

PLANNING: Site resources are being planned for creation.

PLAN_INIT_ERRORED: Planning of site resources failed at the init stage.

PLAN_ERRORED: Planning of site resources failed with errors.

PLAN_QUEUED: Planning of site resources is queued.

APPLIED: Site resources are created, and the site is waiting to come online.

APPLY_ERRORED: Creation of site resources failed with errors.

APPLY_INIT_ERRORED: Creation of site resources failed with errors at the initial stage.

APPLYING: Site creation is in progress.

APPLY_PLANNING: Site resources are being planned.

APPLY_PLAN_ERRORED: Planning of site resources failed with errors.

APPLY_QUEUED: Creation of site resources is queued.

DESTROYED: Site resources are destroyed, and the site is OFFLINE.

DESTROY_ERRORED: Destruction of site resources failed with errors.

DESTROYING: Destruction of site resources is in progress.

DESTROY_QUEUED: Destruction of site resources is queued.

GENERATED: Site object created in the F5® Distributed Cloud database per the configuration.

TIMED_OUT: Creation or destruction of site resources failed with a timeout.

ERRORED: Creation or destruction of site resources failed with errors.

PROVISIONING: Site resources are created, and the site is waiting to come online.


Prerequisites

The following prerequisites apply:

General

  • A Distributed Cloud Services Account. If you do not have an account, see Create an Account.

  • An AWS Account. See Required Access Policies for permissions needed to deploy site. To create a cloud credentials object, see Cloud Credentials.

  • Resources required per node: Minimum 4 vCPUs, 14 GB RAM, and 80 GB disk.

  • Instance type with Intel x86 based processor. ARM and Mac instances are not supported. Recommended instance types are:

    • t3.xlarge (4 vCPU, 16 GB RAM)

    • t3.2xlarge (8 vCPU, 32 GB RAM)

    • m5.4xlarge (16 vCPU, 64 GB RAM)

  • Internet Control Message Protocol (ICMP) needs to be open between the CE nodes on the Site Local Outside (SLO) interfaces. This is required for intra-cluster communication checks.

Existing VPC

You will need the VPC ID, subnet IDs, and AZs to be used for the deployment.

The existing subnets selected for Site Local Outside, Site Local Inside, and Workload subnets must not have explicit association with any route tables. New route tables will be created and associated with these subnets. The deployment will fail if the subnets have existing custom route table associations.

Manually Created Site

The configurations below are created by Distributed Cloud Services automation. For sites created manually using Terraform, however, you must meet the following conditions for the site to deploy correctly:

  • Security group on the SLO interface must allow outgoing traffic to the internet.

  • UDP port 6080 needs to be open between all nodes of the site for the inter-node tunnels. See the Firewall or Proxy Reference for Network Cloud guide for the complete list of IPs, ports, and URLs that a CE needs to connect with.
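For a manually created site, the conditions above can be expressed as security group rules. The following Terraform sketch is a hedged example under assumed names (`aws_vpc.site_vpc` is a placeholder); adapt it to your own VPC and naming.

```hcl
# Hypothetical security group for CE SLO interfaces on a manually
# created site.
resource "aws_security_group" "ce_slo" {
  name   = "ce-slo-sg"
  vpc_id = aws_vpc.site_vpc.id

  # ICMP between CE nodes on the SLO interfaces (intra-cluster checks).
  ingress {
    from_port = -1
    to_port   = -1
    protocol  = "icmp"
    self      = true
  }

  # UDP 6080 between all nodes of the site for the inter-node tunnels.
  ingress {
    from_port = 6080
    to_port   = 6080
    protocol  = "udp"
    self      = true
  }

  # Allow outgoing traffic to the Internet (RE connectivity, registration).
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```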

Private Connectivity

  • Direct Connect connection.

  • VIF IDs if you are using hosted VIF mode.


Deploy Using Console

The following video shows the AWS VPC site object creation and deployment workflow using Console:

You can create and manage an AWS VPC site in Console by first creating the site object using the guided wizard and then deploying it using the automated method.

Create AWS VPC Site Object

The wizard to create the AWS VPC site object guides you through the steps for required configuration.

Step 1: Start site object creation.
  • Log into Console.

  • From the Console homepage, select Multi-Cloud Network Connect.

Figure
Figure: Console Homepage
  • Click Manage > Site Management > AWS VPC Sites.

  • Click Add AWS VPC Site.

  • Enter a Name, and add Labels and a Description as needed.
Figure
Figure: AWS Site Set Up
Step 2: Select cloud credentials.

Refer to the Cloud Credentials guide for more information. Ensure that the AWS credentials are applied with required access policies per the Policy Requirements document.

  • From the Cloud Credentials menu in the Site Type Selection section, select an existing AWS credentials object, or click Add Item to load form.
Figure
Figure: Deployment Configuration
  • To create new credentials:

    • Enter Name, Labels, and Description as needed.

    • From the Select Cloud Credential Type menu, select AWS Programmatic Access Credentials.

    • Enter AWS access ID in the Access Key ID field.

    • Click Configure in the Secret Access Key field.

    • From the Secret Type menu:

      • Blindfold Secret: Enter secret in the Type box.

      • Clear Secret: Enter secret in Clear Secret box in either Text or Base64 formats.

      • Click Apply.

    • Click Continue to add the new credentials.

Step 3: Configure AWS region and VPC parameters.
  • From the AWS Region drop-down menu, select a region.

  • From the VPC menu, select an option:

    • New VPC Parameters: The Autogenerate VPC Name option is selected by default.

    • Existing VPC ID: Enter existing VPC ID in Existing VPC ID box. If you are using an existing VPC, ensure that you enable the Enable DNS hostnames checkbox in AWS Management Console (under Edit VPC settings).

Note: If you are deploying a new AWS VPC site into an existing VPC, the deployment will fail if the AWS subnet has the hostname type set to the resource name. In AWS Management Console, ensure that the hostname is in ip-* format.

  • In the Primary IPv4 CIDR block field, enter the CIDR subnet with slash notation.
Figure
Figure: VPC and Node Type Configuration
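If your existing VPC is itself managed in Terraform, the Enable DNS hostnames requirement from the step above can be met in the VPC definition. This is a sketch under assumed names, not part of the site automation.

```hcl
# Hypothetical brownfield VPC definition; DNS hostnames must be enabled
# for the site deployment to succeed.
resource "aws_vpc" "existing" {
  cidr_block           = "10.0.0.0/16"  # placeholder CIDR
  enable_dns_support   = true
  enable_dns_hostnames = true           # required by the site deployment
}
```

Otherwise, enable the setting in the AWS Management Console as described in the step.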
Step 4: Set and configure VPC interface.
  • From the Select Ingress Gateway or Ingress/Egress Gateway menu, select an option:

    • Ingress Gateway (One Interface)

    • Ingress/Egress Gateway (Two Interface)

    • App Stack Cluster (One Interface)

Ingress Gateway (one interface)

For the Ingress Gateway (One Interface) option:

  • Click Configure.

  • Click Add Item.

Note: Either a single master node site or a multi-node site with three (3) master nodes is supported. Therefore, if you are adding more than one node, ensure that there are three (3) master nodes for your site. Use Add Item to add more master nodes.

  • From the AWS AZ Name menu, click See Suggestions to select an option that matches the configured AWS Region.

  • From the Subnet for local Interface menu, select New Subnet or Existing Subnet ID.

Note: New subnet creation is not supported for a brownfield deployment with an existing VPC selected for the site.

  • Enter subnet address in IPv4 Subnet, or subnet ID in Existing Subnet ID.

  • Confirm subnet is part of the CIDR block set in the previous step.

  • Click Apply.

  • From the Allowed VIP Port Configuration menu, configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site.

  • In the Advanced Options section, enable the Show Advanced Fields option.

  • From the Performance Mode menu, select an option:

    • L7 Enhanced: This option optimizes the site for Layer 7 traffic processing.

    • L3 Mode Enhanced Performance: This option optimizes the site for Layer 3 traffic processing. Only choose this option if the site is used for L3 connectivity and not any L7 features.

Note: The L3 Mode Enhanced Performance feature works on CE sites with a minimum of 5 cores and a minimum of 3 GB memory.

Note: Jumbo frames (Ethernet frames with a larger payload than the Ethernet standard maximum transmission unit of 1,500 bytes) are supported for L3 Mode Enhanced Performance.

  • Click Apply.
Ingress/Egress Gateway (two interfaces)

For the Ingress/Egress Gateway (Two Interface) option:

  • Click Configure.

  • Click Add Item.

Note: Either a single master node site or a multi-node site with three (3) master nodes is supported. Therefore, if you are adding more than one node, ensure that there are three (3) master nodes for your site. Use Add Item to add more master nodes.

  • From the AWS AZ Name menu, select an option that matches the configured AWS Region.

  • From the Workload Subnet menu, select an option:

    • New Subnet: Enter a subnet in the IPv4 Subnet field.

    • Existing Subnet ID: Enter a subnet in the Existing Subnet ID field.

Note: Workload subnet is the network where your application workloads are hosted. For successful routing toward applications running in workload subnet, an inside static route to the workload subnet CIDR needs to be added on the respective site object.

  • From the Subnet for Outside Interface menu, select an option:

    • New Subnet: Enter a subnet in the IPv4 Subnet field.

    • Existing Subnet ID: Enter a subnet in the Existing Subnet ID field.

  • From the Subnet for Inside Interface menu, specify the subnet for the node on the inside interface. You can select Autogenerate Subnet for automatic assignment or Specify Subnet for a custom IPv4 subnet.

Note: New subnet creation is not supported for a brownfield deployment with an existing VPC selected for the site.

  • Click Apply.

  • In the Site Network Firewall section:

    • Optionally, add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies from the Manage Firewall Policy menu. Then select an existing firewall policy, or select Add Item (or Configure, for the enhanced version) to create and apply a new policy.
  • From the Manage Forward Proxy menu, select an option:

    • Disable Forward Proxy

    • Enable Forward Proxy with Allow All Policy

    • Enable Forward Proxy and Manage Policies: Select an existing forward proxy policy, or select Add Item to create and apply a forward proxy policy. After creating the policy, click Apply.

Figure
Figure: Network Firewall Configuration for Node
  • Enable Show Advanced Fields in the Advanced Options section.

  • Select Connect Global Networks from the Select Global Networks to Connect menu, and perform the following:

    • Click Add Item.

    • From the Select Network Connection Type menu, select an option.

    • From the Global Virtual Network menu, select an option.

    • Click Apply.

  • From the Select DC Cluster Group menu, select an option to set the site in a DC cluster group:

    • Not a Member of DC Cluster Group: Default option.

    • Member of DC Cluster Group via Outside Network: Select the DC cluster group from the Member of DC Cluster Group via Outside Network menu to connect the site using an outside network.

    • Member of DC Cluster Group via Inside Network: Select the DC cluster group from the Member of DC Cluster Group via Inside Network menu to connect the site using an inside network.

Note: For more information, see the Configure DC Cluster Group guide.

  • From the Manage Static Routes for Inside Network menu, select Manage Static Routes. Click Add Item in the List of Static Routes subsection. Perform one of the following steps:

    • Select Simple Static Route and then enter a static route in the Simple Static Route field. Specify the destination in a.b.c.d/m format. The route is always added on SLI, and the ARP for the destination must resolve from the CE’s SLI.

    • Select Custom Static Route and then click Configure. Perform the following steps:

      • In the Subnets section, click Add Item. Select IPv4 or IPv6 option from the Version menu. Enter a prefix and a prefix length for the subnet. Click Apply. You can use the Add Item option to set more subnets.

      • In the Nexthop section, select a next-hop type from the Type menu. Select IPv4 or IPv6 from the Version menu in the Address subsection. Enter an IP address. From the Network Interface menu, select an option.

      • From the Static Route Labels field, select supported labels using Add Label. You can select more than one from this list.

      • From the Attributes menu, select supported attributes. You can select more than one from this list.

      • Click Apply to add the custom route.

      • Click Apply.

  • Select Manage Static Routes from the Manage Static Routes for Outside Network menu, and click Add Item. Follow the same procedure as for the inside network static routes.

  • From the Allowed VIP Port Configuration for Outside Network menu, configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site.

  • In the Allowed VIP Port Configuration for Inside Network menu, configure VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site.

  • From the Performance Mode menu, select an option:

    • L7 Enhanced: This option optimizes the site for Layer 7 traffic processing.

    • L3 Mode Enhanced Performance: This option optimizes the site for Layer 3 traffic processing. Only choose this option if the site is used for L3 connectivity and not any L7 features.

Note: The L3 Mode Enhanced Performance feature works on CE sites with a minimum of 5 cores and a minimum of 3 GB memory.

Note: Jumbo frames (Ethernet frames with a larger payload than the Ethernet standard maximum transmission unit of 1,500 bytes) are supported for L3 Mode Enhanced Performance.

  • Click Apply.
App Stack Cluster (one interface)

For the App Stack Cluster (One Interface) option:

  • Click Configure.

  • In the App Stack Cluster (One Interface) Nodes in AZ section, click Add Item. Perform the following:

Note: Either a single master node site or a multi-node site with three (3) master nodes is supported. Therefore, if you are adding more than one node, ensure that there are three (3) master nodes for your site. Use Add Item to add more master nodes.

  • From the AWS AZ Name menu, select an option that matches the configured AWS Region.

  • Select New Subnet or Existing Subnet ID from the Subnet for local Interface menu.

Note: New subnet creation is not supported for a brownfield deployment with an existing VPC selected for the site.

  • Enter a subnet address in IPv4 Subnet or subnet ID in Existing Subnet ID.

  • Click Apply.

  • In the Site Network Firewall section:

    • Optionally, add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies from the Manage Firewall Policy menu. Then select an existing firewall policy, or select Add Item (or Configure, for the enhanced version) to create and apply a new policy.

    • Optionally, select Enable Forward Proxy with Allow All Policy or Enable Forward Proxy and Manage Policies from the Manage Forward Proxy menu. For the latter option, select an existing forward proxy policy, or select Add Item to create and apply a forward proxy policy.

  • In the Storage Configuration section, from the Select Configuration for Storage Classes menu, select Add Custom Storage Class.

  • Click Add Item.

  • In the Storage Class Name field, enter a name for the storage class as it will appear in Kubernetes.

  • Optionally, enable the Default Storage Class option to make this new storage class the default class for all clusters.

Note: By default, a site deployed in AWS supports Amazon Elastic Block Store (EBS).

  • Click Apply.

  • In the Advanced Options section, select Connect Global Networks from the Select Global Networks to Connect menu, and perform the following:

    • Click Add Item.

    • From the Select Network Connection Type menu, select an option.

    • From the Global Virtual Network menu, select an option.

    • Click Apply.

    • To create a new global network, click Add Item from the Global Virtual Network menu:

      • Complete the form information.

      • Click Continue.

      • Click Apply.

    • From the Manage Static Routes for Site Local Network menu, select Manage Static routes. Click Add Item and perform one of the following steps:

      • From the Static Route Config Mode menu, select Simple Static Route. Enter a static route in Simple Static Route field. Specify the destination in a.b.c.d/m format. The route is always added on SLI, and the ARP for the destination must resolve from the CE’s SLI.

      • From the Static Route Config Mode menu, select Custom Static Route. Click Configure. Perform the following steps:

        • In Subnets section, click Add Item. Select IPv4 Subnet or IPv6 Subnet from the Version menu.

        • Enter a prefix and prefix length for your subnet.

        • Click Apply.

        • Use the Add Item option to set more subnets.

      • In Nexthop section, select a next-hop type from the Type menu.

      • Select IPv4 Address or IPv6 Address from the Version menu.

      • Enter an IP address.

      • From the Network Interface menu, select a network interface or select Add Item to create and apply a new network interface.

      • From the Static Route Labels field, select supported labels using Add Label. You can select more than one from this list.

      • From the Attributes menu, select supported attributes. You can select more than one from this list.

      • Click Apply to add the custom route.

      • Click Apply.

  • From the Select DC Cluster Group menu, select an option to set the site in a DC cluster group:

    • Not a Member of DC Cluster Group: Default option.

    • Member of DC Cluster Group: Select the DC cluster group from the Member of DC Cluster Group via Outside Network menu to connect the site using an outside network.

Note: For more information, see the Configure DC Cluster Group guide.

  • From the Allowed VIP Port Configuration menu, configure the VIP ports for the load balancer to distribute traffic among all nodes in a multi-node site.

  • From the Site Local K8s API access menu, select an option for API access. For instructions on K8s cluster creation, see Create K8s Cluster.

  • Click Apply.

Note: The Distributed Cloud Platform supports both mutating and validating webhooks for managed K8s. Webhook support can be enabled in the K8s configuration (Manage > Manage K8s > K8s Clusters). For more information, see Create K8s Cluster in the Advanced K8s cluster security settings section.

Step 5: Set the Internet VIP choice.
  • From the Advertise VIPs to Internet on Site drop-down menu, select Enable VIP Advertisement to Internet on Site to create an Internet VIP.
Figure
Figure: Internet VIP Configuration

Note: You must enable Internet VIP on the AWS cloud site if you want clients to access the Load Balancer VIP directly from the Internet. Distributed Cloud Console will orchestrate an AWS Internet-facing NLB, causing traffic to be equally distributed to all CE nodes on the site.

You will also need to create an HTTP load balancer with a custom VIP advertisement. Use Site or Virtual Site advertising with Site Network set to either Outside Network with internet VIP or Inside and Outside Network with internet VIP. For more information, see HTTP Load Balancer.

Step 6: Set data egress gateway and security group.
  • From the Cloud Egress Gateway Selection menu, select an option to route the site node's traffic to the public Internet.

  • From the Security Group menu, select the security group option to attach to the SLO/SLI network interfaces.

Note: The auto-created security group allows all traffic in both incoming and outgoing directions; security is enforced on the CE’s data path. The custom option uses existing security groups when deploying a site in an existing VPC. This enables you to define security rules for the site in the cloud in addition to the security enforcement on the CE’s data path.

Step 7: Set the site node parameters.
  • In the Site Node Parameters section, enable the Show Advanced Fields option.

  • From the AWS Instance Type for Node menu, select the instance type using See Common Values.

  • In the Public SSH key box, enter the public key used for SSH purposes.

  • Optionally, add a geographic address and enter the latitude and longitude values. This information is autopopulated based on the previously selected AWS region, but you can override it. The coordinates place the site at the proper location on the site map on the dashboard.

Figure
Figure: Site Node Parameters
Step 8: Configure the advanced options.
  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • From the Logs Streaming menu, select an option. If you select Enable Logs Streaming, you must select a log receiver or create a new receiver with Add Item.

  • From the F5XC Software Version menu, select an option. If you select F5XC Software Version, you must enter a version to use.

  • From the Operating System Version menu, select an option. If you select Operating System Version, you must enter an OS version to use.

  • For AWS Tags, use Add Label to link sites together using a label. A maximum of thirty (30) tags are supported per instance.

  • From the Desired Worker Nodes Selection menu, select an option:

    • For the Desired Worker Nodes Per AZ option, enter the number of worker nodes. The number you set here is created per availability zone in which you created nodes. For example, if you configure nodes in three availability zones and set Desired Worker Nodes Per AZ to 3, then 3 worker nodes are created per availability zone, for a total of 9 worker nodes for this AWS VPC site.
  • To enable the offline survivability feature for your site:

    • From the Offline Survivability Mode menu, select Enable Offline Survivability Mode. This action will restart all pods for your site. For more information, see the Manage Site Offline Survivability guide.

Note: The Offline Survivability Mode must be turned on if the site is part of a site mesh group (SMG), with both control plane and data plane mesh enabled.

Figure
Figure: Advanced Configuration
Step 8.1: Configure blocked services from site.
  • From the Services to be blocked on site menu, select Custom Blocked Services Configuration. If you select Allow access to DNS, SSH services on Site, no further configuration is needed.

  • Click Add Item.

Figure
Figure: Add Blocked Service
  • From the Blocked Services Value Type menu, select the service to block:

    • DNS port

    • SSH port

  • From the Network Type menu, select the type of network in which this service is blocked from your site.

  • Click Apply.

Figure
Figure: Select Blocked Service
Step 9: Optionally, configure private link or Direct Connect.

You can configure these options under the Advanced Configuration section.

  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • From the Private Connectivity To Site drop-down menu, select an option:

    • Disable Private Connectivity: Default option.

    • Enable Private Connectivity: Enables a private link to your cloud site. For more information, see the CloudLink guide.

    • Enable Direct Connect (Legacy): Click View Configuration to configure for your site.

Direct Connect (Legacy)
  • Click Configure.

  • From the VIF Configuration drop-down menu, select an option for the Virtual Interface (VIF):

    • Hosted VIF mode: In this mode, F5 provisions an AWS Direct Connect Gateway and a Virtual Private Gateway. The hosted VIFs you provide are automatically associated, and BGP peering is set up.

    • Standard VIF mode: In this mode, F5 provisions an AWS Direct Connect Gateway and a Virtual Private Gateway and sets up BGP peering; you must associate the VIF with the DCGW yourself using the AWS Console.

  • For the Hosted VIF mode option:

    • Click Add Item.

    • Enter a list of VIF IDs.

    • Click Apply.

  • For the Standard VIF mode option:

    • Click Apply.
Step 10: Complete the site object creation.
New VPC Site

Click Save and Exit to complete creating the site. The Status field for the site object displays Validation in progress. After validation, the field displays Validation Succeeded.

Existing VPC Site

If you used an existing VPC, Console will validate whether certain existing objects are available and valid. This provides current information to help troubleshoot and fix any potential issues without having to wait until the full site deployment process completes.

After you click Save and Exit, the validation process begins and is displayed as Validation in progress.

If site deployment validation fails, a Validation Failed message is displayed. Click the tooltip to display a popup message with the error.

If validation succeeds, a Validation Succeeded message is displayed.

Note: The QUEUED state indicates a site status action that is in process. The site status remains QUEUED until the backend service is ready to execute the Apply/Plan/Destroy commands. The site status (under the Status column) is updated in Console once execution begins. The site stays in the QUEUED state for a maximum of 15 minutes, after which the status times out and the state is set to PROVISION_TIMEOUT.


Deploy Site

Creating the AWS site VPC object in Console generates the Terraform parameters.

Note: Site upgrades may take up to 10 minutes per site node. Once the site upgrade has completed, you must apply the Terraform parameters to the site via the Action menu on the cloud site management page.

Step 1: Deploy site.
  • Navigate to the AWS VPC object by clicking Manage > Site Management > AWS VPC Sites.

  • Find your AWS VPC object and click Apply under the Status column. The Status column for the site object changes first to Queued and then to Applying.

Figure: AWS VPC Object Apply

Note: Optionally, you can perform a Terraform plan before the deployment. Find your AWS VPC site object and select ... > Plan (Optional) to create the Terraform execution plan.

  • Wait for the status to change to Applied.
Figure: AWS VPC Object Applied
  • To check the status of the apply action, click ... > Terraform Parameters for the site object, and select the Apply Status tab.

  • To debug or run terminal commands, use SSH to log in to the node with the username cloud-user and your private key (for example, ssh -i <private-key-file> cloud-user@<node-ip>).

Note: For ingress/App Stack sites: When you update worker nodes for a site object, scaling happens automatically. For ingress/egress sites: When you update worker nodes for a site object, the Terraform Apply button is enabled. Click Apply.

Step 2: Confirm site deployed and online.
  • Navigate to Multi-Cloud Network Connect > Overview > Sites.

  • Verify that the status is Online. It takes a few minutes for the site to deploy and for the status to change to Online.

Figure: Site Status Online
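Step 2 above is a simple poll-until-Online loop. The sketch below illustrates that pattern under stated assumptions: wait_until_online and get_status are hypothetical names invented here, not a Distributed Cloud API; in practice, get_status would wrap whatever API or CLI call returns the site status shown on the Sites overview page.

```python
import time

def wait_until_online(get_status, timeout_s=600, poll_s=15, sleep=time.sleep):
    """Poll a caller-supplied status source until the site reports Online.

    Returns True if the site comes Online within timeout_s seconds,
    False otherwise. `sleep` is injectable so the loop can be tested
    without real delays.
    """
    waited = 0
    while waited <= timeout_s:
        if get_status() == "Online":
            return True
        sleep(poll_s)
        waited += poll_s
    return False

# Example with a canned status sequence standing in for real checks.
states = iter(["Provisioning", "Provisioning", "Online"])
print(wait_until_online(lambda: next(states), sleep=lambda s: None))
# True
```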

Delete VPC Site

You have two options when deleting a site in Console: delete the site entirely, with all of its resources and configuration, or destroy the site's resources while keeping its configuration (so that the site can be re-applied at a later time).

Note: Deleting the VPC object deletes the sites, nodes, the VPC, and other objects created in the cloud for the site. This action also removes the site object from Console and cannot be undone.

Destroying a site deployed on an existing VPC leaves the AWS subnets used for the Site Local Outside, Site Local Inside, and Workload networks without any explicit route table associations.

Delete Site Completely
  • Navigate to Manage > Site Management > AWS VPC Sites.

  • Locate the site object.

  • Select ... > Delete.

  • Click Delete in the pop-up confirmation window. If the delete operation does not remove the object and returns an error, check the error in the status using the ... > Terraform Parameters > Apply Status option, fix the error, and retry the delete operation. If the problem persists, contact technical support.

Delete Site but Maintain Configuration
  • Navigate to Manage > Site Management > AWS VPC Sites.

  • Locate the site object.

  • Click Destroy for your site. Alternatively, click ... > Destroy.

  • In the pop-up window, type DELETE.

  • Click Destroy to confirm the action. On success, the site status shows Destroyed and the Apply button appears in the row for your site; you can use it to create the site again at a later time, if required. When the site object is no longer required, it can be removed from Console by clicking Delete in the Actions menu for the site.
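The difference between the two teardown options can be summarized in a toy model (illustrative only; the class and field names below are invented for this sketch and do not correspond to any product object):

```python
class SiteObject:
    """Toy model of the two teardown options described above."""

    def __init__(self, name):
        self.name = name
        self.config = {"vpc": "existing", "nodes": 3}   # placeholder config
        self.cloud_resources = ["nodes", "vpc", "route-tables"]
        self.status = "Applied"

    def destroy(self):
        # Destroy: tear down the cloud resources but keep the
        # configuration, so Apply can re-create the site later.
        self.cloud_resources = []
        self.status = "Destroyed"

    def can_reapply(self):
        # A destroyed site with intact configuration can be applied again.
        return self.status == "Destroyed" and self.config is not None

site = SiteObject("my-aws-vpc-site")
site.destroy()
print(site.can_reapply())
# True
```

Delete, by contrast, removes the site object itself (configuration included), so nothing remains to re-apply.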


Deploy Using Terraform

You can deploy an F5 Distributed Cloud CE site using Terraform. See the Deploy AWS VPC Site with Terraform guide for detailed steps.


Next Steps

After you have successfully deployed your site, you can choose to upgrade it or create a site mesh group (SMG).

  • To update your site to the latest OS version, click Upgrade under the Software Version tile on the dashboard.
Figure: Site OS Upgrade

Concepts


API References