Create AWS Site
On This Page:
- Objective
- Design
- AWS VPC Site Deployment Types
- Ingress Gateway (One Interface)
- Ingress/Egress Gateway (Two Interfaces)
- App Stack Cluster (One Interface)
- Network Policies
- Forward Proxy Policy
- Prerequisites
- Deploy Using Console
- Create AWS VPC Site Object
- Deploy Site
- Delete VPC Site
- Deploy Using Vesctl
- Create AWS VPC Site
- Replace AWS VPC site
- Delete AWS VPC site
- Concepts
- API References
Objective
This guide provides instructions on how to deploy F5® Distributed Cloud Services sites with Amazon Web Services (AWS). For more information on sites, see Site.
You can deploy an AWS site using the Console or the `vesctl` command-line utility.
Note: Configuring site mesh group is not supported for the sites deployed from Console.
Using the instructions provided in this guide, you can deploy an ingress gateway site or ingress/egress gateway site. For more information, see Network Topology of a Site.
Design
An Amazon Web Services (AWS) Virtual Private Cloud (VPC) site automates the deployment of sites in AWS. As part of the AWS VPC site configuration, you can indicate that a new VPC, subnets, and route tables need to be created, or specify existing VPC and subnet information. If you specify existing VPC and subnet information, creation of VPC and subnet resources is skipped.
Note: By default, a site deployed in AWS supports Amazon Elastic Block Store (EBS). See Configure Storage in Fleet.
AWS VPC Site Deployment Types
A site can be deployed in three different modes:
- Ingress Gateway (One Interface): In this deployment mode, the site is attached to a single VPC and a single subnet. It can provide discovery of services and endpoints reachable from this subnet to any other site configured in the customer tenant.
- Ingress/Egress Gateway (Two Interfaces): In this deployment mode, the site is attached to a single VPC with at least two interfaces on different subnets. One subnet is labeled as `Outside`, and the other as `Inside`. In this mode, the site provides security and connectivity for VMs and subnets via default gateway through the Site Inside interface.
- F5® Distributed Cloud App Stack Cluster (App Stack) (One Interface): The F5® Distributed Cloud Mesh (Mesh) deployment and configuration of this site is identical to Ingress Gateway (One Interface). The difference with this deployment is the Certified Hardware Type being `aws-byol-voltstack-combo`. This configures and deploys an instance type that allows the site to have Kubernetes Pods and VMs deployed using Virtual K8s.
Ingress Gateway (One Interface)
In this deployment mode, Mesh needs one interface attached. Services running on the node connect to the internet using this interface. Also, this interface is used to discover other services and virtual machines, and expose them to other sites in the same tenant. For example, in the below figure, TCP or HTTP services on the DevOps or Dev EC2 instances can be discovered and exposed via reverse proxy remotely.
As shown in the below figure, the interface is on the Outside subnet which is associated with the VPC main routing table whose default route is pointing to the internet gateway. That is how traffic coming from the outside interface can reach the internet, along with other subnets associated with this routing-table object. In case of other subnets (for example, Dev and DevOps), these are associated with the VPC main routing table. This means that any newly created subnet in this VPC is automatically associated with this routing table.
Ingress/Egress Gateway (Two Interfaces)
In this deployment scenario the Mesh nodes need two interfaces attached. The first interface is the outside interface through which services running on the node can connect to the internet. The second interface is the inside interface which will become the default gateway IP address for all the application workloads and services present in the private subnets.
As shown in the below figure, the outside interface is on the outside subnet which is associated with the outside subnet route table whose default route is pointing to the internet gateway. That is how traffic coming from the outside interface can reach the internet. In case of inside subnets these are associated with the inside subnet route table which is also the main route table for this VPC. This means that any newly created subnet in this VPC is automatically associated with the inside subnet route table. This private subnet route table has a default route pointing to the inside IP address of the Mesh node (192.168.32.186).
Once the Mesh site comes online, the inside network of the node is connected to the outside network through a forward proxy, with SNAT enabled on the outside interface: all traffic arriving on the inside interface is forwarded to the internet through the forward proxy and SNATed on the outside interface. All the workloads on private subnets can then reach the internet through the Mesh site.
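The route-table plumbing described above can also be expressed with the AWS CLI. This is an illustrative sketch of what the site automation sets up, not a step you need to run; all resource IDs below are placeholders:

```shell
# Illustrative only: the site automation normally creates these routes.
# The rtb-, igw-, and eni- IDs below are placeholders for your own resources.

# Outside subnet route table: default route to the internet gateway.
aws ec2 create-route \
  --route-table-id rtb-0aaaaaaaaaaaaaaaa \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-0bbbbbbbbbbbbbbbb

# Inside (main) route table: default route to the Mesh node's inside
# network interface, so private-subnet workloads egress via the site.
aws ec2 create-route \
  --route-table-id rtb-0cccccccccccccccc \
  --destination-cidr-block 0.0.0.0/0 \
  --network-interface-id eni-0dddddddddddddddd
```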
App Stack Cluster (One Interface)
This scenario is identical to Ingress Gateway (One Interface) in terms of how the site networking and forwarding/security is configured. In addition, the App Stack (the Distributed Application Management Platform) is also made available.
In this deployment scenario, the Mesh needs one interface attached. Services running on the node connect to the internet using this interface. Also, this interface is used to discover other services and virtual machines, and expose them to other sites in the same tenant. For example, in the below figure, TCP or HTTP services on the DevOps or Dev EC2 instances can be discovered and exposed via reverse proxy remotely.
If configured in a vK8s cluster, applications can be deployed onto this site’s App Stack offering. The services/pods of the site's App Stack can be exposed to other services and VMs on the VPC routing table; or made available externally via EIP or the Application Delivery Network (ADN).
As shown in the below figure, the interface is on the Outside subnet which is associated with the VPC main routing table whose default route is pointing to the internet gateway. That is how traffic coming from the outside interface can reach the Internet, along with other subnets associated with this routing-table object. In case of other subnets (for example, Dev and DevOps), these are associated with the VPC main routing table. This means that any newly created subnet in this VPC is automatically associated with this routing table.
Network Policies
The site can be your ingress/egress security policy enforcement point as all the traffic coming from private subnets will flow through a Distributed Cloud Services site. Traffic that does not match the type defined in network policy is denied by default.
You can specify the endpoint or subnet using the network policy. You can define the egress policy by adding egress rules, from the endpoint's point of view, to deny or allow specific traffic patterns based on intent. You can also add ingress rules to deny or allow traffic coming toward the endpoint.
Forward Proxy Policy
Using a forward proxy policy, you can specify allowed/denied TLS domains or HTTP URLs. The traffic from workloads on private subnets towards the Internet via the site is allowed or denied accordingly.
More details on how to configure these policies are provided in the rest of this document.
Prerequisites
The following prerequisites apply:
- A Distributed Cloud Services account. If you do not have an account, see Create an Account.
- An Amazon Web Services (AWS) account. See Required Access Policies for the permissions needed to deploy an AWS VPC site.
- Resources required per site: minimum 4 vCPUs and 14 GB RAM.
- There must be no pre-existing workload subnet association when attaching an existing VPC.
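To confirm that a candidate instance type meets the 4 vCPU / 14 GB minimum before deploying, you can query AWS. `t3.xlarge` here is only an example type for illustration, not a recommendation from this guide:

```shell
# Print vCPU count and memory (MiB) for an example instance type.
# t3.xlarge is an assumption for illustration; query the type you plan to use.
aws ec2 describe-instance-types \
  --instance-types t3.xlarge \
  --query 'InstanceTypes[0].[VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]' \
  --output text
```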
Deploy Using Console
AWS Virtual Private Cloud (VPC) site object creation and deployment includes the following:
| Phase | Description |
| --- | --- |
| Create AWS VPC Object | Create the VPC object in Console using the guided wizard. |
| Deploy Site | Deploy the sites configured in the VPC object using the automated or assisted method. |
Create AWS VPC Site Object
Sites can be viewed and managed in multiple services: `Cloud and Edge Sites`, `Distributed Apps`, and `Load Balancers`.

This example shows `Sites` for AWS setup in `Cloud and Edge Sites`.
Step 1: Log into Console, start AWS VPC site object creation.
- Open Console and click `Cloud and Edge Sites`.

Note: The homepage is role based, and your homepage may look different due to your role customization. Select the `All Services` drop-down menu to discover all options. To customize settings: `Administration` > `Personal Management` > `My Account` > `Edit work domain & skills` button > `Advanced` box > check the `Work Domain` boxes > `Save changes` button.
Note: Confirm that the `Namespace` selector in the upper-left corner is set to the correct namespace. This selector is not available in all services.
- Click `Manage` > `Site Management` > `AWS VPC Sites`.

Note: If the options are not showing, select the `Show` link in `Advanced nav options visible` in the bottom-left corner. If needed, select `Hide` to minimize the options from `Advanced nav options` mode.

- Click the `Add AWS VPC Site` button.
- Enter a `Name`, and enter `Labels` and a `Description` as needed.
Step 2: Configure VPC and site settings.
- In the `Site Type Selection` section, perform the following:

Step 2.1: Set region and configure VPC.

- Select a region from the `AWS Region` drop-down menu.
- From the `VPC` menu, select an option:
  - `New VPC Parameters`: The `Autogenerate VPC Name` option is selected by default.
  - `Existing VPC ID`: Enter the existing VPC ID in the `Existing VPC ID` box.
Note: If you are using an existing VPC, enable the `enable_dns_hostnames` attribute in the existing VPC configuration.
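If the existing VPC does not already have DNS hostnames enabled, the attribute can be set with the AWS CLI; the VPC ID below is a placeholder:

```shell
# Enable DNS hostnames on an existing VPC (vpc-... is a placeholder ID).
aws ec2 modify-vpc-attribute \
  --vpc-id vpc-0aaaaaaaaaaaaaaaa \
  --enable-dns-hostnames '{"Value":true}'

# Verify the attribute is now enabled.
aws ec2 describe-vpc-attribute \
  --vpc-id vpc-0aaaaaaaaaaaaaaaa \
  --attribute enableDnsHostnames
```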
- Enter the CIDR in the `Primary IPv4 CIDR block` field.
Step 2.2: Set the node configuration.
From the `Select Ingress Gateway or Ingress/Egress Gateway` menu, select an option.

Configure Ingress Gateway.

For the `Ingress Gateway (One Interface)` option:
- Click `Configure`.
- Click `Add Item`.
- Select an option from the `AWS AZ Name` menu that matches the configured `AWS Region`.
- Select `New Subnet` or `Existing Subnet ID` from the `Subnet for local Interface` menu.
- Enter the subnet address in `IPv4 Subnet`, or the subnet ID in `Existing Subnet ID`.
- Confirm the subnet is part of the CIDR block set in the previous step.
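One quick way to check that a subnet falls inside the VPC CIDR block is with Python's `ipaddress` module from the shell; the CIDR values below are examples to substitute with your own:

```shell
# Prints True when the subnet is contained in the VPC CIDR block.
# 192.168.0.0/24 and 192.168.0.0/22 are example values; substitute yours.
python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.0.0/24").subnet_of(ipaddress.ip_network("192.168.0.0/22")))'
```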
For multi-node sites:
- Toggle `Show Advanced Fields` in the `Allowed VIP Port Configuration` section and configure VIP ports.

Note: This is required for the load balancer to distribute traffic among all nodes in a multi-node site.

- From the `Select Which Ports will be Allowed` menu, select an option:
  - `Allow HTTP Port`: Allows only port 80.
  - `Allow HTTPS Port`: Allows only port 443.
  - `Allow HTTP & HTTPS Port`: Allows only ports 80 and 443. This is populated by default.
  - `Ports Allowed on Public`: Allows specifying custom ports or port ranges. Enter the port or port range in the `Ports Allowed on Public` field.

Note: The `AWS Certified Hardware` is set to `aws-byol-voltmesh` by default. You can add more than one node using the `Add Item` option.
Configure Ingress/Egress Gateway.
For the `Ingress/Egress Gateway (Two Interface)` option:

- Click `Configure` to open the two-interface node configuration.
- Click `Add Item`.
- Select an option from the `AWS AZ Name` menu that matches the configured `AWS Region`.
- From the `Workload Subnet` menu, select an option:
  - `New Subnet`: Enter a subnet in the `IPv4 Subnet` field.
  - `Existing Subnet ID`: Enter a subnet in the `Existing Subnet ID` field.

Note: The workload subnet is the network where your application workloads are hosted. For successful routing toward applications running in the workload subnet, an inside static route to the workload subnet CIDR needs to be added on the respective site object.

- From the `Subnet for Outside Interface` menu, select an option:
  - `New Subnet`: Enter a subnet in the `IPv4 Subnet` field.
  - `Existing Subnet ID`: Enter a subnet in the `Existing Subnet ID` field.
- Click `Add Item`.
Configure site firewall.
- In the `Site Network Firewall` section, optionally select `Active Network Policies` from the `Manage Network Policy` menu.
- Select an existing network policy, or select `Create new network policy view` to create and apply a network policy.
- After creating the policy, click `Continue` to apply.
- From the `Manage Forward Proxy` menu, select an option:
  - `Disable Forward Proxy`
  - `Enable Forward Proxy with Allow All Policy`
  - `Enable Forward Proxy and Manage Policies`: Select an existing forward proxy policy, or select `Create new forward proxy policy` to create and apply a forward proxy policy.
- After creating the policy, click `Continue` to apply.
Optional configuration.
- Enable `Show Advanced Fields` in the `Advanced Options` section.
- Select `Connect Global Networks` from the `Select Global Networks to Connect` menu, and perform the following:
  - Click `Add Item`.
  - From the `Select Network Connection Type` menu, select an option.
  - From the `Global Virtual Network` menu, select an option.
  - Click `Add Item`.
- Select `Manage Static routes` from the `Manage Static Routes for Inside Network` menu. Click `Add Item` in the `List of Static Routes` subsection. Perform one of the following steps:
  - Select `Simple Static Route` and then enter a static route in the `Simple Static Route` field.
  - Select `Custom Static Route` and then click `Configure`. Perform the following steps:
    - In the `Subnets` section, click `Add Item`. Select the IPv4 or IPv6 option from the `Version` menu. Enter a prefix and a prefix length for your subnet. You can use the `Add Item` option to set more subnets.
    - In the `Nexthop` section, select a next-hop type from the `Type` menu. Select IPv4 or IPv6 from the `Version` menu in the `Address` subsection, and enter an IP address. From the `Network Interface` menu, select an option.
    - In the `Static Route Labels` section, select `Add label` and follow the prompt.
    - In the `Attributes` section, select supported attributes from the `Attributes` menu. You can select more than one option.
    - Click `Apply`.
- Select `Manage Static routes` from the `Manage Static Routes for Outside Network` menu. Select `Add Item`, and follow the same procedure as for the inside network static routes above.
For multi-node sites:

- Set `Allowed VIP Port Configuration for Outside Network` and `Allowed VIP Port Configuration for Inside Network`.
Note: This is required for the load balancer to distribute traffic among all nodes in a multi-node site.
- In the `Allowed VIP Port Configuration for Outside Network` section, perform the following:
  - Select an option from the `Select Which Ports will be Allowed` menu:
    - `Allow HTTP Port`: Allows only port 80.
    - `Allow HTTPS Port`: Allows only port 443.
    - `Allow HTTP & HTTPS Port`: Allows only ports 80 and 443. This is populated by default.
    - `Ports Allowed on Public`: Allows specifying custom ports or port ranges. Enter the port or port range in the `Ports Allowed on Public` field.
- In the `Allowed VIP Port Configuration for Inside Network` section, perform the same procedure as for the outside network above.
- Click `Apply`.

Note: The `AWS Certified Hardware` is set to `aws-byol-multi-nic-voltmesh` by default. You can add more than one node using the `Add Item` option.
App Stack Cluster (One Interface).
For the `Voltstack Cluster (One Interface)` option:

- Click `Configure` to open the configuration form.
- In the `VoltStack Cluster (One Interface) Nodes in AZ` section, click `Add Item`. Perform the following:
  - Select an option from the `AWS AZ Name` menu that matches the configured `AWS Region`.
  - Select `New Subnet` or `Existing Subnet ID` from the `Subnet for local Interface` menu.
  - Enter a subnet address in `IPv4 Subnet` or the subnet ID in `Existing Subnet ID`.
Optional Configuration.

- In the `Site Network Firewall` section, optionally select `Active Network Policies` from the `Manage Network Policy` menu. Select an existing network policy, or select `Create new network policy view` to create and apply a network policy. After creating the policy, click `Continue` to apply.
- Optionally select `Enable Forward Proxy with Allow All Policy` or `Enable Forward Proxy and Manage Policies` from the `Manage Forward Proxy` menu. For the latter option, select an existing forward proxy policy, or select `Create new forward proxy policy` to create and apply a forward proxy policy.
- In the `Advanced Options` section, enable the `Show Advanced Fields` option.
- Select `Connect Global Networks` from the `Select Global Networks to Connect` menu, and perform the following:
  - Click `Add Item`.
  - From the `Select Network Connection Type` menu, select a connection type.
  - From the `Global Virtual Network` menu, select a global network from the list of networks displayed. To create a new global network, select `Create new virtual network` from the `Global Virtual Network` menu, complete the form, and click `Continue`.
  - Click `Add Item`.
- Select `Manage Static routes` from the `Manage Static Routes for Site Local Network` menu.
- Click `Add Item` and perform one of the following steps:
  - From the `Static Route Config Mode` menu, select `Simple Static Route`. Enter a static route in the `Simple Static Route` field.
  - From the `Static Route Config Mode` menu, select `Custom Static Route`. Click `Configure`. Perform the following steps:
    - In the `Subnets` section, click `Add Item`. Select `IPv4 Subnet` or `IPv6 Subnet` from the `Version` menu. Enter a prefix and prefix length for your subnet. Use the `Add Item` option to set more subnets.
    - In the `Nexthop` section, select a next-hop type from the `Type` menu. Select `IPv4 Address` or `IPv6 Address` from the `Version` menu and enter an IP address. From the `Network Interface` menu, select a network interface or select `Create new network interface` to create and apply a new network interface.
    - In the `Attributes` section, select supported attributes from the `Attributes` menu. You can select more than one option.
    - Click `Apply` to add the custom route.
  - Click `Add Item`.
  - Click `Apply`.

For multi-node sites:

- In the `Allowed VIP Port Configuration` section, configure the VIP ports.

Note: This is required for the load balancer to distribute traffic among all nodes in a multi-node site.

- From the `Site Local K8s API access` menu, select an option for API access. For instructions on K8s cluster creation, see Create K8s Cluster.

Note: The Distributed Cloud Platform supports both mutating and validating webhooks for managed K8s. Webhook support can be enabled in the K8s configuration (`Manage` > `Manage K8s` > `K8s Clusters`). For more information, see Create K8s Cluster in the `Advanced K8s cluster security settings` section.

- Click `Apply`.

Note: The `AWS Certified Hardware` is set to `aws-byol-voltstack-combo` by default. You can add more than one node using the `Add Item` option.
Step 2.3: Set the deployment type.
From the `Select Automatic or Assisted Deployment` drop-down menu, select an option:

- `Automatic Deployment`: Select an existing AWS credentials object, or select `Create new cloud credentials` to load the form. To create new credentials:
  - Enter a `Name`, and enter `Labels` and a `Description` as needed.
  - From the `Select Cloud Credential Type` menu, select `AWS Programmatic Access Credentials`.
  - Enter the AWS access key ID in the `Access Key ID` field.
  - Click `Configure` in the `Secret Access Key` field.
  - From the `Secret Info` menu, select an option:
    - `Blindfold Secret`: Enter the secret in the `Type` box.
    - `Clear Secret`: Enter the secret in the `Clear Secret` box in either `Text` or `base64(binary)` format.
  - Click `Apply`.
  - Click the `Continue` button to add the new credentials.

Note: Refer to the Cloud Credentials guide for more information. Ensure that the AWS credentials are applied with the required access policies per the Policy Requirements document.
Step 3: Set the site node parameters.
- In the `Site Node Parameters` section, enable the `Show Advanced Fields` option. Optionally, add a geographic address and enter the latitude and longitude values.
- From the `AWS Instance Type for Node` menu, select an option.
- Enter your SSH public key in the `Public SSH key` box.
- From the `Desired Worker Nodes Selection` menu, select an option. Enter the number of worker nodes in the `Desired Worker Nodes Per AZ` field. The number of worker nodes you set here is created per availability zone (AZ) in which you created nodes. For example, if you configure nodes in three AZs and set the `Desired Worker Nodes Per AZ` field to 3, then 3 worker nodes are created per AZ and the total number of worker nodes for this AWS VPC site will be 9.
Note: Enable the `Show Advanced Fields` option to set the worker node count.
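If you do not already have a key pair for the `Public SSH key` field, one can be generated locally; the file path and key parameters below are examples:

```shell
# Generate an SSH key pair non-interactively (path is an example).
ssh-keygen -t rsa -b 4096 -N "" -q -f ./aws-vpc-site-key

# The contents of the .pub file go into the Public SSH key box.
cat ./aws-vpc-site-key.pub
```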
Step 4: Complete the AWS VPC site object creation.
Click `Save and Exit` to complete creating the AWS VPC site.

The `Status` box for the VPC object displays `Generated`.
Deploy Site
Creating the AWS VPC object in Console generates the terraform parameters. You can deploy the site using automatic or assisted deployment, depending on your AWS VPC site object configuration.
Automatic Deployment:
Perform this procedure if you created the VPC object with the automatic deployment option.
- Navigate to the created AWS VPC object using the `Manage` > `Site Management` > `AWS VPC Sites` option.
- Find your AWS VPC object and click `Apply` in the `Actions` column.

The `Status` field for the AWS VPC object changes to `Apply Planning`.

Note: Optionally, you can perform the terraform plan activity before the deployment. Find your AWS VPC site object and select `...` > `Plan (Optional)` to start the terraform plan action. This creates the execution plan for terraform.

- Wait for the apply process to complete and the status to change to `Applied`.
- To check the status for the apply action, click `...` > `Terraform Parameters` for your AWS VPC site object, and select the `Apply Status` tab.
- To locate your site, click `Sites` > `Sites List`.
- Verify the status is `ONLINE`. It takes a few minutes for the site to deploy and the status to change to `ONLINE`.
Assisted Deployment:
Perform this procedure if you created the VPC site object with the assisted deployment option.
- Download the terraform variables for the assisted deployment:
  - Navigate to the created AWS VPC site object using the `Manage` > `Site Management` > `AWS VPC Sites` path.
  - Find your AWS VPC site object and select `...` > `Terraform Parameters` for it.
  - Copy the parameters to a file on your local machine.
- Download the `volt-terraform` container:
docker pull gcr.io/volterraio/volt-terraform
- Run the terraform container.
docker run --entrypoint tail --name terraform-cli -d -it \
-w /terraform/templates \
-v ${HOME}/.ssh:/root/.ssh \
gcr.io/volterraio/volt-terraform:latest \
-f /dev/null
- Copy the downloaded terraform variables file to the container.
The following example copies the file to the `/var/tmp` folder on the container:
docker cp /Users/ted/Downloads/system-aws-vpc-a.json terraform-cli:/var/tmp
- Download API certificate from Console, and copy it to the container.
docker cp /Users/ted/Downloads/playground.console.api-creds.p12 terraform-cli:/var/tmp
Note: See Generate API Certificate for information on API credentials.
- Enter the terraform container.
docker exec -it terraform-cli sh
- Configure AWS API access and secret key.
aws configure
Note: For more information, refer to AWS documentation.
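As an alternative to the interactive `aws configure` prompt, the standard AWS environment variables can be exported inside the container; the values shown are placeholders to replace with your own:

```shell
# Placeholder credentials: replace with your own AWS access key pair.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_DEFAULT_REGION="us-east-2"
```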
- Change to the VPC template directory.
cd /terraform/templates/views/assisted/aws-volt-node
- Set the following environment variables required for the Distributed Cloud Services provider:
  - `VOLT_API_P12_FILE`: The path to the API certificate file.
  - `VES_P12_PASSWORD`: The API credentials password. This is the password you set while downloading the API certificate.
  - `VOLT_API_URL`: The tenant URL.

The following is a sample. Change the values per your setup.
export VOLT_API_P12_FILE="/var/tmp/playground.console.api-creds.p12"
export VES_P12_PASSWORD=<api_cred_password>
export VOLT_API_URL="https://playground.console.ves.volterra.io/api"
export TF_VAR_akar_api_url=$VOLT_API_URL
- Deploy the nodes by executing the terraform commands.
terraform init
terraform apply -var-file=/var/tmp/system-aws-vpc-a.json
Note: The `terraform init` command downloads the terraform providers defined in the module. When the `terraform apply` command is executed, it prompts you for input to proceed. Enter `yes` to begin deploying the node(s) and wait for the deployment to complete.
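Before running `terraform apply`, you can optionally preview the changes with `terraform plan` using the same variables file; this mirrors the `Plan (Optional)` action available in Console:

```shell
# Optional: preview the execution plan without making any changes.
terraform plan -var-file=/var/tmp/system-aws-vpc-a.json
```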
- Navigate to `Sites` > `Sites List` to locate your site and verify that the status is `ONLINE`. It takes a few minutes for the site to deploy and the status to change to `ONLINE`.
Delete VPC Site
Perform one of the following to delete the VPC site per the type of deployment:
Automatic Deployment: Delete the VPC object from Console for sites deployed using the automatic deployment method.
Delete VPC object:
- Navigate to `Manage` > `Site Management` > `AWS VPC Sites` for the created AWS VPC object.
- Locate your AWS VPC object.
- Select `...` > `Delete`.
- Select `Delete` in the pop-up confirmation window.
Note: Deleting the VPC object deletes the sites and nodes from the VPC and then deletes the VPC. If the delete operation does not remove the object and returns an error, check the error from the status, fix it, and re-attempt the delete operation. If the problem persists, contact technical support. You can check the status using the `...` > `Terraform Parameters` > `Apply Status` option.
Assisted Deployment: Delete the terraform deployment made in assisted mode and then delete the site in Console.
Step 1: Delete terraform deployment.
- Enter the terraform container.
docker exec -it terraform-cli sh
- Change to the VPC template directory.
cd /terraform/templates/views/assisted/aws-volt-node
- Set the environment variables needed for the Distributed Cloud Services provider:
  - `VOLT_API_P12_FILE`: The path to the API certificate file.
  - `VES_P12_PASSWORD`: The API credentials password. This is the password you set while downloading the API certificate.
  - `VOLT_API_URL`: The tenant URL.

The following is a sample. Change the values per your setup.
export VOLT_API_P12_FILE="/var/tmp/playground.console.api-creds.p12"
export VES_P12_PASSWORD=<api_cred_password>
export VOLT_API_URL="https://playground.console.ves.volterra.io/api"
export TF_VAR_akar_api_url=$VOLT_API_URL
- Destroy the site objects from AWS by executing the terraform commands.
terraform init
terraform destroy -var-file=/var/tmp/system-aws-vpc-a.json
Note: When the `terraform destroy` command is executed, it prompts you for input to proceed. Enter `yes` and wait for the destroy process to complete.
Step 2: Delete site from Console.
Delete VPC object:
- Navigate to `Manage` > `Site Management` > `AWS VPC Sites` for the created AWS VPC object.
- Locate your AWS VPC object.
- Select `...` > `Delete`.
- Select `Delete` in the pop-up confirmation window.
Deploy Using Vesctl
`vesctl` is a configuration command-line utility that enables you to create, debug, and diagnose Distributed Cloud Services configuration. See the vesctl repository for more information.
Create AWS VPC Site
The following is a prerequisite for deploying using the `vesctl site aws_vpc` command:

- Create a Cloud Credential object and use the `--cloud-cred` flag to refer to it, or set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` so that the site creation workflow creates a cloud credential object.
Note: When deleting the site, the cloud credential created through the `vesctl site aws_vpc` command does not get deleted.
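If you choose the environment-variable route for the prerequisite above, export the standard AWS variables before running `vesctl`; the values shown are placeholders to replace with your own:

```shell
# Placeholder credentials: replace with your own AWS access key pair.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
```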
Ingress Gateway: Create ingress gateway site.
Single-Node Site: Create a single-node site.
- Enter the following command to create single-node site with new VPC:
vesctl site aws_vpc create --gw-type ingress_gw --name aws-nyc01 --region us-east-2 --vpc-cidr 192.168.0.0/22 --outside-subnets 192.168.0.0/24 --action apply
- Enter the following command to create single-node site with existing VPC and subnet-id:
vesctl site aws_vpc create --gw-type ingress_gw --name aws-nyc01 --region us-east-2 --vpc-id <vpc-xxxxx> --outside-subnet-ids <subnet-xxxxx> --action apply
Note: If you are using an existing VPC, enable the `enable_dns_hostnames` attribute in the existing VPC configuration.
Multi-Node Site: Create a multi-node site.
- Enter the following command to create a multi-node site with new VPC:
vesctl site aws_vpc create --gw-type ingress_gw --name aws-nyc01 --region us-east-2 \
--azs us-east-2a,us-east-2b,us-east-2c --vpc-cidr 192.168.0.0/22 \
--outside-subnets 192.168.0.0/25,192.168.1.0/25,192.168.2.0/25 --action apply
- Enter the following command to create a multi-node site with existing VPC and subnet-id:
vesctl site aws_vpc create --gw-type ingress_gw --name aws-nyc01 --region us-east-2 \
--azs us-east-2a,us-east-2b,us-east-2c --vpc-id <vpc-xxxxx> \
--outside-subnet-ids subnet-id1,subnet-id2,subnet-id3 --action apply
Note: If you are using an existing VPC, enable the `enable_dns_hostnames` attribute in the existing VPC configuration.
Ingress/Egress Gateway: Create ingress/egress gateway site.
Single-Node Site: Create a single-node site.
- Enter the following command to create a single-node ingress/egress gateway site with new VPC:
vesctl site aws_vpc create --gw-type ingress_egress_gw --name aws-nyc01 --region us-east-2 --vpc-cidr 192.168.0.0/22 --outside-subnets 192.168.0.0/24 --inside-subnets 192.168.1.0/24 --action apply
- Enter the following command to create a single-node ingress/egress gateway site with existing VPC and subnet-id:
vesctl site aws_vpc create --gw-type ingress_egress_gw --name aws-nyc01 --region us-east-2 --vpc-id <vpc-xxxxx> --outside-subnet-ids <subnet-xxxxx> --inside-subnet-ids <subnet-yyyyyy> --action apply
Note: If you are using an existing VPC, enable the `enable_dns_hostnames` attribute in the existing VPC configuration.
Multi-Node Site: Create a multi-node site.
- Enter the following command to create a multi-node ingress/egress gateway site with new VPC:
vesctl site aws_vpc create --gw-type ingress_egress_gw --name aws-nyc01 --region us-east-2 \
--azs us-east-2a,us-east-2b,us-east-2c --vpc-cidr 192.168.0.0/22 \
--outside-subnets 192.168.0.0/25,192.168.1.0/25,192.168.2.0/25 \
--inside-subnets 192.168.0.128/25,192.168.1.128/25,192.168.2.128/25 --action apply
- Enter the following command to create a multi-node ingress/egress gateway site with existing VPC and subnet-id:
vesctl site aws_vpc create --gw-type ingress_egress_gw --name aws-nyc01 --region us-east-2 \
--azs us-east-2a,us-east-2b,us-east-2c --vpc-id <vpc-xxxxx> \
--outside-subnet-ids subnet-id1,subnet-id2,subnet-id3 \
--inside-subnet-ids subnet-id4,subnet-id5,subnet-id6 --action apply
Note: If you are using an existing VPC, enable the `enable_dns_hostnames` attribute in the existing VPC configuration.
Note: Enter the `vesctl site aws_vpc create --help` command to view the command help.
Replace AWS VPC site
Replace Site: Replace the AWS VPC site.
vesctl site aws_vpc replace --name aws-nyc01 --os-version <new-version> --software-version <new-version>
Note: Enter the `vesctl site aws_vpc replace --help` command to view the command help.
Delete AWS VPC site
Delete Site: Delete the AWS VPC site.
vesctl site aws_vpc delete --name aws-nyc01
Note: Enter the `vesctl site aws_vpc delete --help` command to view the command help.