Create App Stack Site
Objective
This document provides instructions on how to install single-node or multi-node F5® Distributed Cloud App Stack sites on clusters of private Data Center (DC) devices. An App Stack site is a Distributed Cloud Customer Edge (CE) site engineered specifically for managing DC clusters. To know more about Distributed Cloud sites, see Site. The F5 Distributed Cloud Platform also supports creating physical K8s (also called managed K8s) clusters for managing your applications on the created App Stack sites. See Create and Deploy Managed K8s for instructions on creating the managed K8s clusters.
Clusters of private DCs that are set up on geographically separated locations require an established direct communication link with each member of the cluster. An App Stack site helps achieve that while ensuring safety and reliability. You can define a DC cluster group in an App Stack site to which member nodes can be added dynamically. Such added members become part of the cluster group and automatically get full mesh connectivity with all other members of the group.
Using the instructions provided in this document, you can deploy a single-node or multi-node App Stack site on your private DC, define a cluster group, and add members to the cluster group. You can also enable the local API access for the managed K8s running on the sites so that it can be used like regular K8s.
App Stack Site vs Other CE Sites
The App Stack site differs from the regular Distributed Cloud CE site that can be deployed with App Stack. An App Stack site simplifies the task of managing distributed apps across DC clusters while offering the option to establish full mesh connectivity among themselves. In the case of regular sites with App Stack functionality, you will need to explicitly create and manage site mesh groups for that purpose. With an App Stack site, you just need to add the site to the DC cluster group and connectivity is automatically established.
Also, an App Stack site provides support to manage local K8s while controlling communication between services of different namespaces. While regular Distributed Cloud sites provide virtual K8s (vK8s), the vK8s is per namespace per site, and managed local K8s is deployed across all namespaces of an App Stack site with a single kubeconfig. Therefore, if you require the ability to manage distributed apps across your private DC clusters with full mesh connectivity, an App Stack site with managed K8s is useful. For other requirements, you can use regular CE sites.
Reference Architecture
This reference architecture shows the recommended practice for setting up geographically distributed DC clusters using App Stack sites, privately connecting them using DC cluster group, and enabling managed K8s for deploying apps in those clusters.
The following image shows a sample reference architecture of the App Stack DC cluster site.
Figure: App Stack DC Cluster Site Reference Architecture
The DC clusters represent App Stack sites. These consist of interfaces toward the storage network, interfaces in LACP toward the DC switches, and dedicated or non-dedicated management interfaces. An App Stack site supports storage classes with dynamic PVCs. Bond interfaces are also supported for regular use in the case of a dedicated management interface. In the case of a non-dedicated management interface, the fallback interface for internet connectivity is a bond interface, which must be assigned a static IP address. The example image also shows TOR switches that can be connected with a single bond interface using MC-LAG or with two bond interfaces at the L3 level. An App Stack site also supports establishing BGP peering with the TORs.
Bonding of interfaces is supported in LACP active/active or active/backup modes.
The following image shows a sample reference deployment of App Stack DC cluster sites deployed in different geographic locations.
Figure: App Stack DC Cluster Site Reference Deployment
The apps deployed in the managed K8s clusters in these sites can directly communicate with each other in CI/CD use cases, or users can obtain access to the managed K8s API using the kubeconfig.
The following image shows service interaction for service deployed in different DC cluster sites:
Figure: Service Communication Between Clusters of Different DCs
The following image shows service interaction for service deployed in the same DC cluster site:
Figure: Service Communication Between Clusters of Same DC
The following image shows service deployment in active/active and active/standby mode for remote clients:
Figure: Service Deployment in Active-Active and Active-Standby Modes
Note: For instructions on accessing managed K8s clusters and deploying apps, see the Create and Deploy Managed K8s guide.
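The deployment pattern above can be illustrated with a standard Kubernetes manifest applied to a managed K8s cluster through its kubeconfig. This is a minimal sketch; the names, namespace, and image are placeholders, not values from this guide.

```yaml
# Minimal sketch of an app deployed to a managed K8s cluster.
# All names, the namespace, and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: demo
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
```

Services deployed this way in sites that are members of the same DC cluster group can communicate directly over the group's full mesh connectivity, subject to the site's network and security configuration.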
Prerequisites
- An F5 Distributed Cloud Account. If you do not have an account, see Create an Account.
- One or more physical DC devices consisting of interfaces with Internet reachability. An App Stack site is supported only for the F5 IGW, ISV, and Dell Edger 640 Series devices.
- Resources required per node: minimum 4 vCPUs, 14 GB RAM, and 100 GB disk storage. For a full listing of the required resources, see the Customer Edge Site Sizing Reference guide. All the nodes in a given CE site should have the same compute, memory, and disk storage resources. When deploying in cloud environments, these nodes should use the same instance flavor.
- Allow traffic to and from the Distributed Cloud public IP addresses in your network, and allowlist the related domain names. See the Firewall and Proxy Server Allowlist Reference guide for the list of IP addresses and domain names.
- Internet Control Message Protocol (ICMP) must be opened between the CE nodes on the Site Local Outside (SLO) interfaces. This is needed to ensure intra-cluster communication checks.
Important: After you deploy the CE Site, the IP address for the SLO interface cannot be changed. Also, the MAC address cannot be changed.
Deploy Site
Perform the steps provided in the following sections to deploy an App Stack site.
Create App Stack Site Object
Log into F5 Distributed Cloud Console and perform the following steps:
Step 1: Start creating an App Stack site object.
- In the Multi-Cloud Network Connect service, navigate to Manage > Site Management > App Stack Sites.
- Select Add App Stack Site to open the App Stack site configuration form.
Figure: Navigate to App Stack Site Configuration
- Enter a name for your App Stack site object in the Metadata section.
- Optionally, select labels and add a description.
Note: Adding a Fleet label to an App Stack site is not supported.
Step 2: Set the fields for the basic configuration section.
- From the Generic Server Certified Hardware menu, select an option. The isv-8000-series-voltstack-combo option is selected by default.
- Enter the names of the master nodes in the List of Master Nodes field. Select Add item to add more than one entry. Only a single master node or three master nodes are supported.
- Optionally, enter the names of worker nodes in the List of Worker Nodes field. Select Add item to add more than one entry.
- Optionally, enter the following fields:
  - Geographical Address: This derives the geographical coordinates.
  - Coordinates: Latitude and longitude.

Figure: App Stack Site Basic Configuration Section
Step 3: Configure bond interfaces.
In the Bond Configuration section, perform the following:
- From the Select Bond Configuration menu, select Configure Bond Interfaces.
Figure: Bond Configuration Section
- Select Configure to open the bond interface configuration page.
- Select Add Item under the Bond Devices List field.
Figure: Bond Devices Section
- Select the Bond Device Name field and select See Common Values. You can also type a custom name and click Add item to set the device name while also adding it to the existing options.
- Select the Member Ethernet Devices field and select See Common Values for the Ethernet device that is part of this bond. Use the Add item option to add more devices.
- From the Select Bond Mode menu, select the bonding mode. LACP (802.3ad) is selected by default, with a default LACP packet interval of 30 seconds. You can set the bond mode to Active/Backup to make the bond members function in an active and backup combination.
Figure: Bond Devices List
- Select Add Item.

Note: Use the Add item option in the Bond Devices List to add more than one bond device.
- Select Apply on the Bond Devices page to apply the bond configuration.
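As a rough sketch of what the bond settings above capture (the field names here are illustrative assumptions, not the exact F5 Distributed Cloud API schema — configure bonds through the Console form as described):

```yaml
# Illustrative sketch only; field names are assumptions, not the exact API schema.
bond_device_list:
  bond_devices:
  - name: bond0                # Bond Device Name
    devices:                   # Member Ethernet Devices in this bond
    - eth0
    - eth1
    lacp:
      rate: 30                 # LACP (802.3ad) packet interval in seconds (default)
    # Alternative to LACP: active/backup mode
    # active_backup: {}
```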
Step 4: Perform network configuration.
In the Network Configuration section, perform the following:
- Select Custom Network Configuration from the Select to Configure Networking menu.
- Select Configure to open the network configuration page.
Step 4.1: Perform site local network configuration.
The site local network is applied with a default configuration. Perform the following steps to apply a custom configuration:
- Select Configure Site Local Network from the Select Configuration For Site Local Network menu.
- Select Configure.
- Optionally, set labels for the Network Labels field in the Network Metadata section.
- Optionally, select Show Advanced Fields in the Static Routes section.
- Select Manage Static Routes from the Manage Static Routes menu.
- Select Add Item and perform the following:
  - Enter IP prefixes in the IP Prefixes section. These prefixes will be mapped to the same next hop and attributes.
  - Select IP Address, Interface, or Default Gateway from the Select Type of Next Hop menu and specify the IP address or interface accordingly. In the case of Interface, you can select an existing interface or create a new interface using the options for the interface field.
  - Optionally, select one or more options for the Attributes field to set attributes for the static route.
  - Select Add Item.
- Optionally, enable the Show Advanced Fields option in the Dc Cluster Group section and perform the following:
  - Select Member of DC Cluster Group from the Select DC Cluster Group menu.
  - In the Member of DC Cluster Group field, select a DC cluster group. You can also select Create New DC Cluster Group to create a new cluster group. This adds the site to a DC cluster group, enabling full mesh connectivity between the members of the group.
Figure: Site Local Network Configuration
- Select Apply.
Note: For more information, see the Configure DC Cluster Group guide.
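The site local network settings above (static routes and DC cluster group membership) can be sketched as follows. The field names are illustrative assumptions, not the exact API schema; use the Console form or the site object's JSON view for the authoritative shape:

```yaml
# Illustrative sketch only; field names are assumptions, not the exact API schema.
site_local_network:
  static_routes:
  - ip_prefixes:
    - 10.10.0.0/16             # prefixes mapped to the same next hop and attributes
    ip_address: 192.168.1.1    # next hop of type IP Address
  dc_cluster_group:
    name: my-dc-cluster-group  # membership enables full mesh connectivity
```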
Step 4.2: Perform interface configuration.
Bootstrap interface configuration is applied by default, and it is based on the certified hardware.
Perform the following to apply custom interface configuration:
- Select List of Interface from the Select Interface Configuration menu.
- Click Configure. This opens the interface list configuration page.
- Select Add Item in the List of Interface table.
- Optionally, enter an interface description and select labels.
- Select an option from the Interface Config Type menu, and set one of the interface types using the following instructions:
Ethernet Interface:
- Select Ethernet Interface and click Configure. This opens the Ethernet interface configuration page.
- Select an option from the Ethernet Device menu using See Common Values. You can also type a custom name to set the device name while also adding it to the existing options.
- Select Cluster, All Nodes of the Site or Specific Node from the Select Configuration for Cluster or Specific Node menu. In the case of a specific node, select the node from the displayed options of the Specific Node field. You can also type a custom name to set the node name while also adding it to the existing options.
- Select Untagged or VLAN Id from the Select Untagged or VLAN tagged menu. In the case of a VLAN ID, enter the VLAN ID in the VLAN Id field.
- Select an option from the Select Interface Address Method menu in the IP Configuration section. DHCP Client is selected by default. If you select a DHCP server, click Configure, set the DHCP server configuration per the options displayed on the DHCP server configuration page, and click Apply. This example shows the interface as a DHCP client for brevity.
- Select the site local outside or site local inside network from the Select Virtual Network menu in the Virtual Network section. Site Local Network (Outside) is selected by default.
- Select whether the interface is primary from the Select Primary Interface menu. The default is not a primary interface. Ensure that you set only one interface as primary.
- Select Apply.
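The Ethernet interface settings above map roughly onto the following sketch. The field names are illustrative assumptions, not the exact API schema:

```yaml
# Illustrative sketch only; field names are assumptions, not the exact API schema.
interface_list:
  interfaces:
  - ethernet_interface:
      device: eth2             # Ethernet Device
      cluster: true            # apply to all nodes of the site
      untagged: {}             # or: vlan_id: 100
      dhcp_client: {}          # Interface Address Method (default)
      site_local_network: {}   # Site Local Network (Outside), the default
      not_primary: {}          # only one interface may be primary
```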
Dedicated Interface:
- Select Dedicated Interface from the Interface Config Type menu.
- Select a device name from the Interface Device menu using See Common Values. You can also type a custom name to set the device name while also adding it to the existing options.
- Select Cluster, All Nodes of the Site or Specific Node from the Select Configuration for Cluster or Specific Node menu. In the case of a specific node, select the node from the displayed options of the Specific Node menu. You can also type a custom name to set the node name while also adding it to the existing options.
- Select whether the interface is primary in the Select Primary Interface field. The default is not a primary interface. Ensure that you set only one interface as primary.
- Select Add Item.
- Optionally, add more than one interface using the Add item option on the List of Interface page.
- Select Apply.
Step 4.3: Perform security configuration.
By default, the firewall policies and forward proxy policies are disabled.
In the Security Configuration section, perform the following to apply network and forward policies:
- From the Firewall Policy menu, optionally add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies. Select an existing firewall policy, select Add Item to create and apply a firewall policy, or select Configure for an enhanced version.
- From the Forward Proxy menu, select an option:
  - Select Enable Forward Proxy With Allow All Policy to allow all requests.
  - Select Enable Forward Proxy and Manage Policies to apply specific forward proxy policies. Select a forward proxy policy from the Forward Proxy Policies menu. You can also create and apply a new forward proxy policy using the Add Item option, and you can apply more than one forward proxy policy the same way.
- Optionally, configure global networks in the Global Connections section and BGP settings in the Advanced Configuration section. This example does not include the configuration for these two options.
Step 5: Optionally, perform storage configuration.
Note: Storage configuration for an App Stack site is similar to that of a Fleet.
- From the Select to Configure Storage menu, select Custom Storage Configuration.
- Click Configure.
Figure: Configure Storage Devices
- Select List of Storage Interface from the Select Storage Interface Configuration menu, and then click Configure to configure the interface.
Note: See Interfaces for instructions on creating network interfaces. See Multi Node Site Network Setup Using Fleet for instructions on how to configure networking using a Fleet for multi-node sites.
- From the Select Storage Device Configuration menu, select List of Storage Devices.
Figure: Storage Devices Option
- Click Add Item. This opens the storage devices configuration page.
Figure: Storage Devices Parameters
- Enter a name in the Storage Device field. Ensure that this name corresponds to the class in which the storage device falls. The classes are used by vK8s for storage-related actions.
- Select an option from the Select Storage Device to Configure menu and perform one of the following based on the option you chose:
NetApp Trident
- Select an option from the Select NetApp Trident Backend menu. ONTAP NAS is selected by default.
- Select an option from the Backend Management LIF menu. Backend Management LIF IP Address is selected by default. Enter an IP address for the backend management logical interface in the Backend Management LIF IP Address field. If you select the name option, enter the backend management interface name instead.
Figure: NetApp Device
- Select an option from the Backend Data LIF menu. Backend Data LIF IP Address is selected by default. Enter an IP address for the backend data interface in the Backend Data LIF IP Address field. If you select the name option, enter the backend data interface name instead.
- Enter a username in the Username field. Click Configure for the Password field. Enter your password on the Secret page and click Blindfold. Wait for the blindfold process to finish encrypting your password, and then click Apply.
Figure: NetApp Device Password
- Click Configure for the Client Certificate field. Enter the text for your secret on the Secret page and click Blindfold. Wait for the blindfold process to finish encrypting your secret, and then click Apply.
- Enter a certificate in the Trusted CA Certificate field.
- Enter the CIDRs for your K8s nodes in the Auto Export CIDRs field if the auto export policy is enabled for your storage device.
- If you are configuring virtual storage, then in the Virtual Storage Pools section, enter a label and region for the storage, and click Add item one or more times to add pool labels and pool zones.
Figure: NetApp Device AutoExport CIDRs
- Click Add Item.
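For reference, the values collected in this form correspond to a NetApp Trident ONTAP NAS backend. A minimal sketch in Trident's TridentBackendConfig format is shown below; the SVM name and credential Secret are assumptions not captured by the Console form, and all addresses are placeholders.

```yaml
# Sketch of a NetApp Trident ONTAP NAS backend; addresses are placeholders.
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: ontap-nas-backend
spec:
  version: 1
  storageDriverName: ontap-nas
  managementLIF: 10.0.0.10    # Backend Management LIF IP Address
  dataLIF: 10.0.0.11          # Backend Data LIF IP Address
  svm: svm_nfs                # assumption; not part of the Console form above
  credentials:
    name: backend-credentials # Secret holding the username/password
  autoExportPolicy: true
  autoExportCIDRs:
  - 10.0.0.0/24               # Auto Export CIDRs for your K8s nodes
```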
Pure Storage Service Orchestrator
- Enter a cluster identifier in the Cluster ID field. This identifier is used to identify the volumes used by the datastore. Alphanumeric characters and underscores are allowed.
Figure: Pure Storage Orchestrator Device
Note: A unique cluster ID is required when multiple K8s clusters use the same storage device.
- Click Add Item to add a flash array endpoint.
Figure: Pure Storage Flash Arrays
- In the Management Endpoint IP Address field, enter an IP address.
Figure: Flash Array Endpoint
- Click Configure under the API Token field.
- Enter the token in the secret field and click Blindfold.
- Click Apply after the blindfold encryption is completed.
- Optionally, select labels for this endpoint.
- Click Add Item.
- Click Apply.
- Click Configure under the Flash Blade field.
- Click Add Item to add a flash blade endpoint.
Figure: Pure Storage Flash Blade
- Enter the IP address in the Management Endpoint IP Address field.
Figure: Pure Storage Flash Blade Endpoint
- Click Configure under the API Token field.
- Enter the token in the secret field and then click Blindfold.
- Click Apply after the blindfold encryption is completed.
- Enter the IP address in the NFS IP Address field.
- Optionally, add labels for this endpoint.
- Click Add Item.
Note: You can change the management or NFS endpoints to specify management endpoint name or NFS DNS name.
- Click Apply.
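For reference, the endpoints and tokens collected above correspond to Pure Storage Service Orchestrator array settings, sketched here in the style of a PSO values file. All addresses and tokens are placeholders, and the exact key names may differ by PSO version:

```yaml
# Sketch in the style of a PSO values file; addresses and tokens are placeholders.
clusterID: appstack_dc1               # unique per K8s cluster sharing the device
arrays:
  FlashArrays:
  - MgmtEndPoint: "192.168.10.20"     # flash array Management Endpoint IP Address
    APIToken: "<flasharray-api-token>"
  FlashBlades:
  - MgmtEndPoint: "192.168.10.30"     # flash blade Management Endpoint IP Address
    APIToken: "<flashblade-api-token>"
    NfsEndPoint: "192.168.10.31"      # NFS IP Address
```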
Custom Storage
The custom storage class option is used for storage devices or external storage deployed outside of F5 Distributed Cloud Services. For instance, this option allows you to configure custom storage classes for AWS, GCP, and so on.
- Select Custom Storage from the Select Storage Device to Configure menu.
- Click Add Item.
Figure: Custom Storage Device
Note: You can add multiple devices using the Add item option.
HPE Storage
- Select HPE Storage for the Select Storage Device to Configure field.
- In the Storage Server Name field, enter a name.
- In the Storage Server IP address field, enter an IP address.
- In the Storage Server Port field, enter a port number.
- In the Username field, enter the username used to connect to the HPE storage device.
- To configure the password, click Configure, and then perform the following:
  - For the Blindfolded Secret option, complete the configuration by entering the secret text to blindfold.
  - For the Clear Secret option, enter the secret text in plaintext or Base64 format.
  - Click Apply.
  - Click Apply again to complete the configuration.
Storage Classes
- From the Select Configuration for Storage Classes menu, select Add Custom Storage Class.
- Click Add Item.
Note: You can use default storage classes supported in K8s or you can customize the classes. In case you are using default classes, ensure that the storage device names correspond to the K8s classes.
- Enter a name for this storage class in the Storage Class Name field.
- Check the Default Storage Class checkbox if this will be the default storage class.
- Choose an option from the Select Storage Class Configuration drop-down menu.
NetApp Trident
- Click Add Item.
Pure Storage Service Orchestrator
- Select an option from the Backend menu. The block option is selected by default.
- Optionally, enter IOPS and bandwidth limits in their respective fields.
- Click Add Item.
Figure: Pure Storage Orchestrator Class
Custom Storage Class
- Select Add Custom Storage Class from the Select Configuration for Storage Classes menu.
- Click Add Item. This opens the Storage Class Parameters page.
- Enter a name in the Storage Class Name field. This name will appear in K8s.
- Enter a name in the Storage Device field. This is the storage device that will be used by this class, as configured earlier in this step.
- Optionally, enter a storage class description.
- Optionally, check the Default Storage Class box to make this storage class the default for the K8s cluster.
- Select Custom Storage from the Select Storage Class Configuration menu.
- Enter the storage class YAML. It must begin with the following configuration:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  ...
Figure: Custom Storage Class Parameters
- Enter a Reclaim Policy.
- Optionally, check the Allow Volume Expansion box.
- Optionally, enter generic/advanced parameters.
- Click Add Item to save the storage class parameters.
- Click Apply to save the custom storage configuration.
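A complete custom storage class, extending the required header shown earlier in this step, might look like the following sketch. The provisioner and parameters are placeholders for your external storage driver:

```yaml
# Sketch of a complete custom StorageClass; the provisioner and parameters
# are placeholders for your external storage driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-sc              # Storage Class Name as it appears in K8s
provisioner: csi.example.com   # placeholder CSI driver name
parameters:
  type: fast                   # driver-specific parameters
reclaimPolicy: Delete          # Reclaim Policy
allowVolumeExpansion: true     # Allow Volume Expansion
```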
Step 6: Perform advanced configuration.
- In the Advanced Configuration section, enable the Show Advanced Fields option.
Figure: Advanced Configuration
- Optionally, select GPU Enabled or vGPU Enabled from the Enable/Disable GPU menu. This enables GPU capability for the site hardware.
- Optionally, configure managed K8s for your site per the following guidelines:
  - Select Enable Site Local K8s API access from the Site Local K8s API access menu.
  - Click the Enable Site Local K8s API access field and select a K8s cluster object from the list. You can also select Create new k8s Cluster to create and apply the K8s cluster object.
- Optionally, enable logs streaming and either select a log receiver or create a new log receiver.
- Optionally, select a USB device policy. You can deny all USB devices, allow all USB devices, or allow specific USB devices.
- Optionally, specify a Distributed Cloud Services software version. The default is the Latest SW Version.
- Optionally, specify an operating system version. The default is the Latest OS Version.
Note: The advanced configuration also includes the managed K8s configuration. This is an important step if you want to enable managed K8s access, which is possible only at the time of creating the App Stack site and cannot be enabled later by updating the site.
- To enable SR-IOV for your site:
  - From the SR-IOV interfaces menu, select Custom SR-IOV interfaces Configuration.
  - Click Add Item.
Figure: Enable Custom SR-IOV Interface
- In the Name of physical interface field, enter a name for the physical adapter that will use a virtual function (VF).
- In the Number of virtual functions field, enter the number of VFs used for the physical interface.
- If you are using SR-IOV with either a virtual machine (VNF) or a Data Plane Development Kit (DPDK) application in a pod (CNF), enter the number of virtual functions reserved for use with VNFs and DPDK-based CNFs in the Number of virtual functions reserved for vfio field. A VNF (VM) needs to use vfio even if it is not running a DPDK application. The number of VFs reserved for vfio cannot be more than the total number of VFs configured.
Note: If you use SR-IOV with a virtual machine or DPDK-based pod, the network name in the VM/pod manifest file should have -vfio appended to the network name (subnet).
- Click Apply to set the SR-IOV interface configuration.
Figure: Set Number of Virtual Functions
Note: After you enable this SR-IOV interface configuration option, your site will reboot if it is an existing site, provisioning new VFs. For new sites, the provisioning process will occur during initial site deployment. It may take several minutes for an existing site to reboot after enabling this feature.
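As an illustration of the -vfio naming convention noted above, a pod manifest that attaches to an SR-IOV network for DPDK use might look like the following sketch. The network name is a placeholder, and the annotation key assumes the Multus CNI convention:

```yaml
# Sketch of a pod attached to an SR-IOV network; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  annotations:
    # -vfio appended to the network name for DPDK/VNF use, per the note above
    k8s.v1.cni.cncf.io/networks: sriov-net-vfio
spec:
  containers:
  - name: dpdk-app
    image: registry.example.com/dpdk-app:1.0   # placeholder image
```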
Step 7: Complete creating the App Stack site.
Select Save and Exit to complete creating the App Stack site.
Note: You can also configure multiple interfaces for Virtual Machines (VM) or containers running in a K8s cluster within an App Stack Site. For instructions, see Create Workloads with Multiple Network Interfaces.
Perform Site Registration
After you create the App Stack site object in Console, the site appears in Console with the Waiting for Registration status. Install the nodes and ensure that the cluster name and host names for your nodes match the App Stack site name and node names in the Basic Configuration section of the App Stack site you configured.
Note: See Create VMware Site, Create KVM Site, and Create Baremetal Site for node installation instructions.
Perform registration per the following instructions:
- Navigate to Manage > Site Management > Registrations.
- Choose your site from the list of sites displayed under the Pending Registrations tab.
- Approve the registration request (blue checkmark).
- Ensure that the cluster name and hostname match those of the App Stack site.
- Select Accept to complete the registration. The site then comes online.