Create App Stack Site
Objective
This document provides instructions on how to install single-node or multi-node F5® Distributed Cloud App Stack sites on clusters of private Data Center (DC) devices. An App Stack site is a Distributed Cloud Customer Edge (CE) site engineered specifically for managing DC clusters. To learn more about Distributed Cloud sites, see Site. The F5 Distributed Cloud Platform also supports creating physical K8s (also called managed K8s) clusters for managing your applications on the created App Stack sites. See Create and Deploy Managed K8s for instructions on creating managed K8s clusters.
Clusters of private DCs set up in geographically separated locations require a direct communication link between all members of the cluster. An App Stack site helps achieve that while ensuring safety and reliability. You can define a DC cluster group in an App Stack site to which member nodes can be added dynamically. Added members become part of the cluster group and automatically get full mesh connectivity with all other members of the group.
Using the instructions provided in this document, you can deploy a single-node or multi-node App Stack site on your private DC, define a cluster group, and add members to the cluster group. You can also enable local API access for the managed K8s running on the sites so that it can be used like regular K8s.
App Stack Site vs Other CE Sites
The App Stack site differs from the regular Distributed Cloud CE site that can be deployed with App Stack functionality. An App Stack site simplifies the task of managing distributed apps across DC clusters while offering the option to establish full mesh connectivity among the sites. In the case of regular sites with App Stack functionality, you need to explicitly create and manage site mesh groups for that purpose. With an App Stack site, you just add the site to the DC cluster group and connectivity is established automatically.
Also, an App Stack site provides support for managing local K8s while controlling communication between services of different namespaces. While regular Distributed Cloud sites provide virtual K8s (vK8s), vK8s is per namespace per site, whereas managed local K8s is deployed across all namespaces of an App Stack site with a single kubeconfig. Therefore, if you require the ability to manage distributed apps across your private DC clusters with full mesh connectivity, an App Stack site with managed K8s is useful. For other requirements, you can use regular CE sites.
Reference Architecture
This reference architecture shows the recommended practice for setting up geographically distributed DC clusters using App Stack sites, privately connecting them using DC cluster group, and enabling managed K8s for deploying apps in those clusters.
The following image shows a sample reference architecture of the App Stack DC cluster site.

Figure: App Stack DC Cluster Site Reference Architecture
The DC clusters represent App Stack sites. These consist of interfaces toward the storage network, interfaces in LACP toward the DC switches, and dedicated or non-dedicated management interfaces. An App Stack site supports storage classes with dynamic PVCs. In the case of a dedicated management interface, bonded interfaces are supported for regular use. In the case of a non-dedicated management interface, the fallback interface for internet connectivity is a bonded interface, which must be assigned a static IP address. The example image also shows TOR switches that can be connected with a single bond interface using MC-LAG or with 2 bond interfaces at the L3 level. An App Stack site also supports establishing BGP peering with the TORs.
Bonding of interfaces is supported in LACP active/active or active/backup modes.
The following image shows a sample reference deployment of App Stack DC cluster sites deployed at different geographies.

Figure: App Stack DC Cluster Site Reference Deployment
The apps deployed in the managed K8s clusters in these sites can directly communicate with each other in CI/CD use cases, or users can access the managed K8s API using the kubeconfig.
The following image shows service interaction for services deployed in different DC cluster sites:

Figure: Service Communication Between Clusters of Different DCs
The following image shows service interaction for services deployed in the same DC cluster site:

Figure: Service Communication Between Clusters of Same DC
The following image shows service deployment in active/active and active/standby mode for remote clients:

Figure: Service Deployment in Active-Active and Active-Standby Modes
Note: For instructions on accessing managed K8s clusters and deploying apps, see the Create and Deploy Managed K8s guide.
Prerequisites
- An F5 Distributed Cloud account. If you do not have an account, see Getting Started with Console.
- One or more physical DC devices with interfaces that have Internet reachability. An App Stack site is supported only for the F5 IGW, ISV, and Dell Edger 640 Series devices.
- Resources required per node: minimum 8 vCPUs, 32 GB RAM, and 100 GB disk storage. For a full listing of the required resources, see the Customer Edge Site Sizing Reference guide. All the nodes in a given CE site should have the same compute, memory, and disk storage. When deploying in cloud environments, these nodes should use the same instance flavor.
- Allow traffic from and to the Distributed Cloud public IP addresses in your network and allowlist the related domain names. See the F5 Customer Edge IP Address and Domain Reference for Firewall or Proxy Settings guide for the list of IP addresses and domain names.
- Internet Control Message Protocol (ICMP) must be open between the CE nodes on the Site Local Outside (SLO) interfaces. This is needed for intra-cluster communication checks.
Important: After you deploy the CE site, the IP address and MAC address of the SLO interface cannot be changed.
Deploy Site
Perform the steps provided in the following sections to deploy an App Stack site.
Create App Stack Site Object
Log into F5 Distributed Cloud Console and perform the following steps:
Step 1: Start creating an App Stack site object.
- In the Multi-Cloud Network Connect workspace, navigate to Manage > Site Management > App Stack Sites.
- Select Add App Stack Site to open the App Stack site configuration form.

Figure: Navigate to App Stack Site Configuration
- Enter a name for your App Stack site object in the Metadata section.
- Optionally, select labels and add a description.
Note: Adding a Fleet label to an App Stack site is not supported.
Step 2: Set the fields for the basic configuration section.
- From the Generic Server Certified Hardware menu, select an option. The isv-8000-series-voltstack-combo option is selected by default.
- Enter the names of the master nodes in the List of Master Nodes field. Select Add item to add more than one entry. Only a single master node or 3 master nodes are supported.
- Optionally, enter the names of worker nodes in the List of Worker Nodes field. Select Add item to add more than one entry.
- Optionally, enter the following fields:
  - Geographical Address: This derives the geographical coordinates.
  - Coordinates: Latitude and longitude.

Figure: App Stack Site Basic Configuration Section
Step 3: Configure bond interfaces.
In the Bond Configuration section, perform the following:
- From the Select Bond Configuration menu, select Configure Bond Interfaces.

Figure: Bond Configuration Section
- Select Configure to open the bond interface configuration page.
- Select Add Item under the Bond Devices List field.

Figure: Bond Devices Section
- Select the Bond Device Name field and select See Common Values. You can also type a custom name and click Add item to set the device name while also adding it to the existing options.
- Select the Member Ethernet Devices field and select See Common Values to choose an Ethernet device that is part of this bond. Use the Add item option to add more devices.
- From the Select Bond Mode menu, select the bonding mode. LACP (802.3ad) is selected by default, with a default LACP packet interval of 30 seconds. You can set the bond mode to Active/Backup to have the bond members function in an active/backup combination.

Figure: Bond Devices List
- Select Add Item.

Note: Use the Add item option in the Bond Devices List to add more than one bond device.

- Select Apply on the Bond Devices page to apply the bond configuration.
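If you configure the site object through the API instead of Console, the bond settings above correspond to a fragment roughly like the following sketch. The field names here only approximate the Console options and are not the exact App Stack site API schema; the device names are placeholders.

```yaml
# Illustrative sketch only -- field names approximate the Console options
# (Bond Device Name, Member Ethernet Devices, bond mode), not the exact
# App Stack site API schema.
bond_device_list:
  bond_devices:
    - name: bond0              # Bond Device Name (placeholder)
      devices:                 # Member Ethernet Devices
        - eth0
        - eth1
      lacp:
        rate: 30               # default LACP packet interval, in seconds
      # alternatively, active_backup mode instead of lacp
```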
Step 4: Perform network configuration.
- In the Network Configuration section, perform the following:
  - Select Custom Network Configuration from the Select to Configure Networking menu.
  - Select Configure to open the network configuration page.
Step 4.1: Perform site local network configuration.
The site local network is applied with a default configuration. Perform the following steps to apply a custom configuration:
- Select Configure Site Local Network from the Select Configuration For Site Local Network menu.
- Select Configure.
- Optionally, set labels for the Network Labels field in the Network Metadata section.
- Optionally, select Show Advanced Fields in the Static Routes section.
- Select Manage Static Routes from the Manage Static Routes menu.
- Select Add Item and perform the following:
  - Enter IP prefixes in the IP Prefixes section. These prefixes will be mapped to the same next hop and attributes.
  - Select IP Address, Interface, or Default Gateway from the Select Type of Next Hop menu and specify an IP address or interface accordingly. In the case of Interface, you can select an existing interface or create a new interface using the options for the interface field.
  - Optionally, select one or more options for the Attributes field to set attributes for the static route.
  - Select Add Item.
- Optionally, enable the Show Advanced Fields option in the Dc Cluster Group section and perform the following:
  - Select Member of DC Cluster Group from the Select DC Cluster Group menu.
  - In the Member of DC Cluster Group field, select a DC cluster group. You can also select Create New DC Cluster Group to create a new cluster group. This adds the site to a DC cluster group, enabling full mesh connectivity between the members of the group.

Figure: Site Local Network Configuration
- Select Apply.
Note: For more information, see the Configure DC Cluster Group guide.
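When the site local network is managed through the API, the static-route settings configured above map to a fragment along the lines of the following sketch. The field names and the attribute value are illustrative approximations of the Console options, not the verified API schema, and the prefixes and next-hop address are placeholders.

```yaml
# Illustrative sketch only -- approximates the static-route options above
# (IP Prefixes, next-hop type, attributes), not the exact API schema.
static_routes:
  - ip_prefixes:
      - 10.10.0.0/16           # prefixes mapped to the same next hop
      - 10.20.0.0/16
    ip_address: 192.168.1.1    # next hop of type "IP Address"
    attrs:
      - ROUTE_ATTR_INSTALL_FORWARDING   # example attribute (assumed name)
```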
Step 4.2: Perform interface configuration.
Bootstrap interface configuration is applied by default, and it is based on the certified hardware.
Perform the following to apply custom interface configuration:
- Select List of Interface from the Select Interface Configuration menu.
- Click Configure. This opens the interface list configuration page.
- Select Add Item in the List of Interface table.
- Optionally, enter an interface description and select labels.
- Select an option from the Interface Config Type menu, and set one of the interface types using the following instructions:
Ethernet Interface:
- Select Ethernet Interface and click Configure. This opens the Ethernet interface configuration page.
- Select an option from the Ethernet Device menu using See Common Values. You can also type a custom name to set the device name while also adding it to the existing options.
- Select Cluster, All Nodes of the Site or Specific Node from the Select Configuration for Cluster or Specific Node menu. In the case of a specific node, select the node from the displayed options of the Specific Node field. You can also type a custom name to set the node name while also adding it to the existing options.
- Select Untagged or VLAN Id from the Select Untagged or VLAN tagged menu. In the case of VLAN Id, enter the VLAN ID in the VLAN Id field.
- Select an option from the Select Interface Address Method menu in the IP Configuration section. DHCP Client is selected by default. If you select a DHCP server instead, click Configure, set the DHCP server configuration per the options displayed on the DHCP server configuration page, and click Apply. This example shows the interface as a DHCP client for brevity.
- Select the site local outside or site local inside network from the Select Virtual Network menu in the Virtual Network section. Site Local Network (Outside) is selected by default.
- Select whether the interface is primary from the Select Primary Interface menu. The default is not a primary interface. Ensure that you set only one interface as primary.
- Select Apply.
Dedicated Interface:
- Select Dedicated Interface from the Interface Config Type menu.
- Select a device name from the Interface Device menu using See Common Values. You can also type a custom name to set the device name while also adding it to the existing options.
- Select Cluster, All Nodes of the Site or Specific Node from the Select Configuration for Cluster or Specific Node menu. In the case of a specific node, select the node from the displayed options of the Specific Node menu. You can also type a custom name to set the node name while also adding it to the existing options.
- Select whether the interface is primary in the Select Primary Interface field. The default is not a primary interface. Ensure that you set only one interface as primary.
- Select Add Item.
- Optionally, add more than one interface using the Add item option on the List of Interface page.
- Select Apply.
Step 4.3: Perform security configuration.
The firewall policies and forward proxy policies are disabled by default.
In the Security Configuration section, perform the following to apply network and forward policies:
- From the Firewall Policy menu, optionally add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies. Select an existing firewall policy, or select Add Item to create and apply a firewall policy (or Configure for an enhanced firewall policy).
- From the Forward Proxy menu, select an option:
  - Select Enable Forward Proxy With Allow All Policy to allow all requests.
  - Select Enable Forward Proxy and Manage Policies to apply specific forward proxy policies. Select a forward proxy policy from the Forward Proxy Policies menu. You can also create and apply a new forward proxy policy using the Add Item option, and apply more than one forward proxy policy the same way.
- Optionally, configure global networks in the Global Connections section and BGP settings in the Advanced Configuration section. This example does not include the configuration for these two options.
Step 5: Optionally, perform storage configuration.
Note: Storage configuration for an App Stack site is similar to that of a Fleet.
- From the Select to Configure Storage menu, select Custom Storage Configuration.
- Click Configure.

Figure: Configure Storage Devices
- Select List of Storage Interface from the Select Storage Interface Configuration menu, and then click Configure to configure the interface.
Note: See Interfaces for instructions on creating network interfaces. See Create Fleet for instructions on how to configure networking using a Fleet for multi-node sites.
- From the Select Storage Device Configuration menu, select List of Storage Devices.

Figure: Storage Devices Option
- Click Add Item. This opens the storage devices configuration page.

Figure: Storage Devices Parameters
- Enter a name in the Storage Device field. Ensure that this name corresponds to the class in which the storage device falls. The classes are used by vK8s for storage-related actions.
- Select an option from the Select Storage Device to Configure menu and perform one of the following based on the option you chose:
NetApp Trident
- Select an option from the Select NetApp Trident Backend menu. ONTAP NAS is selected by default.
- Select an option from the Backend Management LIF menu. Backend Management LIF IP Address is selected by default; enter an IP address for the backend management logical interface in the Backend Management LIF IP Address field. If you select the name option instead, enter the backend management interface name.

Figure: NetApp Device
-
Select an option from the
Backend Data LIFmenu. TheBackend Data LIF IP Addressis selected by default. Enter an IP address for the backend data interface in theBackend Data LIF IP Addressfield. In case you select the name option, enter the backend data interface name. -
Enter a username in the
Usernamefield. ClickConfigurefor thePasswordfield. Enter your password in theSecretpage and clickBlindfold. Wait for the blindfold process to complete encrypting your password, and then clickApply.

Figure: NetApp Device Password
- Click Configure for the Client Certificate field. Enter the text for your secret on the Secret page and click Blindfold. Wait for the blindfold process to finish encrypting your certificate, and then click Apply.
- Enter a certificate in the Trusted CA Certificate field.
- Enter the CIDRs for your K8s nodes in the Auto Export CIDRs field if the auto export policy is enabled for your storage device.
- If you are configuring virtual storage, then in the Virtual Storage Pools section, enter a label and region for the storage, and click Add item one or more times to add pool labels and pool zones.

Figure: NetApp Device AutoExport CIDRs
- Click Add Item.
Pure Storage Service Orchestrator
- Enter a cluster identifier in the Cluster ID field. This is used to identify the volumes used by the datastore. Alphanumeric characters and underscores are allowed.

Figure: Pure Storage Orchestrator Device
Note: A unique cluster ID is required when multiple K8s clusters use the same storage device.
- Click Add Item to add a flash array endpoint.

Figure: Pure Storage Flash Arrays
- In the Management Endpoint IP Address field, enter an IP address.

Figure: Flash Array Endpoint
- Click Configure under the API Token field.
- Enter the token in the secret field and click Blindfold.
- Click Apply after the blindfold encryption is completed.
- Optionally, select labels for this endpoint.
- Click Add Item.
- Click Apply.
- Click Configure under the Flash Blade field.
- Click Add Item to add a flash blade endpoint.

Figure: Pure Storage Flash Blade
- Enter the IP address in the Management Endpoint IP Address field.

Figure: Pure Storage Flash Blade Endpoint
- Click Configure under the API Token field.
- Enter the token in the secret field and then click Blindfold.
- Click Apply after the blindfold encryption is completed.
- Enter the IP address in the NFS IP Address field.
- Optionally, add labels for this endpoint.
- Click Add Item.
Note: You can change the management or NFS endpoints to specify management endpoint name or NFS DNS name.
- Click Apply.
Custom Storage
The custom storage option is used for storage devices or external storage deployed outside F5 Distributed Cloud Services. For instance, this option allows you to configure custom storage classes for AWS, GCP, and so on.
- Select Custom Storage from the Select Storage Device to Configure menu.
- Click Add Item.

Figure: Custom Storage Device
Note: You can add multiple devices using the Add item option.
HPE Storage
- Select HPE Storage for the Select Storage Device to Configure field.
- In the Storage Server Name field, enter a name.
- In the Storage Server IP address field, enter an IP address.
- In the Storage server Port field, enter a port number.
- In the Username field, enter the username used to connect to the HPE storage device.
- To configure the password, click Configure. Then perform the following:
  - For the Blindfolded Secret option, complete the configuration by entering the secret text to blindfold.
  - For the Clear Secret option, enter the secret text in plaintext or Base64 format.
  - Click Apply.
  - Click Apply again to complete the configuration.
Storage Classes
- From the Select Configuration for Storage Classes menu, select Add Custom Storage Class.
- Click Add Item.
Note: You can use default storage classes supported in K8s or you can customize the classes. In case you are using default classes, ensure that the storage device names correspond to the K8s classes.
- Enter a name for this storage class in the Storage Class Name field.
- Check the Default Storage Class checkbox if this will be the default storage class.
- Choose an option from the Select Storage Class Configuration drop-down menu.
NetApp Trident
- Click Add Item.
Pure Storage Service Orchestrator
- Select an option from the Backend menu. The block option is selected by default.
- Optionally, enter IOPS and bandwidth limits in their respective fields.
- Click Add Item.

Figure: Pure Storage Orchestrator Class
Custom Storage Class
- Select Add Custom Storage Class from the Select Configuration for Storage Classes menu.
- Click Add Item. This opens the Storage Class Parameters page.
- Enter a name in the Storage Class Name field. This name will appear in K8s.
- Enter a name in the Storage Device field. This is the storage device that will be used by this class, as entered earlier in the storage device configuration.
- Optionally, enter a storage class description.
- Optionally, check the Default Storage Class box to make this storage class the default for the K8s cluster.
- Select Custom Storage from the Select Storage Class Configuration menu.
- Enter the storage class YAML. It must contain the following configuration:
apiVersion: storage.k8s.io/v1
kind: StorageClass
...

Figure: Custom Storage Class Parameters
- Enter a Reclaim Policy.
- Optionally, check the Allow Volume Expansion box.
- Optionally, enter generic/advanced parameters.
- Click Add Item to save the storage class parameters.
- Click Apply to save the custom storage configuration.
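Put together, the fields above produce a standard K8s StorageClass manifest. The following is a sketch for an external backend; the provisioner name and the `type` parameter are hypothetical placeholders to replace with the values for your storage vendor.

```yaml
# Example custom StorageClass. The provisioner and parameters are
# placeholders -- substitute the values for your external storage backend.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-storage-class          # Storage Class Name shown in K8s
  annotations:
    # Corresponds to the Default Storage Class checkbox
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.vendor.com/provisioner   # hypothetical provisioner
parameters:
  type: ssd                           # backend-specific parameter (assumed)
reclaimPolicy: Delete                 # Reclaim Policy field
allowVolumeExpansion: true            # Allow Volume Expansion checkbox
```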
Step 6: Perform advanced configuration.
- In the Advanced Configuration section, enable the Show Advanced Fields option.

Figure: Advanced Configuration
- Optionally, select GPU Enabled or vGPU Enabled from the Enable/Disable GPU menu. This enables GPU capability for the site hardware.
- Optionally, configure managed K8s for your site per the following guidelines:
  - Select Enable Site Local K8s API access from the Site Local K8s API access menu.
  - Click the Enable Site Local K8s API access field and select a K8s cluster object from the list. You can also select Create new k8s Cluster to create and apply the K8s cluster object.
- Optionally, enable logs streaming and either select a log receiver or create a new log receiver.
- Optionally, select a USB device policy. You can deny all USB devices, allow all USB devices, or allow specific USB devices.
- Optionally, specify a Distributed Cloud Services software version. The default is the Latest SW Version.
- Optionally, specify an operating system version. The default is the Latest OS Version.
Note: The advanced configuration also includes managed K8s configuration. This is an important step if you want to enable managed K8s access, which is possible only at the time of creating the App Stack site and cannot be enabled later by updating the site.
- To enable SR-IOV for your site:
  - From the SR-IOV interfaces menu, select Custom SR-IOV interfaces Configuration.
  - Click Add Item.

Figure: Enable Custom SR-IOV Interface
- In the Name of physical interface field, enter the name of the physical adapter that will use a virtual function (VF).
- In the Number of virtual functions field, enter the number of VFs used for the physical interface.
- If you are using SR-IOV with either a virtual machine (VNF) or a Data Plane Development Kit (DPDK) application in a pod (CNF): in the Number of virtual functions reserved for vfio field, enter the number of virtual functions reserved for use with VNFs and DPDK-based CNFs. A VNF (VM) needs to use vfio even if it is not running a DPDK application. The number of VFs reserved for vfio cannot be more than the total number of VFs configured.
Note: If you use SR-IOV with a virtual machine or DPDK-based pod, the network name (subnet) in the VM/pod manifest file should have -vfio appended to it.
- Click Apply to set the SR-IOV interface configuration.

Figure: Set Number of Virtual Functions
Note: After you enable this SR-IOV interface configuration option, your site will reboot if it is an existing site, provisioning new VFs. For new sites, the provisioning process will occur during initial site deployment. It may take several minutes for an existing site to reboot after enabling this feature.
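As an illustration of the -vfio naming convention above, a pod manifest for a DPDK-based CNF might look like the following sketch. It assumes a Multus-style network annotation; `sriov-net` is a hypothetical network (subnet) name and the image is a placeholder, so adapt both to your cluster.

```yaml
# Example pod manifest for a DPDK-based CNF using a vfio-reserved VF.
# "sriov-net" is a hypothetical network name; per the note above,
# -vfio is appended because the workload uses vfio.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-cnf
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net-vfio   # assumed Multus-style key
spec:
  containers:
    - name: dpdk-app
      image: registry.example.com/dpdk-app:latest # placeholder image
```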
Step 7: Complete creating the App Stack site.
Select Save and Exit to complete creating the App Stack site.
Note: You can also configure multiple interfaces for Virtual Machines (VM) or containers running in a K8s cluster within an App Stack Site. For instructions, see Create Workloads with Multiple Network Interfaces.
Perform Site Registration
After you create the App Stack site object, the site appears in Console with a Waiting for Registration status. Install the nodes and ensure that the cluster name and the host names of your nodes match the App Stack site name and node names in the Basic Configuration section of the App Stack site you configured.
Note: See Create VMware Site, Create KVM Site, and Create Baremetal Site for node installation instructions.
Perform registration per the following instructions:
- Navigate to Manage > Site Management > Registrations.
- Choose your site from the list of sites displayed under the Pending Registrations tab.
- Select the approve option (blue checkmark).
- Ensure that the cluster name and hostname match those of the App Stack site.
- Select Accept to complete registration. The site then turns online.
Concepts
- System Overview
- Core Concepts
- Networking
- F5 Distributed Cloud - Customer Edge
- F5 Distributed Cloud Site