Create App Stack Site

Objective

This document provides instructions on how to install single-node or multi-node F5® Distributed Cloud App Stack sites on clusters of private Data Center (DC) devices. An App Stack site is a Distributed Cloud Customer Edge (CE) site engineered specifically for managing DC clusters. To know more about Distributed Cloud sites, see Site. The F5 Distributed Cloud Platform also supports creating physical K8s (also called managed K8s) clusters for managing your applications on the created App Stack sites. See Create and Deploy Managed K8s for instructions on creating the managed K8s clusters.

Clusters of private DCs that are set up on geographically separated locations require an established direct communication link with each member of the cluster. An App Stack site helps achieve that while ensuring safety and reliability. You can define a DC cluster group in an App Stack site to which member nodes can be added dynamically. Such added members become part of the cluster group and automatically get full mesh connectivity with all other members of the group.

Using the instructions provided in this document, you can deploy a single-node or multi-node App Stack site on your private DC, define a cluster group, and add members to the cluster group. You can also enable local API access for the managed K8s running on the sites so that it can be used like a regular K8s cluster.


App Stack Site vs Other CE Sites

The App Stack site differs from the regular Distributed Cloud CE site that can be deployed with App Stack functionality. An App Stack site simplifies the task of managing distributed apps across DC clusters while offering the option to establish full mesh connectivity among the sites. In the case of regular sites with App Stack functionality, you need to explicitly create and manage site mesh groups for that purpose. With an App Stack site, you just add the site to the DC cluster group and connectivity is established automatically.

Also, an App Stack site provides support for managing local K8s while controlling communication between services of different namespaces. While regular Distributed Cloud sites provide virtual K8s (vK8s), vK8s is scoped per namespace per site, whereas the managed local K8s is deployed across all namespaces of an App Stack site with a single kubeconfig. Therefore, if you require the ability to manage distributed apps across your private DC clusters with full mesh connectivity, an App Stack site with managed K8s is useful. For other requirements, you can use regular CE sites.


Reference Architecture

This reference architecture shows the recommended practice for setting up geographically distributed DC clusters using App Stack sites, privately connecting them using a DC cluster group, and enabling managed K8s for deploying apps in those clusters.

The following image shows a sample reference architecture of the App Stack DC cluster site.

Figure: App Stack DC Cluster Site Reference Architecture

The DC clusters represent App Stack sites. These consist of interfaces towards the storage network, interfaces in LACP towards the DC switches, and dedicated or non-dedicated management interfaces. An App Stack site supports storage classes with dynamic PVCs. In the case of a dedicated management interface, bonded interfaces are supported for regular use. In the case of a non-dedicated management interface, the fallback interface for internet connectivity is a bonded interface, which must be assigned a static IP address. The example image also shows TOR switches that can be connected with a single bond interface using MC-LAG or with two bond interfaces at the L3 level. An App Stack site also supports establishing BGP peering with the TORs.

Bonding of interfaces is supported in LACP active/active or active/backup modes.

The following image shows a sample reference deployment of App Stack DC cluster sites deployed at different geographic locations.

Figure: App Stack DC Cluster Site Reference Deployment

The apps deployed in the managed K8s clusters on these sites can communicate directly with each other in CI/CD use cases, or users can access the managed K8s API using the kubeconfig.

The following image shows service interaction for services deployed in different DC cluster sites:

Figure: Service Communication Between Clusters of Different DCs

The following image shows service interaction for services deployed in the same DC cluster site:

Figure: Service Communication Between Clusters of Same DC

The following image shows service deployment in active/active and active/standby mode for remote clients:

Figure: Service Deployment in Active-Active and Active-Standby Modes

Note: For instructions on accessing managed K8s clusters and deploying apps, see the Create and Deploy Managed K8s guide.


Prerequisites

  • An F5 Distributed Cloud Account. If you do not have an account, see Create an Account.

  • One or more physical DC devices with interfaces that have internet reachability. An App Stack site is supported only for the F5 IGW, ISV, and Dell Edger 640 Series devices.

  • Resources required per node: Minimum 4 vCPUs and 14 GB RAM.


Deploy Site

Perform the steps provided in the following chapters to deploy an App Stack site.

Create App Stack Site Object

Log into F5 Distributed Cloud Console and perform the following steps:

Step 1: Start creating an App Stack site object.
  • In Multi-Cloud Network Connect service, navigate to Manage > Site Management > App Stack Sites.

  • Select Add App Stack Site to open the App Stack site configuration form.

Figure: Navigate to App Stack Site Configuration
  • Enter a name in the Metadata section for your App Stack site object.

  • Optionally, select labels and add a description.

Note: Adding a Fleet label to an App Stack site is not supported.

Step 2: Set the fields for the basic configuration section.
  • From the Generic Server Certified Hardware menu, select an option. The isv-8000-series-voltstack-combo is selected by default.

  • Enter the names of the master nodes in the List of Master Nodes field. Select Add item to add more than one entry. Only a single master node or three master nodes are supported.

  • Optionally, enter the names of worker nodes in the List of Worker Nodes field. Select Add item to add more than one entry.

  • Optionally, enter the following fields:

    • Geographical Address: This derives geographical coordinates.

    • Coordinates: Latitude and longitude.

Figure: App Stack Site Basic Configuration Section
Step 3: Configure bond interfaces.

In the Bond Configuration section, perform the following:

  • From the Select Bond Configuration menu, select Configure Bond Interfaces.
Figure: Bond Configuration Section
  • Select Configure to open bond interface configuration page.

  • Select Add Item under the Bond Devices List field.

Figure: Bond Devices Section
  • Click the Bond Device Name field and select See Common Values to choose a device name. You can also type a custom name and click Add item to set the device name while also adding it to the existing options.

  • Click the Member Ethernet Devices field and select See Common Values to choose the Ethernet device that is part of this bond. Use the Add item option to add more devices.

  • From the Select Bond Mode menu, select the bonding mode. LACP (802.3ad) is selected by default for the bonding mode with the default LACP packet interval as 30 seconds. You can set the bond mode to Active/Backup to have the bond members function in an active and backup combination.

Figure: Bond Devices List
  • Select Add Item.

Note: Use the Add item option in the Bond Devices List to add more than one bond device.

  • Select Apply in the Bond Devices page to apply the bond configuration.
Step 4: Perform network configuration.
  • In the Network Configuration section, perform the following:

    • Select Custom Network Configuration from the Select to Configure Networking menu.

    • Select Configure to open the network configuration page.

Step 4.1: Perform site local network configuration.

Site local network is applied with default configuration. Perform the following set of steps to apply custom configuration:

  • Select Configure Site Local Network from the Select Configuration For Site Local Network menu.

  • Select Configure.

  • Optionally, set labels for the Network Labels field in the Network Metadata section.

  • Optionally, select Show Advanced Fields in the Static Routes section.

  • Select Manage Static Routes from the Manage Static Routes menu.

  • Select Add Item and perform the following:

    • Enter IP prefixes in the IP Prefixes section. These prefixes will be mapped to the same next hop and attributes.

    • Select IP Address, Interface, or Default Gateway from the Select Type of Next Hop menu and specify the IP address or interface accordingly. In the case of Interface, you can select an existing interface or create a new one using the options for the interface field.

    • Optionally, select one or more options for the Attributes field to set attributes for the static route.

    • Select Add Item.

  • Optionally, enable the Show Advanced Fields option in the DC Cluster Group section and perform the following:

    • Select Member of DC Cluster Group from the Select DC Cluster Group menu.

    • In the Member of DC Cluster Group field, select a DC cluster group. You can also select Create New DC Cluster Group to create a new cluster group. This adds the site to a DC cluster group, enabling full mesh connectivity between the members of the group.

Figure: Site Local Network Configuration
  • Select Apply.

Note: For more information, see the Configure DC Cluster Group guide.

Step 4.2: Perform interface configuration.

Bootstrap interface configuration is applied by default, and it is based on the certified hardware.

Perform the following to apply custom interface configuration:

  • Select List of Interface from the Select Interface Configuration menu.

  • Click Configure. This opens another interface list configuration page.

  • Select Add Item in the List of Interface table.

  • Optionally, enter an interface description and select labels.

  • Select an option from the Interface Config Type menu, and set one of the interface types using the following instructions:

Ethernet Interface:
  • Select Ethernet Interface and click Configure. This opens the Ethernet interface configuration page.

  • Select an option from the Ethernet Device menu using See Common Values. You can also type a custom name to set the device name while also adding it to the existing options.

  • Select Cluster, All Nodes of the Site or Specific Node from the Select Configuration for Cluster or Specific Node menu. In the case of a specific node, select the node from the displayed options of the Specific Node field. You can also type a custom name to set the node name while also adding it to the existing options.

  • Select Untagged or VLAN Id from the Select Untagged or VLAN tagged menu. If you select VLAN Id, enter the VLAN ID in the VLAN Id field.

  • Select an option from the Select Interface Address Method menu in the IP Configuration section. The DHCP Client option is selected by default. If you select DHCP Server, click Configure, set the DHCP server configuration per the options displayed on the DHCP server configuration page, and click Apply. This example shows the interface as a DHCP client for brevity.

  • Select site local outside or site local inside network from the Select Virtual Network menu in the Virtual Network section. Site Local Network (Outside) is selected by default.

  • Specify whether the interface is primary using the Select Primary Interface menu. By default, the interface is not primary. Ensure that you set only one interface as primary.

  • Select Apply.

Dedicated Interface:
  • Select Dedicated Interface from the Interface Config Type menu.

  • Select a device name from the Interface Device menu using See Common Values. You can also type a custom name to set the device name while also adding it to the existing options.

  • Select Cluster, All Nodes of the Site or Specific Node from the Select Configuration for Cluster or Specific Node menu. In the case of a specific node, select the node from the displayed options of the Specific Node menu. You can also type a custom name to set the node name while also adding it to the existing options.

  • Specify whether the interface is primary in the Select Primary Interface field. By default, the interface is not primary. Ensure that you set only one interface as primary.

  • Select Add Item.

  • Optionally, add more than one interface using the Add item option in the List of Interface page.

  • Select Apply.

Step 4.3: Perform security configuration.

The firewall policies and forward proxy policies are disabled by default.

In the Security Configuration section, perform the following to apply network and forward policies:

  • From the Firewall Policy menu, optionally add a firewall policy by selecting Active Firewall Policies or Active Enhanced Firewall Policies. Select an existing firewall policy, or create and apply a new one using the Add Item option (or Configure for the enhanced version).

  • From the Forward Proxy menu, select an option:

    • Select Enable Forward Proxy With Allow All Policy to allow all requests.

    • Select Enable Forward Proxy and Manage Policies to apply specific forward proxy policies. Select a forward proxy policy from the Forward Proxy Policies menu. You can also create and apply a new forward proxy policy using the Add Item option. You can apply more than one forward proxy policy using the Add Item option.

  • Optionally, you can configure global networks in the Global Connections section and BGP settings in the Advanced Configuration section. This example does not include these two configuration options.

Step 5: Optionally, perform storage configuration.

Note: Storage configuration for an App Stack site is similar to that of a Fleet.

  • From the Select to Configure Storage menu, select Custom Storage Configuration.

  • Click Configure.

Figure: Configure Storage Devices
  • Select List of Storage Interface from the Select Storage Interface Configuration menu, and then click Configure to configure the interface.

Note: See Interfaces for instructions on creating network interfaces. See Multi Node Site Network Setup Using Fleet for instructions on how to configure networking using a Fleet for multi-node sites.

  • From the Select Storage Device Configuration menu, select List of Storage Devices.
Figure: Storage Devices Option
  • Click Add Item. This opens the storage devices configuration page.
Figure: Storage Devices Parameters
  • Enter a name in the Storage Device field. Ensure that this name corresponds to the class in which the storage device falls. The classes are used by vK8s for storage-related actions.

  • Select an option from the Select Storage Device to Configure menu and perform one of the following based on the option you chose:

NetApp Trident
  • Select an option from the Select NetApp Trident Backend menu. The ONTAP NAS is selected by default.

  • Select an option from the Backend Management LIF menu. The Backend Management LIF IP Address is selected by default. Enter an IP address for the backend management logical interface in the Backend Management LIF IP Address field. If you select the name option, enter the backend management interface name instead.

Figure: NetApp Device
  • Select an option from the Backend Data LIF menu. The Backend Data LIF IP Address is selected by default. Enter an IP address for the backend data interface in the Backend Data LIF IP Address field. If you select the name option, enter the backend data interface name instead.

  • Enter a username in the Username field. Click Configure for the Password field. Enter your password in the Secret page and click Blindfold. Wait for the blindfold process to complete encrypting your password, and then click Apply.

Figure: NetApp Device Password
  • Click Configure for the Client Certificate field. Enter the text for your secret on the Secret page and click Blindfold. Wait for the blindfold process to finish encrypting your secret, and then click Apply.

  • Enter a certificate in the Trusted CA Certificate field.

  • Enter the CIDRs for your K8s nodes in the Auto Export CIDRs field if the auto export policy is enabled for your storage device.

  • If you are configuring virtual storage, then in the Virtual Storage Pools section, enter a label and region for the storage, and click Add item one or more times to add pool labels and pool zones.

Figure: NetApp Device AutoExport CIDRs
  • Click Add Item.
Pure Storage Service Orchestrator
  • Enter a cluster identifier in the Cluster ID field. This is used to identify the volumes used by the datastore. Alphanumeric characters and underscores are allowed.
Figure: Pure Storage Orchestrator Device

Note: A unique cluster ID is required when multiple K8s clusters use the same storage device.

  • Click Add Item to add a flash array endpoint.
Figure: Pure Storage Flash Arrays
  • In the Management Endpoint IP Address field, enter an IP address.
Figure: Flash Array Endpoint
  • Click Configure under the API Token field.

  • Enter the token in the secret field and click Blindfold.

  • Click Apply after the blindfold encryption is completed.

  • Optionally, select labels for this endpoint.

  • Click Add Item.

  • Click Apply.

  • Click Configure under the Flash Blade field.

  • Click Add Item to add a flash blade endpoint.

Figure: Pure Storage Flash Blade
  • Enter the IP address in the Management Endpoint IP Address field.
Figure: Pure Storage Flash Blade Endpoint
  • Click Configure under the API Token field.

  • Enter the token in the secret field and then click Blindfold.

  • Click Apply after the blindfold encryption is completed.

  • Enter the IP address in the NFS IP Address field.

  • Optionally, add labels for this endpoint.

  • Click Add Item.

Note: You can change the management or NFS endpoints to specify management endpoint name or NFS DNS name.

  • Click Apply.
Custom Storage

The custom storage classes option is used for storage devices or external storage that is deployed outside F5 Distributed Cloud Services. For instance, this option allows you to configure custom storage classes for AWS, GCP, and so on.

  • Select Custom Storage from the Select Storage Device to Configure menu.

  • Click Add Item.

Figure: Custom Storage Device

Note: You can add multiple devices using the Add item option.

HPE Storage
  • Select HPE Storage for the Select Storage Device to Configure field.

  • In the Storage Server Name field, enter a name.

  • In the Storage Server IP address field, enter an IP address.

  • In the Storage Server Port field, enter a port number.

  • In the Username field, enter the username used to connect to the HPE storage device.

  • To configure the password, click Configure. Then perform the following:

    • For the Blindfolded Secret option, complete the configuration by entering the secret text to blindfold.

    • For the Clear Secret option, enter the secret text in plaintext format or Base64.

    • Click Apply.

    • Click Apply again to complete configuration.

Storage Classes
  • From the Select Configuration for Storage Classes menu, select Add Custom Storage Class.

  • Click Add Item.

Note: You can use default storage classes supported in K8s or you can customize the classes. In case you are using default classes, ensure that the storage device names correspond to the K8s classes.

  • Enter a name for this storage class in the Storage Class Name field.

  • Check the Default Storage Class checkbox if this will be the default storage class.

  • Choose an option from the Select Storage Class Configuration drop-down menu.

NetApp Trident
  • Click Add Item.
Pure Storage Service Orchestrator
  • Select an option from the Backend menu. The block option is selected by default.

  • Optionally, enter IOPS and bandwidth limits in their respective fields.

  • Click Add Item.

Figure: Pure Storage Orchestrator Class
Custom Storage Class
  • Select Add Custom Storage Class from the Select Configuration for Storage Classes menu.

  • Click Add Item. This opens the Storage Class Parameters page.

  • Enter a name in the Storage Class Name field. This name will appear in K8s.

  • Enter a name in the Storage Device field. This is the storage device that will be used by this class, as entered earlier in the storage device configuration.

  • Optionally, enter a storage class description.

  • Optionally, check the Default Storage Class box to make this storage class the default for the K8s cluster.

  • Select Custom Storage from the Select Storage Class Configuration menu.

  • Enter the storage class YAML. It must include the following configuration:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    ...

Figure: Custom Storage Class Parameters
  • Enter a Reclaim Policy.

  • Optionally, check the Allow Volume Expansion box.

  • Optionally, enter generic/advanced parameters.

  • Click Add Item to save the storage class parameters.

  • Click Apply to save the custom storage configuration.
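
For reference, a minimal StorageClass manifest of the kind entered in the steps above might look like the following sketch. The provisioner and parameter values are hypothetical placeholders; substitute the values appropriate for your storage backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-custom-class           # corresponds to the Storage Class Name field (hypothetical name)
provisioner: example.vendor/nfs   # hypothetical provisioner for your storage device
reclaimPolicy: Retain             # corresponds to the Reclaim Policy field
allowVolumeExpansion: true        # corresponds to the Allow Volume Expansion checkbox
parameters:
  fsType: ext4                    # example of a generic/advanced parameter
```
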

Step 6: Perform advanced configuration.
  • In the Advanced Configuration section, enable the Show Advanced Fields option.
Figure: Advanced Configuration
  • Optionally, select GPU Enabled or vGPU Enabled from the Enable/Disable GPU menu. This enables GPU capability for the site hardware.

  • Optionally, configure managed K8s for your site per the following guidelines:

    • Select Enable Site Local K8s API access from the Site Local K8s API access menu.

    • Click on the Enable Site Local K8s API access field and select a K8s cluster object from the list. You can also select Create new k8s Cluster to create and apply the K8s cluster object.

  • Optionally, enable logs streaming and either select a log receiver or create a new log receiver.

  • Optionally, select a USB device policy. You can deny all USB devices, allow all USB devices, or allow specific USB devices.

  • Optionally, specify a Distributed Cloud Services software version. The default is the Latest SW Version.

  • Optionally, specify an operating system version. The default is the Latest OS Version.

Note: The advanced configuration also includes the managed K8s configuration. This is an important step if you want to enable managed K8s access. Managed K8s access can be enabled only at the time of creating the App Stack site; it cannot be enabled later by updating the App Stack site.

  • To enable SR-IOV for your site:

    • From the SR-IOV interfaces menu, select Custom SR-IOV interfaces Configuration.

    • Click Add Item.

Figure: Enable Custom SR-IOV Interface
  • In the Name of physical interface field, enter a name for the physical adapter that will use a virtual function (VF).

  • In the Number of virtual functions field, enter the number of VFs used for the physical interface.

  • If you are using SR-IOV with either a virtual machine (VNF) or a Data Plane Development Kit (DPDK) application in a pod (CNF): In the Number of virtual functions reserved for vfio field, enter the number of virtual functions reserved for use with VNFs and DPDK-based CNFs. VNF (VM) needs to use vfio even if it is not running a DPDK application. The number of VFs reserved for vfio cannot be more than the total number of VFs configured.

Note: If you use SR-IOV with a virtual machine or DPDK-based pod, the network name in the VM/pod manifest file should have -vfio appended to the network name (subnet).

  • Click Apply to set the SR-IOV interface configuration.
Figure: Set Number of Virtual Functions

Note: After you enable this SR-IOV interface configuration option, your site will reboot if it is an existing site, provisioning new VFs. For new sites, the provisioning process will occur during initial site deployment. It may take several minutes for an existing site to reboot after enabling this feature.
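
The -vfio naming convention described in the note above can be illustrated with a pod manifest sketch. This assumes a Multus-style network annotation, and the network (subnet) name sriov-net-1 is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-workload                          # hypothetical pod name
  annotations:
    # Append -vfio to the network (subnet) name for a VM or DPDK-based pod
    k8s.v1.cni.cncf.io/networks: sriov-net-1-vfio
spec:
  containers:
    - name: dpdk-app
      image: registry.example.com/dpdk-app:latest   # hypothetical image
```
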

Step 7: Complete creating the App Stack site.

Select Save and Exit to complete creating the App Stack site.

Note: You can also configure multiple interfaces for Virtual Machines (VM) or containers running in a K8s cluster within an App Stack Site. For instructions, see Create Workloads with Multiple Network Interfaces.


Perform Site Registration

After you create the App Stack site object in Console, the site appears in Console with a Waiting for Registration status. Install the nodes and ensure that the cluster name and host names for your nodes match the App Stack site name and node names you configured in the Basic Configuration section of the App Stack site.

Note: See Create VMware Site, Create KVM Site, and Create Baremetal Site for node installation instructions.

Perform registration per the following instructions:

  • Navigate to Manage > Site Management > Registrations.

  • Choose your site from the list of sites displayed under the Pending Registrations tab.

  • Select the Approve option (blue checkmark).

  • Ensure that the cluster name and hostname match those of the App Stack site.

  • Select Accept to complete the registration. The site then comes online.


Concepts