Create App Stack Site

Objective

This document provides instructions on how to install single-node or multi-node F5® Distributed Cloud App Stack sites on clusters of private Data Center (DC) devices. An App Stack site is a Distributed Cloud Customer Edge (CE) site engineered specifically for managing DC clusters. To learn more about Distributed Cloud sites, see Site. The F5 Distributed Cloud Platform also supports creating physical K8s (also called managed K8s) clusters for managing your applications on the created App Stack sites. See Create and Deploy Managed K8s for instructions on creating managed K8s clusters.

Clusters of private DCs that are set up in geographically separated locations require an established direct communication link between the members of the cluster. An App Stack site helps achieve this while ensuring safety and reliability. You can define a DC cluster group in an App Stack site to which member nodes can be added dynamically. Added members become part of the cluster group and automatically get full mesh connectivity with all other members of the group.

Using the instructions provided in this document, you can deploy a single-node or multi-node App Stack site on your private DC, define a cluster group, and add members to the cluster group. You can also enable local API access for the managed K8s running on the sites so that it can be used like regular K8s.


App Stack Site vs Other CE Sites

An App Stack site differs from a regular Distributed Cloud CE site that is deployed with App Stack functionality. An App Stack site simplifies the task of managing distributed apps across DC clusters while offering the option to establish full mesh connectivity among the sites. With regular sites that have App Stack functionality, you need to explicitly create and manage site mesh groups for that purpose. With an App Stack site, you just add the site to the DC cluster group and connectivity is established automatically.

Also, an App Stack site provides support for managing local K8s while controlling communication between services of different namespaces. While regular Distributed Cloud sites provide virtual K8s (vK8s), vK8s is per namespace per site, whereas managed local K8s is deployed across all namespaces of an App Stack site with a single kubeconfig. Therefore, if you need to manage distributed apps across your private DC clusters with full mesh connectivity, an App Stack site with managed K8s is useful. For other requirements, you can use regular CE sites.
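
As a minimal illustration of the single-kubeconfig model, the following standard K8s Deployment manifest could be applied to any namespace of the managed K8s on an App Stack site; the app name and namespace below are hypothetical:

    # Standard K8s Deployment; name and namespace are hypothetical examples.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
      namespace: team-a        # any namespace of the App Stack site
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
            - name: demo-app
              image: nginx:1.25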


Reference Architecture

This reference architecture shows the recommended practice for setting up geographically distributed DC clusters using App Stack sites, privately connecting them using a DC cluster group, and enabling managed K8s for deploying apps in those clusters.

The following image shows a sample reference architecture of the App Stack DC cluster site.

Figure: App Stack DC Cluster Site Reference Architecture

The DC clusters represent App Stack sites. These consist of interfaces toward the storage network, interfaces in LACP toward the DC switches, and dedicated or non-dedicated management interfaces. An App Stack site supports storage classes with dynamic PVCs. Interface bonding is also supported: with a dedicated management interface, bond interfaces are used for regular traffic; with a non-dedicated management interface, the fallback interface for internet connectivity is a bond interface that must be assigned a static IP address. The example image also shows TOR switches that can be connected with a single bond interface using MC-LAG or with two bond interfaces at the L3 level. An App Stack site also supports establishing BGP peering with the TORs.

Bonding of interfaces is supported in LACP active/active or active/backup modes.

The following image shows a sample reference deployment of App Stack DC cluster sites deployed at different geographic locations.

Figure: App Stack DC Cluster Site Reference Deployment

The apps deployed in the managed K8s clusters in these sites can communicate with each other directly in CI/CD use cases, or users can access the managed K8s API using the kubeconfig.

The following image shows service interaction for services deployed in different DC cluster sites:

Figure: Service Communication Between Clusters of Different DCs

The following image shows service interaction for services deployed in the same DC cluster site:

Figure: Service Communication Between Clusters of Same DC

The following image shows service deployment in active/active and active/standby modes for remote clients:

Figure: Service Deployment in Active-Active and Active-Standby Modes

Note: For instructions on accessing managed K8s clusters and deploying apps, see the Create and Deploy Managed K8s guide.


Prerequisites

Note: If you do not have an account, see Create an Account.

  • One or more physical DC devices with interfaces that have internet reachability.

Note: An App Stack site is supported only for the F5 IGW, ISV, and Dell Edger 640 Series devices.

  • Resources required per node: Minimum 4 vCPUs and 14 GB RAM.

Deploy Site

Perform the steps in the following sections to deploy an App Stack site.

Create App Stack Site Object

Log into F5 Distributed Cloud Console and perform the following steps:

Step 1: Start creating an App Stack site object.
  • In the Cloud and Edge Sites service, navigate to Manage > Site Management and select App Stack Sites.

  • Click Add App Stack Site to open the App Stack site configuration form.

Figure: Navigate to App Stack Site Configuration

  • Enter a name in the metadata section for your App Stack site object.
  • Optionally select labels.

Note: Adding a fleet label to an App Stack site is not supported.

  • Optionally enter a description.
Step 2: Set the fields for the basic configuration section.
  • Select an option for the Generic Server Certified Hardware field. The F5 Volterra ISV 8000 is selected by default.
  • Enter the names of master nodes in the Master Nodes field. Click Add item to add more than one entry.

Note: Either a single node or three master nodes are supported.

  • Optionally, enter the names of worker nodes in the Worker Nodes field. Click Add item to add more than one entry.

  • Optionally, enter the following fields:

    • Geographical Address - Geographical coordinates are derived from this address
    • Coordinates - Latitude and longitude

Figure: App Stack Site Basic Configuration Section

Step 3: Configure bond interfaces.

Go to Bond Configuration section and do the following:

  • Select Configure Bond Interfaces for the Select Bond Configuration field.

Figure: Bond Configuration Section

  • Click Configure under the Configure Bond Interfaces option to open the bond interface configuration page.

Figure: Bond Devices Section

  • Click Add Item under the Bond Devices List field in the Bond Devices section.

  • Click on the Bond Device Name field on the Bond Devices List page and select a name from the list of options. You can also type a custom name and click Add item to set the device name while also adding it to the existing options.

  • Click on the Member Ethernet Devices field and select the ethernet devices that are part of this bond. Use Add item option to add more devices.

  • Click on the Select Bond Mode field to update the bonding mode. LACP is selected by default, with a default LACP packet interval of 30 seconds. You can set the bond mode to Active/Backup to have the bond members function in an active/backup combination (see the sketch after this list).

Figure: Bond Devices List

  • Click Add Item.

Note: Use the Add item option in the Bond Devices List to add more than one bond device.

  • Click Apply on the Bond Devices page to apply the bond configuration to the App Stack site configuration.
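
For orientation, the bond settings collected above might appear in the underlying site object roughly as follows. This is a hypothetical sketch; the field names are illustrative, not the exact API schema:

    # Hypothetical sketch of the bond section; field names are illustrative.
    bond_device_list:
      bond_devices:
        - name: bond0                 # Bond Device Name
          devices:                    # Member Ethernet Devices
            - eth0
            - eth1
          lacp:
            rate: 30                  # LACP packet interval in seconds
        # For active/backup mode, an entry would carry an
        # active_backup-style setting instead of the lacp block.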
Step 4: Perform network configuration.

Go to the Network Configuration section. Select Custom Network Configuration for the Select to Configure Networking field. Click Configure under the Custom Network Configuration option to open the network configuration page.

Perform the following:

Step 4.1: Perform site local network configuration.

The site local network is applied with the default configuration. Perform the following steps to apply a custom configuration:

  • Select Configure Site Local Network for the Select Configuration For Site Local Network field. Click Configure under the Configure Site Local Network field.

  • Optionally, set labels for the Network Labels field in the network metadata section.

  • Optionally, click Show Advanced Fields in the Static Routes section and select Manage Static Routes for the Manage Static Routes field. Click Add Item under the Manage Static Routes field and do the following:

    • Enter IP prefixes for the IP Prefixes field. All of these prefixes are mapped to the same next hop and attributes (a sketch of a static route entry follows this step).
    • Select IP Address, Interface, or Default Gateway for the Select Type of Next Hop field and specify the IP address or interface accordingly. In the case of an interface, you can select an existing interface or create a new one using the options for the interface field.
    • Optionally, select one or more options for the Attributes field to set attributes for the static route.
    • Click Add Item.
  • Optionally, enable the Show Advanced Fields option in the DC Cluster Group section and do the following:

    • Select Member of DC Cluster Group for the Select DC Cluster Group option.

    • Click on the Member of DC Cluster Group field and select a DC cluster group. You can also select Create new dc cluster group to create a new cluster group. This adds the site to a DC cluster group, enabling full mesh connectivity between the members of the group.

      Note: The Not a Member option is the default.

Figure: Site Local Network Configuration

  • Click Apply.
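
A static route entry from this step might be represented roughly as follows. This is a hypothetical sketch; field names and values are illustrative:

    # Hypothetical static route entry; field names and values are illustrative.
    static_routes:
      - ip_prefixes:
          - 10.10.0.0/16          # all listed prefixes share the next hop below
        ip_address: 192.168.1.1   # Select Type of Next Hop: IP Address
        attrs:
          - ROUTE_ATTR_INSTALL_FORWARDING   # example attribute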
Step 4.2: Perform interface configuration.

The bootstrap interface configuration is applied by default and is based on the certified hardware. Perform the following steps to apply a custom interface configuration:

  • Select List of Interface for the Select Interface Configuration field. Click Configure under the List of Interface field. This opens the interface list configuration page.
  • Click Add Item in the List of Interface table.
  • Optionally, enter an interface description.
  • Optionally, select labels.
  • Select an option for the Interface Config Type field and set one of the interface types using the following instructions:
Ethernet Interface:
  • Select Ethernet Interface and click Configure under the Ethernet Interface field. This opens the ethernet interface configuration page.
  • Select an option from the list of options for the Ethernet Device field. You can also type a custom name to set the device name while also adding it to the existing options.
  • Select Cluster, All Nodes of the Site, or Specific Node for the Select Configuration for Cluster or Specific Node field. In the case of a specific node, select the node from the displayed options of the Specific Node field. You can also type a custom name to set the node name while also adding it to the existing options.
  • Select Untagged or VLAN Id for the Select Untagged or VLAN tagged field. In case of VLAN Id, enter the VLAN Id in the VLAN Id field.
  • Select an option for the Select Interface Address Method field in the IP Configuration section. The DHCP client is selected by default. If you select DHCP server, click Configure under the DHCP Server option, set the DHCP server configuration as per the options displayed on the DHCP server configuration page, and click Apply. This example shows the interface as a DHCP client for brevity (see the sketch after this step).
  • Select site local outside or site local inside network for the Select Virtual Network field of the Virtual Network section. Site local outside network is selected by default.
  • Select if the interface is primary in the Select Primary Interface field. Default is not a primary interface. Ensure that you set only one interface as primary.
  • Click Apply.
Dedicated Interface:
  • Select Dedicated Interface for the Interface Config Type field.
  • Select a device name from the displayed list for the Interface Device field. You can also type a custom name to set the device name while also adding it to the existing options.
  • Select Cluster, All Nodes of the Site, or Specific Node for the Select Configuration for Cluster or Specific Node field. In the case of a specific node, select the node from the displayed options of the Specific Node field. You can also type a custom name to set the node name while also adding it to the existing options.
  • Select if the interface is primary in the Select Primary Interface field. Default is not a primary interface. Ensure that you set only one interface as primary.
  • Click Add Item.

Note: You can add more than one interface using the Add item option in the List of Interface form.
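
An ethernet interface entry configured as above might be represented roughly as follows. This is a hypothetical sketch; field names are illustrative, not the exact API schema:

    # Hypothetical interface entry; field names are illustrative.
    interface_list:
      - description: uplink-to-tor
        ethernet_interface:
          device: eth2              # Ethernet Device
          cluster: true             # apply to all nodes of the site
          vlan_id: 100              # or untagged
          dhcp_client: {}           # default address method
          site_local_network: {}    # Virtual Network: site local outside
          not_primary: true         # Select Primary Interface: not primary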

Step 4.3: Perform security configuration.

By default, firewall policies and forward proxy policies are disabled. Go to the Security Configuration section and do the following to apply firewall and forward proxy policies:

  • Select Active Firewall Policies for the Manage Firewall Policy field. Click on the List of Firewall Policy field and select a firewall policy from the displayed list of options. You can also create and apply a new firewall policy using the Create a new firewall Policy option. You can apply more than one firewall policy using the Add item option.

  • Select one of the following options for the Manage Forward Proxy field:

    • Select Enable Forward Proxy With Allow All Policy to allow all requests.
    • Select Enable Forward Proxy and Manage Policies to apply specific forward proxy policies. Select a forward proxy policy from the displayed list of options for the Forward Proxy Policies field. You can also create and apply a new forward proxy policy using the Create a new Forward Proxy Policy option. You can apply more than one forward proxy policy using the Add item option.

Optionally, you can configure global networks in the Global Connections section and BGP settings in the Advanced Configuration section. For brevity, this example does not include the configuration for these two sections.

Step 5: Perform storage configuration.

Optionally, specify storage configuration for your site.

Note: Storage configuration in an App Stack site is similar to that of a fleet.

  • Select Custom Storage Configuration for the Select to Configure Storage field in the storage configuration section.

  • Click Configure under the Custom Storage Configuration option.

    Figure: Interfaces for Storage Devices

  • Select List of Storage Interface for the Select Storage Interface Configuration field and then click Configure to configure the interface.

Note: See Interfaces for instructions on creating network interfaces. See Multi Node Site Network Setup Using Fleet for instructions on how to configure networking using fleet for multi node sites.

  • Select List of Storage Devices for the Select Storage Device Configuration field.

Figure: Storage Devices Option

  • Click Add Item under the List of Storage Devices field. This opens storage devices configuration.

Figure: Storage Devices Parameters

  • Enter a name for the Storage Device field. Ensure that this name corresponds to the class in which the storage device falls. The classes are used by vK8s for storage-related actions.
  • Select an option for the Select Storage Device to Configure field and perform one of the following based on the option you chose:
NetApp Trident

Figure: NetApp Device Backend LIFs

  • Select an option for the Select NetApp Trident Backend field. ONTAP NAS is selected by default.

  • Select an option for the Backend Management LIF field. The Backend Management LIF IP Address is selected by default. Enter an IP address for the backend management logical interface in the Backend Management LIF IP Address field. In case you select the name option, enter the backend management interface name.

  • Select an option for the Backend Data LIF field. The Backend Data LIF IP Address is selected by default. Enter an IP address for the backend data interface in the Backend Data LIF IP Address field. In case you select the name option, enter the backend data interface name.

Figure: NetApp Device Password and Certificates

  • Enter a username in the Username field. Click Configure for the Password field. Enter your password in the Secret page and click Blindfold. Wait for Blindfold to finish encrypting your password, and click Apply.

  • Click Configure for the Client Certificate field. Enter the text of your secret in the Secret page and click Blindfold. Wait for Blindfold to finish encrypting your secret, and click Apply.

  • Click Configure for the Trusted CA Certificate field. Enter the text of your secret in the Secret page and click Blindfold. Wait for Blindfold to finish encrypting your secret, and click Apply.

Figure: NetApp Device AutoExport CIDRs

  • Enter CIDRs for your K8s nodes in the Auto Export CIDRs field if the auto export policy is enabled for your storage device (an illustrative standalone Trident backend definition follows this list).

  • If you are configuring virtual storage, then in the Virtual Storage Pools section, enter a label and region for the storage, and click Add Item one or more times to add pool labels and pool zones.

  • Click Apply.
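
For comparison, the same backend details are conventionally expressed in a standalone NetApp Trident backend definition (ontap-nas driver) roughly as follows. This is an illustrative sketch with example values, not the Distributed Cloud object format:

    # Illustrative NetApp Trident backend (ontap-nas driver); values are examples.
    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig
    metadata:
      name: ontap-nas-backend
    spec:
      version: 1
      storageDriverName: ontap-nas
      managementLIF: 10.0.0.10          # Backend Management LIF IP Address
      dataLIF: 10.0.0.11                # Backend Data LIF IP Address
      autoExportPolicy: true
      autoExportCIDRs:
        - 10.0.0.0/24                   # Auto Export CIDRs for K8s nodes
      credentials:
        name: ontap-credentials         # secret holding username/password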

Pure Storage Service Orchestrator

Figure: Pure Storage Orchestrator Device

  • Enter a cluster identifier in the Cluster ID field. This is used to identify the volumes used by the datastore. Alphanumeric characters and underscores are allowed.

    Note: A unique cluster ID is required when multiple K8s clusters use the same storage device.

  • Click Configure under the Flash Arrays field.

Figure: Pure Storage Flash Arrays

  • Click Add Item to add a flash array endpoint.

    Figure: Pure Storage Flash Array endpoint

    • Enter an IP address in the Management Endpoint IP Address field.
    • Click Configure under the API Token field. Enter the token in the secret field and click Blindfold. Click Apply after the Blindfold encryption is completed.
    • Optionally select labels for this endpoint.
    • Click Add Item.
  • Click Configure under the Flash Blade field.

Figure: Pure Storage Flash Blade

  • Click Add Item to add a flash blade endpoint.

Figure: Pure Storage Flash Blade endpoint

  • Enter the IP address in the Management Endpoint IP Address field.
  • Click Configure under the API Token field. Enter the token in the secret field and click Blindfold. Click Apply after the Blindfold encryption is completed.
  • Enter the IP address in the NFS IP Address field.
  • Optionally add labels for this endpoint.
  • Click Add Item.

Note: You can change the management or NFS endpoints to specify a management endpoint name or an NFS DNS name instead of IP addresses (an illustrative standalone configuration follows this list).

  • Click Apply.
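
For comparison, the same endpoints are conventionally expressed in a standalone Pure Service Orchestrator values file roughly as follows. This is an illustrative sketch, and all values are examples:

    # Illustrative Pure Service Orchestrator values; all values are examples.
    clusterID: dc-cluster-1               # Cluster ID
    arrays:
      FlashArrays:
        - MgmtEndPoint: "192.168.10.20"   # Management Endpoint IP Address
          APIToken: "<api-token>"         # Blindfold-encrypted in Console
      FlashBlades:
        - MgmtEndPoint: "192.168.10.30"
          APIToken: "<api-token>"
          NFSEndPoint: "192.168.10.31"    # NFS IP Address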
Custom Storage

The custom storage classes option is used for storage devices or external storage deployed outside of F5 Distributed Cloud Services. For instance, this option allows you to configure custom storage classes for AWS, GCP, and so on.

Figure: Custom storage device

  • Select Custom Storage for the Select Storage Device to Configure field.
  • Click Add Item.

Note: You can add multiple devices using the Add item option.

  • Select Add Custom Storage Class for the Select Configuration for Storage Classes field, and then click Add Item.

Note: You can use the default storage classes supported in K8s, or you can customize the classes. If you are using default classes, ensure that the storage device names correspond to the K8s classes.

  • Enter a name for this storage class in the Storage Class Name field.
  • Check the Default Storage Class checkbox if this will be the default storage class.
  • Choose an option in the Select Storage Class Configuration drop-down menu.
NetApp Trident

Figure: NetApp Class

  • Click Add Item.
Pure Storage Service Orchestrator

Figure: Pure Storage Orchestrator Class

  • Select an option for the Backend field. The block option is selected by default.
  • Optionally enter IOPS and bandwidth limits in their respective fields.
  • Click Add Item.
Custom Storage Class

Figure: Storage class parameters

  • Select Add Custom Storage Class for the Select Configuration for Storage Classes field.

  • Click Add Item under the List of Storage Classes field. This opens the Storage Class Parameters page.

  • Enter a name for the Storage Class Name field. This name will appear in K8s.

  • Enter a name in the Storage Device field. This is the storage device that will be used by this class, as entered in the Storage Device field earlier in this step.

  • Optionally enter a storage class description.

  • Optionally check the Default Storage Class box to make this storage class the default for the K8s cluster.

  • Select Custom Storage for the Select Storage Class Configuration field.

  • Enter the storage class YAML. It must include the following configuration (a complete illustrative example follows this list):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    ...
    
  • Enter a Reclaim Policy.

  • Optionally check the Allow Volume Expansion box.

  • Optionally enter generic/advanced parameters.

  • Click Add Item.

  • Click Add Item to save the storage class parameters.
  • Click Apply to save the custom storage configuration.
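
A complete custom StorageClass, as it could be entered in the YAML field above, might look like the following. The manifest structure is standard K8s; the name, provisioner, and parameters are illustrative assumptions:

    # Illustrative StorageClass; name, provisioner, and parameters are examples.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: custom-storage            # matches the Storage Class Name field
    provisioner: ebs.csi.aws.com      # example external provisioner
    reclaimPolicy: Delete             # Reclaim Policy
    allowVolumeExpansion: true        # Allow Volume Expansion
    parameters:
      type: gp3                       # example generic/advanced parameter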
Step 6: Perform advanced configuration.
  • Go to the Advanced Configuration section and enable the Show Advanced Fields option.

Figure: Advanced Configuration

  • Optionally, select GPU Enabled for the Enable/Disable GPU field. This enables GPU capability for the site hardware.
  • Optionally, configure managed K8s for your site as per the following guidelines:
    • Select Enable Site Local K8s API access for the Site Local K8s API access field.
    • Click on the Enable Site Local K8s API access field and select a K8s cluster object from the list of options. You can also select the Create new k8s cluster option to create and apply the K8s cluster object.
  • Optionally, enable logs streaming and either select a log receiver or create a new log receiver.
  • Optionally select a USB device policy. You can deny all USB devices, allow all USB devices, or allow specific USB devices.
  • Optionally specify a Distributed Cloud software version. The default is the Latest SW Version.
  • Optionally specify an operating system version. The default is the Latest OS Version.

Note: The advanced configuration also includes the managed K8s configuration. This is an important step if you want to enable managed K8s access; it is possible only at the time of creating the App Stack site and cannot be enabled later by updating the App Stack site.

Step 7: Complete creating the App Stack site.

Click Save and Exit to complete creating the App Stack site.


Perform Site Registration

After you create the App Stack site object, it appears in Console with a Waiting for Registration status. Install the nodes and ensure that the cluster name and hostnames for your nodes match the App Stack site name and node names from the Basic Configuration section of the App Stack site you configured. Perform registration as per the following instructions:

  • Navigate to Manage > Site Management > Registrations in the system namespace.
  • Choose your site from the list of sites displayed under the Pending Registrations tab. Click the approve icon to start the approval.
  • Ensure that the cluster name and hostname match those of the App Stack site.
  • Click Accept to complete the registration. The site then comes online.
