Distributed Cloud App Stack

What is App Stack?

F5® Distributed Cloud App Stack is a SaaS-based offering to deploy, secure, and operate a fleet of applications across distributed infrastructure in multi-cloud or edge environments. It scales to a large number of clusters and locations with centralized orchestration, observability, and operations, reducing the complexity of managing a fleet of distributed clusters.

Using a distributed control plane running in our global infrastructure, App Stack delivers a logically centralized cloud that can be managed using industry-standard Kubernetes APIs. This control plane removes the overhead of many individually-managed Kubernetes clusters and allows the customer to automate application deployment, scaling, security, and operations across the entire deployment as a “unified cloud”. A large and arbitrary number of managed clusters can be logically grouped into a virtual site with a single Kubernetes management interface.
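The virtual-site idea can be sketched with a short example. The structures and function names below are illustrative stand-ins, not the actual App Stack API:

```python
# Illustrative sketch (not the actual App Stack API): fanning out a single
# Kubernetes manifest to every cluster grouped under a "virtual site".
# Cluster names and the in-memory `state` dict are simplified stand-ins
# for per-cluster Kubernetes API servers.

def make_virtual_site(name, clusters):
    """Group a list of cluster names under one logical virtual site."""
    return {"name": name, "clusters": list(clusters)}

def apply_manifest(virtual_site, manifest, state):
    """Apply one manifest across all clusters in the virtual site.

    `state` maps cluster name -> {object name -> manifest}.
    """
    for cluster in virtual_site["clusters"]:
        state.setdefault(cluster, {})[manifest["metadata"]["name"]] = manifest
    return state

# Example: one Deployment rolled out to three edge clusters at once.
vsite = make_virtual_site("retail-edge", ["store-nyc", "store-sfo", "store-chi"])
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "pos-app"},
    "spec": {"replicas": 2},
}
state = apply_manifest(vsite, deployment, {})
```

In the real system, a controller in the distributed control plane performs this fan-out server-side, so clients interact with one Kubernetes management interface rather than one endpoint per cluster.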

Figure: High-level View of App Stack Deployment

Our SaaS-based service also reduces the complexity of managing and operating App Stack services deployed within a single cloud, across multiple cloud sites, or across edge sites, as customers don’t have to perform lifecycle management of clusters and their individual control and management planes. Since identity, access management, policy, and configuration are centralized, any change is reflected across the entire deployment. All logging and metrics are centrally available for observability, with API-based integrations from our centralized SaaS portal to external tools like Datadog or Splunk.

There are three modes of consuming App Stack services:

  1. Customer sites (cloud or edge) - F5 Distributed Cloud nodes can be deployed at the edge (on commodity hardware or our purpose-built hardware) or in any virtual machine in public or private cloud locations to run containerized or virtualized workloads. Multiple nodes will automatically cluster to scale out the delivery of compute, storage, network, and security services within a single site. Multiple sites become part of the “unified cloud”.

  2. F5 Distributed Cloud Global Infrastructure - the App Stack application runtime is available within our global network at every point of presence. In this case, our global infrastructure will be used to deploy the application workload and you can select all or a subset of our points of presence where this application workload needs to be deployed. In addition, by configuring network and security services accordingly, you can even expose the application across the global network but deploy the workload in a smaller number of sites and then scale the deployment as needed.

  3. Hybrid deployment (customer site and global infrastructure) - deploy the nodes within your cloud or edge site and they will automatically become part of a “unified cloud”. With a single click on the portal or an API call, any of our global points of presence can become part of this “unified cloud”, across which you can deploy, scale, and operate your application workloads.

The F5® Distributed Cloud Mesh functionality is integrated with App Stack to deliver all the connectivity and security services for workloads within the cluster and to connect these clusters using our global network backbone. App Stack is designed to make it extremely easy for anyone to deploy, scale, secure, and operate their application workloads in the cloud, network, or edge without worrying about scalability and operations of a modern and hybrid environment.

Why use App Stack?

There are four reasons why we believe that you should consider using App Stack for your next deployment in the cloud or at the edge:

  1. Fleet management to simplify operations - Managing multiple clusters across heterogeneous environments is a burden on IT and DevOps teams, as they have to deal with the complexity of resource management, service consistency, change management, and API integrations. They would prefer a more modern SaaS-based platform that centralizes orchestration, policy, security, and lifecycle management of application workloads and infrastructure across a distributed fleet of edge and cloud sites. Using centralized configuration management, you get:

    • Zero-touch deployment, automated clustering, upgrades, and patches for infrastructure nodes
    • Single source of truth for configuration, policy-based control, and lifecycle management
    • Simplified deployment, scaling, and rollback of workloads across a group of clusters using the virtual site concept
    • Unified policy and configuration model for applying changes across a group of clusters
    • Centralized and consolidated logging and monitoring across all the clusters with APIs to integrate with external tools
  2. Applications in the network or edge - As the internet evolves from primarily downstream consumption of content to upstream generation of data, driven by highly interactive applications and machine-to-machine traffic, there is a growing need to improve latency and performance, not only by using techniques like TLS termination but also by moving critical portions of application and API processing very close to the source of data. Our solution gives you the option to run applications directly at the edge location or closer to the edge in the network.

  3. Cluster scalability - Our purpose-built, distributed control plane gives us the capability to scale to a very large number of distributed sites with multiple application services and dynamic policies. This is significantly different from other solutions that simply build a management layer on top of the existing Kubernetes control plane. Our distributed control plane with fleet management and true multi-tenancy removes the need to deploy and operate many Kubernetes control planes while preserving the use of the Kubernetes API.

  4. Uniform Identity for Authentication, Authorization, and Secrets - Deployments in heterogeneous environments present the challenge of providing a uniform identity to workloads that can be used to authenticate and authorize access to resources (e.g., APIs, secrets, keys). App Stack provides a validated and secure way of giving a workload in a heterogeneous environment a PKI-based identity that can be used to decide which resources the application can access. These certificates are fully managed to ensure that you don’t have to worry about rotation, revocation, and so on.

In addition, because Mesh functionality is integrated with App Stack, you get a truly distributed, multi-cluster service mesh with all of its networking and security features.
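The “single source of truth” benefit of centralized configuration management can be illustrated with a small drift-detection sketch. The configuration schema here is invented for the example and is not App Stack’s actual model:

```python
# Sketch of "single source of truth" reconciliation: compare each cluster's
# observed configuration against one desired configuration and report drift.
# Keys like "image" and "replicas" are illustrative placeholders.

def find_drift(desired, observed_by_cluster):
    """Return {cluster: {key: (desired, observed)}} for mismatched keys."""
    drift = {}
    for cluster, observed in observed_by_cluster.items():
        diffs = {
            key: (value, observed.get(key))
            for key, value in desired.items()
            if observed.get(key) != value
        }
        if diffs:
            drift[cluster] = diffs
    return drift

desired = {"image": "app:v2", "replicas": 3}
observed = {
    "edge-1": {"image": "app:v2", "replicas": 3},  # in sync
    "edge-2": {"image": "app:v1", "replicas": 3},  # stale image
}
drift = find_drift(desired, observed)
```

A centralized control plane runs this kind of comparison continuously and pushes corrections, which is what keeps a change made once in the portal consistent across the whole fleet.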

Key App Stack Services

App Stack delivers a complete range of services to automate infrastructure and application deployment, scaling, security, and lifecycle management across a large number of distributed sites. A combination of services can be centrally deployed and operated using the Console and be seamlessly enabled across your cloud or edge site using Distributed Cloud Nodes or within our global infrastructure.

Figure: App Stack Services

The capabilities of App Stack are grouped into two categories - Infrastructure Services and Application Services. The goal of infrastructure services is to create a homogeneous, abstracted layer across different types of infrastructure so that application services are not exposed to the variances of the underlying infrastructure. Since we have already covered Mesh capabilities, please refer to key Mesh services for details on those services.

  1. Optimized Operating System - A consistent and efficient operating system is required to securely run the system microservices as well as customer workloads. This OS can be deployed in the cloud or at the edge and, as a result, needs to support devices with a low memory footprint. This capability is the underpinning of the node that may be deployed as a VM in the cloud or on physical hardware at the edge.

  2. Clustering - The ability to scale compute and storage resources by seamlessly clustering multiple nodes within a single site lets you request resources on demand. Application workloads or cloud services (e.g., Mesh and App Stack) can be easily auto-scaled as soon as a new node is added to the cluster. Using the underlying auto-scaling capabilities provided by the cloud provider, we can scale the number of nodes within a site depending on demand and configuration constraints.

  3. Managed Kubernetes - All our infrastructure services are built using Kubernetes with enhancements for multi-tenancy, security, and the ability to run virtual machines alongside Docker containers. Our changes also allow us to mix critical and best-effort workloads on this platform, with the ability to progressively roll out changes and upgrade infrastructure services with minimal disruption to customer workloads. Any site consists of one or more nodes that always run three services at a very minimum - the optimized operating system, clustering, and managed Kubernetes.

  4. Distributed Storage - Container-native software-defined storage, or the capability to attach cloud provider storage solutions like EBS using Kubernetes PVCs, gives the ability to scale storage across the nodes within the cluster. This lets you run stateful applications without worrying about managing distributed storage services. Additional capabilities on the roadmap, such as snapshots, scheduled backups to a cloud-based object store, and storage encryption, provide services that are typically required for enterprise production deployments and secure distributed data.

  5. Distributed Infrastructure Management - This gives you the capability to manage infrastructure services deployed across many locations as a fleet. You can group all or a subset of individual locations as a fleet and then perform operations on this fleet object - zero-touch deployment, upgrading and patching the operating system or infrastructure software, applying configuration changes, and deploying new services across the entire fleet or a subset of it. This significantly simplifies policy and configuration management for a large fleet of infrastructure components.

  6. Continuous Deployment and Verification - Deployment of applications in a cluster is typically done using continuous deployment tools like Spinnaker, Harness, etc. Using the capability to logically group distributed sites behind a single Kubernetes interface, the same continuous deployment tools can continue to be used while reaping the benefits of rolling out upgrades and changes to many locations. In addition, the system collects logs and metrics from all the clusters, on which we continuously perform anomaly detection. This information can be used as input to continuous deployment systems for rollbacks, creating alerts, or integrating with external continuous verification tools, as they need a rich data source to generate meaningful insights.

  7. Identity and Secrets Management - Uniform identity is essential to authentication and authorization in a distributed system. This becomes challenging when different systems are used to create, assign, and manage identity across different providers. Without an identity that is accepted across different systems, authorization and policy controls cannot be implemented reliably. As a result, App Stack gives every app instance its own PKI identity that is issued and maintained through the entire lifecycle of the application. This identity is used not only for RBAC and network policies but also for granting access to secrets and keys. In addition, our novel and cryptographically secure double-blinding system stores customer secrets so that they are never held in the clear, removing the concern of losing valuable information in the event of a breach. It is also possible for the customer to integrate this solution with their existing enterprise products like HashiCorp Vault or CyberArk.

  8. Container Security - Isolation and protection of services against malicious and/or erroneous conditions needs to be handled by any application management system. We allow customers to maintain their own registries that periodically perform vulnerability scans to ensure that application software is compliant with their requirements. In addition, the shared host needs to be protected from container vulnerabilities by using a VM-like isolation boundary. We are working on providing this capability as part of our roadmap.

  9. Distributed Application Management - This gives the capability to manage application services deployed across many locations as a logical group. You can group all or a subset of locations as a virtual-site and then perform operations and policy changes on these virtual-sites. This includes operations like workload deployment, scaling changes, application upgrades, and rollbacks, as well as connectivity and security policy changes, across all the locations within the virtual-site. This significantly simplifies configuration and policy management for large-scale application deployment and operations.

  10. Observability - Detailed metrics, logs, requests, and notifications are centrally collected from every site to provide rich observability across application, infrastructure, network, and security services for the entire system. These metrics provide a holistic view of application health, service connectivity, API requests, and infrastructure resource consumption. This makes it easy to debug and trace issues across the system, while the centralized SaaS-based service can be used to integrate logs and metrics with external performance management systems like Datadog, Splunk, etc.

  11. Multi-tenancy - The entire system was built from the ground up for multi-tenancy with complete isolation across tenants. Every service (compute, storage, and network) is isolated across tenants, with IAM and RBAC rules to control access to resources. Within a tenant, there are multiple namespaces that can be individually assigned to different teams, groups, or even developers, and user access management and RBAC controls determine how they access a namespace, an API, or a service.
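As a concrete illustration of Distributed Storage (item 4), stateful workloads request capacity through a standard Kubernetes PersistentVolumeClaim. The storage class name below is a placeholder, not a documented App Stack class:

```python
# Sketch of requesting distributed storage via a standard Kubernetes
# PersistentVolumeClaim, built here as a plain dict. "distributed-storage"
# is an assumed placeholder class name; a cloud-backed volume (e.g. EBS)
# would use its provider's storage class instead.

def make_pvc(name, size_gi, storage_class="distributed-storage"):
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("db-data", 20)
```

Because the claim is ordinary Kubernetes, the same manifest works whether the volume lands on container-native storage at an edge site or on a cloud provider's block storage.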
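The anomaly-detection feedback loop in Continuous Deployment and Verification (item 6) can be reduced to a toy check: flag a rollback when the post-deploy error rate deviates too far from the historical baseline. The threshold and data below are illustrative, not production values:

```python
import statistics

# Toy rollback signal: compare the current error rate against the
# historical baseline and flag a rollback when it deviates by more than
# `threshold` standard deviations. Real anomaly detection on logs and
# metrics is far richer; this only shows the shape of the decision.

def should_roll_back(baseline_error_rates, current_error_rate, threshold=3.0):
    mean = statistics.mean(baseline_error_rates)
    stdev = statistics.stdev(baseline_error_rates)
    if stdev == 0:
        return current_error_rate > mean
    return (current_error_rate - mean) / stdev > threshold

# Error rates observed over the last few healthy intervals.
baseline = [0.010, 0.012, 0.011, 0.009, 0.010]
```

A continuous deployment tool would consume a signal like this to trigger a rollback or raise an alert automatically.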
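The double-blinding principle in Identity and Secrets Management (item 7) - never storing a secret in the clear in any single place - can be illustrated with a simple two-share XOR split. App Stack's actual scheme is more sophisticated; this is only a sketch of the principle:

```python
import os

# Two-party XOR split: neither share alone reveals anything about the
# secret, so compromising one store yields nothing. Both shares are
# needed to reconstruct the original bytes.

def split_secret(secret: bytes):
    share_a = os.urandom(len(secret))  # uniformly random mask
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))
    return share_a, share_b

def recover_secret(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

a, b = split_secret(b"db-password")
assert recover_secret(a, b) == b"db-password"
```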
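The namespace-scoped access control described under Multi-tenancy (item 11) boils down to checks like the following; the binding structure and names are invented for illustration:

```python
# Minimal sketch of namespace-scoped RBAC: each binding grants a user a
# set of verbs within one tenant namespace, and access is denied unless
# some binding matches. Not the platform's actual policy model.

ROLE_BINDINGS = [
    {"user": "alice", "namespace": "team-payments", "verbs": {"get", "list", "create"}},
    {"user": "bob", "namespace": "team-payments", "verbs": {"get", "list"}},
]

def is_allowed(user, namespace, verb, bindings=ROLE_BINDINGS):
    """Deny by default; allow only if a binding grants this exact verb."""
    return any(
        b["user"] == user and b["namespace"] == namespace and verb in b["verbs"]
        for b in bindings
    )
```

Scoping every rule to a namespace is what lets different teams share one tenant without being able to touch each other's workloads.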

F5 Distributed Cloud Node

F5 Distributed Cloud services in private, public, or edge cloud sites are consumed by deploying one or more “Nodes”. Depending on the service configured (e.g., Mesh or App Stack), the appropriate software capabilities are enabled on the Node. There is no need for the customer to deploy Nodes within our global infrastructure, as all the Mesh and App Stack services are already available in our multi-tenant infrastructure.

These Nodes are software appliances that can be deployed on a VM or bare metal in the cloud or at the edge, or come pre-integrated in our F5 Distributed Cloud hardware for the edge. These Nodes are always under the management of our SaaS service and can be deployed by downloading them from the cloud marketplace, downloading the software image directly from our portal, or using our portal to automatically deploy them in the cloud.

Multiple Nodes can cluster (within a site/location) to provide additional processing capability, and zero-touch provisioning securely onboards each Node. At a minimum, a Node comes bundled with the following features from App Stack infrastructure services - Optimized Operating System, Clustering, and Distributed Infrastructure Management.

Example Use Cases for App Stack

App Stack has been built in such a way that it can be deployed in many different ways to solve different use-cases:

  1. Edge Application Management - for distributed deployment across multiple edge sites. Use the SaaS (or F5 Distributed Cloud hardware) to deploy nodes within each site, and they will automatically and redundantly connect to the global backbone to create a “logical cloud”. Enable App Stack features for a fully functional and distributed cloud that can be managed using our distributed application management service, which provides Kubernetes APIs with additional capabilities like enterprise-grade security, centralized observability, uniform identity, distributed secrets and key management, and a globally distributed service mesh across these sites and the back-end running in public or private cloud. As you transition from development to test and production, different teams can easily access the Console to add security controls and networking policies to ensure compliance without affecting developer and DevOps workflows.

  2. Multi-Cloud Application Management - for deployment of application clusters across one or more cloud regions and cloud providers. Use the SaaS to deploy one or more nodes (cluster) within each location and enable App Stack features on each of these clusters for a fully functional and distributed cloud. These clusters can be managed using our distributed application management service that provides Kubernetes APIs with additional capabilities like enterprise-grade security, centralized observability, uniform identity across cloud providers, unified secrets + key management, and rich networking services. In addition, Mesh will provide a globally distributed service mesh across these clusters for cross-cluster routing, VPNs, service discovery, health checks, API routing, application security, unified policy, and observability.

  3. Network Edge Applications to deliver immersive experiences - utilizing our global infrastructure, you can deploy applications closer to the users and/or data-generating machines in order to reduce network costs, improve application experience, de-duplicate data, etc., before sending data to the cloud. Depending on your needs, there are two capabilities offered from the network cloud - Kubernetes APIs for containerized or virtual machine workloads, and Java v8 functions to process REST or gRPC APIs. In addition, by configuring network and security services appropriately, you can expose the application across the global network but deploy the workload in a smaller number of sites, and then scale the deployment as your needs evolve over time.