App Delivery Network (ADN)

Objective

This guide provides instructions on how to deploy and secure network edge applications using F5® Distributed Cloud Console and F5 Distributed Cloud Mesh.

The following image shows the steps to deploy network edge applications:

Figure: Steps to Deploy and Secure Network Edge Applications

The following image shows the topology of the example used in this document:

Figure: Network Edge Applications Sample Topology

Using the instructions provided in this guide, you can deploy your web application in Distributed Cloud virtual K8s (vK8s) clusters, deliver the application using a load balancer, advertise the application services on the Distributed Cloud global network (exposing them to the internet), protect the application using Distributed Cloud security features, and monitor it using Console monitoring features.

The example shown in this guide deploys a microservices application called hipster-shop across the Distributed Cloud global network using Distributed Cloud vK8s. The application consists of the following services:

  • frontend
  • cartservice
  • productcatalogservice
  • currencyservice
  • paymentservice
  • shippingservice
  • emailservice
  • checkoutservice
  • recommendationservice
  • adservice
  • cache

Prerequisites


Configuration

The use case provided in this guide deploys the web application in vK8s clusters across all of the Distributed Cloud Regional Edge (RE) sites. It then exposes the application to the Distributed Cloud global network using a Distributed Cloud load balancer and secures it using Distributed Cloud security features. The following actions outline the activities in deploying the web app and securely exposing it to the internet.

  1. A Distributed Cloud vK8s cluster is created. Using its kubeconfig and a K8s manifest, the web application is deployed to the vK8s clusters in all RE sites.

  2. The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created with the required origin pool objects such as endpoints, health checks, and clusters. The appropriate route and advertise policy are enabled to expose the service to the internet. Also, the subdomain is delegated to Distributed Cloud Services to manage the DNS and certificates.

  3. A WAF configuration is applied to secure the externally available load balancer VIPs.

  4. The deployed services of the K8s application are monitored using the observability features such as load balancer and service mesh monitoring.

Step 1: Deploy K8s App


Perform the following steps to deploy the web application in vK8s clusters:

Step 1.1: Log into the Console and create a namespace.

This example creates a sample namespace called tutorial-hipster.

  • Select the Administration service.
  • Select Personal Management > My namespaces.
Figure: Navigate to Manage Namespaces
  • Click Add namespace and enter a name for your namespace. Click Save changes to complete creating the namespace.
Figure: Create Namespace
Step 1.2: Create vK8s cluster and download its kubeconfig.
  • Change to the Distributed Apps service.

  • Click the application namespace option on the namespace selector, and then select the namespace created in the previous step from the drop-down list to change to that namespace.

  • Select Applications in the configuration menu and Virtual K8s in the options pane.

    Figure: Navigate to vK8s Creation
  • Click Add Virtual K8s and enter a name for your vK8s cluster.

Figure: Select Virtual Site for vK8s
  • Click the Select vsite ref button and select ves-io-all-res. Click the Select Vsite Ref button at the bottom to apply the virtual site to the vK8s configuration.

  • Click Save and Exit to start creating the vK8s clusters in all RE sites.

  • Once the vK8s object is created, click ... > Kubeconfig for it and enter an expiration date to download its kubeconfig file.
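
With the kubeconfig downloaded, you can optionally confirm connectivity to the vK8s API server before deploying anything. The check below assumes the file was saved as ves_adn_adn.yaml, the name used in the rest of this guide:

          # Print the client and server versions; a server version in the output
          # confirms the kubeconfig can reach the vK8s cluster
          kubectl --kubeconfig ves_adn_adn.yaml version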

Step 1.3: Deploy the web application in all Distributed Cloud RE sites.

To deploy the web application in a K8s cluster, the following are required:

  • Kubeconfig of the K8s cluster. For this, use the vK8s kubeconfig downloaded in the previous step.
  • Manifest file of your web application. Download the sample manifest this example uses and edit its fields as per your application.
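
If you are adapting your own application rather than using the sample, note that each service in the manifest is a standard Kubernetes Deployment paired with a Service. The following sketch is illustrative only; the image name and port are placeholders, not values from the actual sample manifest:

          # Illustrative manifest entry for one service (placeholders marked)
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: frontend
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: frontend
            template:
              metadata:
                labels:
                  app: frontend
              spec:
                containers:
                  - name: server
                    image: example.registry.io/frontend:v1  # placeholder image
                    ports:
                      - containerPort: 8080
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: frontend  # referenced later as frontend.<namespace> in the origin pool
          spec:
            selector:
              app: frontend
            ports:
              - port: 8080
                targetPort: 8080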

Enter the following command to deploy the application:

          kubectl --kubeconfig ves_adn_adn.yaml --namespace adn apply -f kubernetes-manifests.yaml

Replace ves_adn_adn.yaml with the name of your downloaded kubeconfig file and adn with the namespace created in Step 1.1.

Note: You can download the kubectl application here: kubectl.

This completes deployment of the application across all RE sites.
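
To confirm the rollout, list the pods through the same kubeconfig. Because the ves-io-all-res virtual site spans every RE site, each site runs its own copy of the pods:

          # List the application pods; they should reach Running state on each RE site
          kubectl --kubeconfig ves_adn_adn.yaml --namespace adn get pods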


Step 2: Deliver K8s App

Delivering the application requires creating a load balancer and an origin pool for the services. Origin pools consist of endpoints and clusters. Routes and advertise policies are also required to make the application available to the internet. In addition, this example shows how to create an app type object and associate it with the load balancer for API discovery, and how to delegate your subdomain to Distributed Cloud Services to manage the DNS and certificates for your domain.


Perform the following steps to create an origin pool and a load balancer for your application:

Create HTTP Load Balancer.
  • Select the Multi-Cloud App Connect service.
  • Select the namespace created in Step 1.1, where we previously deployed our application onto our vK8s cluster.
  • Navigate to Manage in the left menu. Select Load Balancers > HTTP Load Balancers from the options. Click Add HTTP Load Balancer to start load balancer creation.
Figure: Add HTTP Load Balancer
  • Enter a name for the load balancer.
  • Under Domains and LB Type in the List of Domains field, enter the domain(s) that will be matched to this load balancer.
  • Select the Load Balancer Type. For this quick start, we will select HTTPS with Automatic Certificate.
Figure: HTTP Load Balancer Basic Configuration
  • Scroll down to the Origins section and click Add Item to set up an origin pool.

  • Use the Origin Pool pull-down menu to find and click the Create new origin pool button.

  • Enter a name for the new origin pool.

  • In the Origin Servers section, click Add Item.

    • In the Select Type of Origin Server field, select K8s Service Name of Origin Server on given Sites.
    • Enter a name in the Service Name field in the form <unique-name>.<namespace>. For the quick start example, we'll enter frontend.adn.
    • Set this as a Virtual Site in the Site or Virtual Site field.
    • Set the virtual site to the same virtual site used when you created your vK8s cluster. In this example, we used ves-io-all-res.
    • For the Select Network on the site field, select vK8s Networks on Site.
    • Click Add Item to save the origin server.
    Figure: Load Balancer Origin Server
  • Enter the port number that the origin server is using.

  • Click Continue at the bottom of the page to create the origin pool.

Figure: Origin Pool Basic Configuration
  • Click Add Item to add this pool to the HTTP load balancer.
  • Click Save and Exit to complete the HTTP load balancer.
  • Press the Refresh button to update the TLS Info column in the load balancer table. After a few minutes, the TLS Info column shows a valid certificate, and the application is accessible through the domain you specified.
Figure: Load Balancer Created
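
Once the certificate is issued, a quick request from any machine confirms the application is reachable. The hostname below is a placeholder for the domain you entered in the load balancer configuration:

          # Expect an HTTP 200 status code from the hipster-shop frontend
          curl -s -o /dev/null -w "%{http_code}\n" https://your-domain.example.com/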

Step 3: Secure K8s App

Securing the web application requires you to set up ingress filtering using BGP ASN sets, a JavaScript challenge, DDoS protection using rate limiting, and a WAF.


The examples in this chapter demonstrate how to apply the JavaScript challenge and WAF to the load balancer to complete securing the application.

Note: The JavaScript challenge forces users to send requests through a browser, preventing automated attacks.

Create a Web Application Firewall (WAF) and apply it to the load balancer.
  • Select the Web App & API Protection service.

  • Select the namespace used to create the HTTP load balancer previously.

  • Navigate to Manage > Load Balancers > HTTP Load Balancers.

  • Click ... > Manage Configuration for your load balancer, and then click Edit Configuration in the top right to edit the load balancer's configuration.

  • Scroll down or click on Security Configuration in the left menu to go to the security configuration section.

  • Select Enable in the Web Application Firewall (WAF) field.

  • In the Enable field, click the Create new App Firewall button (use the pull-down menu to see the button).

    Figure: Create New Web Application Firewall

Perform the configuration using the following guidelines:

  • Enter a name for the Distributed Cloud WAF in the Metadata section.
  • Set the Enforcement Mode to Blocking to protect the website. The other option is Monitoring, which logs malicious activity but does not block any traffic.
  • Leave the Detection Settings at their defaults.
  • Click Continue to complete creating the WAF.
Figure: Web Application Firewall Creation
  • Click Save and Exit to add the WAF to the load balancer configuration.
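
After saving, you can check that enforcement is active by sending a request that carries an obvious attack signature. The payload below is a generic XSS probe and the hostname is a placeholder; with Blocking mode enabled, the WAF should reject the request instead of returning the normal page:

          # A request with a script-injection payload should be blocked by the WAF
          curl -s -o /dev/null -w "%{http_code}\n" "https://your-domain.example.com/?q=<script>alert(1)</script>"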

Step 4: Observe K8s App

You can monitor the deployed K8s application using Console monitoring features.


Step 4.1: Open the application site map.
  • Log into the Console and select the Distributed Apps service.
  • Change to your namespace.
  • Click App Site Map in the Sites section to get a global view of all your apps.
Figure: App Site Map Global View
  • Click on a site in the map to get details for that site. You can see the application's health overview along with a summary of requests and errors.
Figure: App Site Detail
Step 4.2: Open the load balancer dashboard.
  • In the sidebar menu, select Load Balancers > HTTP Load Balancers to see a list of HTTP load balancers.
  • Find your load balancer in the list and select Performance Monitoring. This opens the performance dashboard, which shows overall status such as a health score, the number of origin servers, end-to-end latency, requests per second, and throughput information.
Figure: Load Balancer Dashboard View
  • Click the Metrics tab to check metrics such as request rate, error rate, latency, and throughput.
Figure: Load Balancer Metrics View
  • Click the Traffic tab to see the request rate and overall throughput between the source, load balancer, and origin server.
Figure: Load Balancer Traffic View
  • Click the Origin Servers tab to check the origin servers and associated details such as requests, errors, latency, and RTT.
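
If these views look empty, it is usually because the application has not received traffic yet. A simple request loop from any shell generates enough data for the charts to populate (the hostname is a placeholder for your domain):

          # Send one request per second to generate data for the monitoring dashboards
          while true; do
            curl -s -o /dev/null https://your-domain.example.com/
            sleep 1
          done
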
Step 4.3: Open the application service mesh.
  • In the sidebar menu, select Service Mesh to see a list of service meshes.
  • Click the More button associated with the service mesh object for your application to open its service graph. The service graph shows how your application services connect to one another.
Figure: Service Mesh Service Graph View
  • Click on an endpoint to see more details and health information.
Figure: Service Mesh Service Endpoint Details
  • Click Explore Service to see more details.

  • Click the API Endpoints tab to display the distribution map of all requests hitting the app, along with the percentage split of each path per segment.

    Figure: Service Mesh Endpoints View for all Endpoints
  • Click the Dashboard tab to see latency distribution per service as well as request rate, latency, and throughput of services.

    Figure: Service Mesh Dashboard View

Concepts