App Delivery Network (ADN)
This guide provides instructions on how to deploy and secure network edge applications using F5® Distributed Cloud Console and F5 Distributed Cloud Mesh.
The following image shows the steps to deploy network edge applications:
The following image shows the topology of the example for the use case provided in this document:
Using the instructions provided in this guide, you can deploy your web application in Distributed Cloud virtual K8s (vK8s) clusters, deliver the application using a load balancer, advertise the application services on the Distributed Cloud global network (exposing them to the internet), protect the application using Distributed Cloud security features, and monitor the application using Console monitoring features.
The example shown in this guide deploys a microservices application called hipster-shop across the Distributed Cloud global network using Distributed Cloud vK8s. The application consists of the following services:
F5 Distributed Cloud Console SaaS account.
Note: If you do not have an account, see Create an Account.
Distributed Cloud vesctl utility.
Note: See vesctl for more information.
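The vesctl utility reads its API endpoint and credentials from a `~/.vesconfig` file. A minimal sketch, assuming an API certificate (.p12 bundle) downloaded from the Console; the tenant name and file path below are placeholders:

```yaml
# ~/.vesconfig -- tenant name and credential path are placeholders
server-urls: https://<tenant>.console.ves.volterra.io/api
p12-bundle: /path/to/api-credentials.p12
```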
Self-signed or CA-signed certificate for your application domain.
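If you bring your own certificate rather than using automatic certificates, a self-signed certificate suitable for testing can be generated with openssl. The domain below is a placeholder; substitute your delegated application domain:

```shell
# Generate a self-signed certificate and private key for testing.
# "hipster-shop.example.com" is a placeholder domain for this example.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout adn-key.pem -out adn-cert.pem -days 365 \
  -subj "/CN=hipster-shop.example.com"
```

For production, use a CA-signed certificate for your domain instead.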
The use case provided in this guide deploys the web application across all of the Distributed Cloud Regional Edge (RE) sites in the vK8s clusters. It then exposes the application to the Distributed Cloud global network using a Distributed Cloud load balancer and secures it using the Distributed Cloud security features. The following actions outline the activities involved in deploying the web app and securely exposing it to the internet.
A Distributed Cloud vK8s cluster is created, and the web application is deployed to the vK8s clusters in all RE sites using the cluster's kubeconfig and the application's K8s manifest.
The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created for each cluster with the required origin pool components, such as endpoints, health checks, and clusters. The appropriate route and advertise policy are enabled to expose the application to the internet. Also, the subdomain is delegated to Distributed Cloud Services to manage the DNS and certificates.
A WAF configuration is applied to secure the externally available load balancer VIPs.
The deployed services of the K8s application are monitored using the observability features such as load balancer and service mesh monitoring.
Step 1: Deploy K8s App
The following video shows the site deployment workflow:
Perform the following steps to deploy the web application in vK8s clusters:
Step 1.1: Log into the Console and create a namespace.
This example creates a sample namespace called adn.
- Click Add namespace and enter a name for your namespace. Click Save changes to complete creating the namespace.
Step 1.2: Create vK8s cluster and download its kubeconfig.
- Change to the namespace created in the previous step: click the application namespace option on the namespace selector and select the namespace from the dropdown list.
- Select Applications in the configuration menu and Virtual K8s in the options pane.
- Click Add Virtual K8s and enter a name for your vK8s cluster.
- Click the Select vsite ref button and select ves-io-all-res. Click the Select Vsite Ref button at the bottom to apply the virtual site to the vK8s configuration.
- Click Save and Exit to start creating the vK8s clusters in all RE sites.
- Once the vK8s object is created, click Kubeconfig for it and enter an expiration date to download its kubeconfig file.
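The same vK8s object can alternatively be created with vesctl. A rough sketch of the object as YAML; the object name is illustrative, and the field names follow the ves.io virtual_k8s schema as best understood:

```yaml
# vk8s.yaml -- object name is illustrative; namespace matches this example
metadata:
  name: adn-vk8s
  namespace: adn
spec:
  vsite_refs:
    - namespace: shared
      name: ves-io-all-res
```

This could then be applied with, for example, `vesctl configuration create virtual_k8s -i vk8s.yaml`.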
Step 1.3: Deploy the web application in all Distributed Cloud RE sites.
To deploy the web application in a K8s cluster, the following are required:
- Kubeconfig of the K8s cluster. For this, use the vK8s kubeconfig downloaded in the previous step.
- Manifest file of your web application. Download the sample manifest this example uses and edit its fields as per your application.
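For reference, each microservice in the sample manifest pairs a Deployment with a Service, roughly in the following shape. The image tag shown is illustrative:

```yaml
# Illustrative excerpt -- the actual manifest defines one such pair per service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: server
          # image tag is an assumption; use the tag from the sample manifest
          image: gcr.io/google-samples/microservices-demo/frontend:v0.8.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
```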
Enter the following command to deploy the application:
kubectl --kubeconfig ves_adn_adn.yaml --namespace adn apply -f kubernetes-manifests.yaml
Note: You can download the kubectl application here: kubectl.
This completes the deployment of the application across all RE sites.
Step 2: Deliver K8s App
Delivering the application requires creating a load balancer and an origin pool for the services. Origin pools consist of endpoints and clusters. Routes and advertise policies are also required to make the application available to the internet. In addition, this example shows how to create an app type object and associate it with the load balancer for API discovery. This use case also shows how to delegate your subdomain to Distributed Cloud Services to manage the DNS and certificates for your domain.
The following video shows the application delivery workflow:
Perform the following steps to create an origin pool and load balancer for your application:
Create HTTP Load Balancer.
- Select the Multi-Cloud App Connect service.
- Select the namespace created in step 1.1, where the application was previously deployed to vK8s.
- Navigate to Manage in the left menu and select HTTP Load Balancers from the options. Click Add HTTP Load Balancer to start load balancer creation.
- Enter a name for the load balancer.
- In the Domains and LB Type section, enter the domain(s) that will be matched to this load balancer in the List of Domains field.
- Select the Load Balancer Type. For this quick start, select HTTPS with Automatic Certificate.
- Scroll down to the Origins section and click Add Item to set up an origin pool.
- Use the Origin Pool pull-down menu to find and click the Create new origin pool button.
- Enter a name for the new origin pool.
- In the Origin Servers section, click Add Item.
- In the Select Type of Origin Server field, select K8s Service Name of Origin Server on given Sites.
- Enter a name in the Service Name field in the form <unique-name>.<namespace>. For the quick start example, we'll enter frontend.adn.
- Set this as a Virtual Site in the Site or Virtual Site field.
- Set the virtual site to the same virtual site used when you created your virtual Kubernetes cluster. In this example, we used ves-io-all-res.
- For the Select Network on the site field, select vK8s Networks on Site.
- Click Add Item to save the origin server.
- In the Port field, enter the port number that the origin server is using.
- Click Continue at the bottom of the page to create the origin pool.
- Click Add Item to add this pool to the HTTP load balancer.
- Click Save and Exit to complete the HTTP load balancer.
- Press the Refresh button to update the TLS Info column in the load balancer table. After a few minutes, the TLS info will show a valid certificate, and you'll be able to access the application through the domain you specified.
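The console steps above can also be expressed as configuration objects. A rough sketch of the origin pool and HTTP load balancer as vesctl-style YAML; the object names, domain, and port are assumptions for this example, and the field names follow the ves.io schemas as best understood:

```yaml
# origin_pool -- points at the frontend K8s service on the virtual site
metadata:
  name: frontend-pool
  namespace: adn
spec:
  origin_servers:
    - k8s_service:
        service_name: frontend.adn
        site_locator:
          virtual_site:
            namespace: shared
            name: ves-io-all-res
        vk8s_networks: {}
  port: 8080   # port the origin server listens on; adjust for your app
  loadbalancer_algorithm: ROUND_ROBIN
---
# http_loadbalancer -- domain is a placeholder
metadata:
  name: hipster-shop-lb
  namespace: adn
spec:
  domains:
    - hipster-shop.example.com
  https_auto_cert:
    http_redirect: true
  default_route_pools:
    - pool:
        namespace: adn
        name: frontend-pool
      weight: 1
```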
Step 3: Secure K8s App
The following video shows the workflow of securing the K8s application:
The examples in this chapter demonstrate how to set up the JavaScript challenge and WAF on the load balancer to complete securing the application.
Create a Web Application Firewall (WAF) and apply to the load balancer.
- Select the Web App & API Protection service.
- Select the namespace used to create the HTTP load balancer previously.
- Select HTTP Load Balancers.
- Click Manage Configuration for your load balancer, and then click Edit Configuration in the top right to edit the load balancer's configuration.
- Scroll down or click Security Configuration in the left menu to go to the security configuration section.
- In the Web Application Firewall (WAF) field, select Enable, then click the Create new App Firewall button (use the pull-down menu to see the button).
Perform the configuration using the following guidelines:
- Enter a name for the Distributed Cloud WAF in the Name field.
- Set the Enforcement Mode to Blocking to protect the website. The other option, Monitoring (the default), logs malicious activity but does not block any traffic.
- Leave the Detection Settings at their defaults.
- Click Continue to complete creating the WAF.
- Click Save and Exit to add the WAF to the load balancer configuration.
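As with the other objects, the WAF can be sketched as a vesctl-style YAML object; the name is illustrative, and the field names follow the ves.io app_firewall schema as best understood:

```yaml
# app_firewall -- enforcement mode set to blocking
metadata:
  name: hipster-shop-waf
  namespace: adn
spec:
  blocking: {}
```

The load balancer would then reference it from its own spec (for example, an `app_firewall` field naming `adn/hipster-shop-waf`), which corresponds to the Save and Exit step above.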
Step 4: Observe K8s App
You can monitor the deployed K8s application using Console monitoring.
The following video shows the workflow of using Console to monitor your application:
Step 4.1: Open the application site map.
- Log into the Console and change to your namespace.
- Select App Site Map in the Sites section to get a global view of all your apps.
- Click on a site in the map to get details for that site. You can see the application's health overview along with a summary of requests and errors.
Step 4.2: Open the load balancer dashboard.
- In the sidebar menu, select HTTP Load Balancers to see a list of HTTP load balancers.
- Find your load balancer in the list and select Performance Monitoring. The performance dashboard shows the overall status, such as a health score, the number of origin servers, end-to-end latency, requests per second, and throughput information.
- In the Metrics tab, check metrics such as request rate, error rate, latency, and throughput.
- Use the Traffic tab to see the request rate and overall throughput between source, load balancer, and origin server.
- In the Origin Servers tab, check the origin servers and their associated details, such as requests, errors, latency, and RTT.
Step 4.3: Open the application service mesh.
- In the sidebar menu, select Service Mesh to see a list of service meshes.
- Click the More button associated with the service mesh object for your application to open its service graph. The service graph shows the mesh of your application services.
- Click on an endpoint to see more details and health information.
- Click Explore Service to see more details.
- Use the API Endpoints tab to display the distribution map of all requests hitting the app and the percentage split of each path per segment.
- Use the Dashboard tab to see latency distribution per service, as well as request rate, latency, and throughput of services.