Infrastructure & App Management

Objective

This guide provides instructions on how to deploy Edge infrastructure and perform application management using VoltConsole and VoltMesh.

The steps to deploy and configure infrastructure and app management are:

Figure: Steps to Deploy and Configure Edge Infrastructure and App Management

The following image shows the topology of the example for the use case provided in this document:

Figure: Infrastructure and App Management Sample Topology

Using the instructions provided in this guide, you can provision and register the edge site, deploy apps to the edge site, connect the edge app to the cloud, secure the edge app, and operate the edge sites. Operating includes the full lifecycle management of the edge sites and applications.


Prerequisites

  • VoltConsole SaaS account.

    Note: If you do not have an account, see Create a Volterra Account.

  • Amazon Web Services (AWS) account.

    Note: This is required to deploy a Volterra site.

  • Volterra Industrial Gateway 5008 device.

  • Intel NUC as a commodity device with at least 4 vCPUs and 8 GB memory.

    Note: See Hardware Installation.

  • Volterra vesctl utility.

    Note: See vesctl for more information.

  • Docker.

  • The balenaEtcher software to flash the Volterra software image on to a USB drive.

  • A secret and policy document for encrypting your certificates and secrets using Volterra Blindfold.

    Note: See Blindfold for more information.

  • Self-signed or CA-signed certificate for your application domain.


Configuration

The use case provided in this guide deploys Volterra sites on the IGW and Intel NUC. The sites are added to a Volterra fleet and further segmented using labels and virtual sites. After that, apps are deployed to these sites and secure connectivity between the edge apps and cloud apps is configured. The use case also demonstrates how to operate heterogeneous devices by performing software upgrades on the sites and observing their health.

The following list outlines the sequence of activities performed for this use case:

  1. Volterra software is installed on the edge devices and registered on VoltConsole.

  2. Volterra fleet is created with the required network configurations, and the fleet label is applied to the sites to make them part of the fleet. The sites are further segmented using labels and virtual sites.

  3. Apps are deployed to the edge fleet using the following methods:

    • VoltConsole
    • Kubectl
    • Using CI/CD pipelines
  4. The edge and cloud apps are connected using the Volterra virtual host and associated ADC objects. The edge site is secured using network policies. The edge apps are secured using service policies and app firewall.

  5. The fleet of edge devices is operated like a single unified logical cloud. As part of that, the application version is upgraded on a sub-segment of edge sites. The operating system upgrade as well as an infrastructure software version upgrade is performed remotely.

  6. The heterogeneous set of sites is monitored for health and alerts using Volterra observability features.

The use case for this document assumes that an application called Hipster Webapp is deployed on AWS EKS in an Amazon VPC. A VoltMesh node is also deployed in the same VPC with the site name hipster-webapp-west. The K8s namespace name is hipster.

The application consists of the following services:

  • frontend
  • cartservice
  • productcatalogservice
  • currencyservice
  • paymentservice
  • shippingservice
  • emailservice
  • checkoutservice
  • recommendationservice
  • adservice
  • cache

Note: Ensure that you keep the Amazon Elastic IP VIPs ready for later use in configuration.
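
Before proceeding, you can optionally confirm that these backend services are running in the hipster namespace on the EKS cluster. A minimal check, assuming your EKS kubeconfig is saved in a file named hipster-eks-kubeconfig (hypothetical file name):

          # List the backend services and pods in the hipster namespace of the EKS cluster
          kubectl --kubeconfig hipster-eks-kubeconfig -n hipster get services
          kubectl --kubeconfig hipster-eks-kubeconfig -n hipster get pods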

Step 1: Provision

The following video shows the site provisioning workflow:

You can use any of the supported hardware devices. However, in the case of Volterra IGW or ISV devices, the software is pre-installed and you can power on, connect through Ethernet or Wi-Fi, and perform registration.

Perform the following steps to provision Volterra site on the commodity device:

Step 1.1: Download the Volterra software image.

Download the certified hardware image from the Volterra Node page.

Step 1.2: Create a site token.

Log into VoltConsole and create a site token as per the instructions in the Create a Site guide.

Note: You can also use an existing site token. If you do not have a token, create one to use later for registration.

Step 1.3: Install the software on the device.
  • Install the Volterra software using the downloaded image on the commodity device as per the instructions in the Install Volterra Node guide.

  • Log into your device terminal and perform the initial configuration. Initial configuration includes setting the site token and cluster name, and accepting the default values for the rest of the options such as network configuration.

Step 1.4: Perform site registration.
  • Log into VoltConsole and select Manage from the configuration menu and Site Management -> Registrations in the options.

  • Click Pending Registrations tab and find the registration request for your device. Click Approve.

  • Click Accept to confirm.

Note: Check your device status in the Other Registrations tab. The ONLINE status indicates that the site is provisioned and ready to use.

Step 1.5: Deploy web application and Volterra node in the Amazon VPC.

Perform the steps mentioned in the Step 1: Deploy Site chapter of the Secure Kubernetes Gateway guide to deploy the web application.

Note: This step performs automatic site registration.


Step 2: Deploy

This step creates labels and virtual sites to segment the sites and deploys the frontend services of the web application to these sites. The use case covered in this document presents two segments of sites consisting of Volterra IGW and Intel NUC devices, representing heterogeneous segments. The Volterra IGW and the Intel NUC are represented as production and staging environments respectively and labeled accordingly. Also, virtual sites are created with these labels to group the sites into staging and production, so the service can be deployed efficiently when there are a large number of sites in each segment.

The following video shows the segmentation and service deployment workflow:

Perform the following steps to label the sites, segment them using virtual sites, and deploy the frontend service.

Step 2.1: Log into VoltConsole, create labels, and add the labels to the edge sites.

Assigning custom labels requires you to create a known key and add the key with custom values to your sites.

Step 2.1.1: Create a known key.
  • Change to the shared namespace and select Manage -> Labels -> Known Keys. Click Add known key to open the known key creation form.
  • Enter a string in the Label key (Required) field. This example adds env to indicate the environment as the key.
  • Click Add key to complete creating the key.
Figure: Create Known Key
Step 2.1.2: Assign the label to the edge sites.

The label is added to the sites in the (key,value) format.

  • Change to system namespace and select Sites -> Site List. Find your IGW site and click ... -> Edit to open site edit form.
  • Click on the Labels field and select the key you created from the list of keys displayed. Type a value for the key. This example adds prod as the value representing the production environment.
  • Click Save changes.
Figure: Assign Label to the Site
  • Repeat the above steps to add the env key with the stage value to the Intel NUC site.
Step 2.2: Create virtual sites to segment sites.

Create virtual sites to differentiate the segment of sites representing the production environment from the segment representing the staging environment.

  • Change to the shared namespace and navigate to Manage->Virtual Sites. Click Add virtual site.
  • Enter a name and select CE for the Site Type field.
  • Click on the Selector Expression field and select env as the key and type prod as the value. Click Assign Custom Value 'prod'.
  • Click Continue to create the virtual site. This creates the virtual site that groups all sites representing production environment.
Figure: Fleet Basic Configuration
  • Repeat the above steps to create another virtual site with the key env and value stage for the Selector Expression field.
Step 2.3: Create a namespace and add vK8s in it.
  • Click to open the namespaces dropdown and click Manage namespaces.
  • Click Add namespace, enter a name for your namespace, and click Save.
  • Change to the created namespace, navigate to Applications -> Virtual K8s, and click Add virtual K8s.
  • Enter a name and click Select vsite ref. Select the virtual sites created in the previous step and click Select vsite ref again to add the virtual sites to the vK8s.
  • Click Add virtual K8s to complete creating the vK8s.
  • Wait for the vK8s creation to complete and click ... -> Kubeconfig to download the vK8s kubeconfig file.
Step 2.4: Deploy the webapp and frontend services using the vK8s.

You can deploy using any of the following three methods:

Using VoltConsole
  • Click on the vK8s object to load the vK8s deployments view.
  • Click Add Deployment and add the manifest of the webapp. Click Save. The deployments get created.
Figure: Deployment Using VoltConsole
  • Open the Services view, click Add service, and add the manifest of the frontend service. Click Save.
Figure: Services View
  • Open the Pods view and verify that the pods are created and their status is Running.
Figure: Pods View
Using Kubectl
  • Set the downloaded kubeconfig of the vK8s to the KUBECONFIG environment variable.
          export KUBECONFIG=<vK8s-kubeconfig>

  • Prepare a combined manifest file for both service and deployment.

  • Deploy the service using kubectl.

          kubectl apply -f <deployment-service>.yaml

  • Verify in the VoltConsole that the deployments are created and pods are running.

Note: You can use the ves.io/virtual-sites annotation to specify the production sites or staging sites in your deployment file. For example, ves.io/virtual-sites: staging-sites can be used to deploy the service to the staging sites.
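
As a quick check after applying the manifest, you can confirm that the annotation is present on the created objects and that the pods are placed on the selected virtual sites. A minimal sketch, assuming the deployment and service are both named frontend (adjust to the names in your manifest):

          # Inspect the virtual-site annotation on the deployed objects
          kubectl get deployment frontend -o yaml | grep ves.io/virtual-sites
          kubectl get service frontend -o yaml | grep ves.io/virtual-sites
          # Confirm the pods created on the selected virtual sites
          kubectl get pods -o wide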

Using CI/CD Pipeline
  • Open your CI configuration file and add the deployment targets with the kubeconfig settings and kubectl commands. This example shows sample CI targets entry for GitLab:
          deploy_staging:
            image: <image>
            stage: main
            script:
              - echo $KUBECONFIG_BASE64 | base64 -d > kubeconfig
              - kubectl --kubeconfig kubeconfig apply -f stage-deployment-service.yml
            retry: 2

          deploy_prod:
            image: <image>
            stage: main
            script:
              - echo $KUBECONFIG_BASE64 | base64 -d > kubeconfig
              - kubectl --kubeconfig kubeconfig apply -f prod-deployment-service.yml
            retry: 2

  • Add the annotations ves.io/virtual-sites: staging-sites and ves.io/virtual-sites: prod-sites in the staging and production deployment configuration files respectively.

  • Save the configuration files and trigger the CI/CD pipelines. This example shows the GitLab CI/CD pipelines:

Figure: Deployment of Service Using CI/CD
  • Verify in the VoltConsole that the service is deployed in the vK8s.

Step 3: Connect & Secure

Connecting the edge app and the cloud app requires configuring discovery of the cloud app and establishing reachability for the edge app on the edge site using virtual hosts. The edge site is then secured using network layer security policies. The edge app is secured using application layer security policies and app firewalls.

Note: This chapter provides the details for configuring the required components for virtual host. For detailed instructions on creation of virtual host, see Create Virtual Host.

The following video shows the service discovery and load balancer creation workflow:

The backend web app services are already deployed on the EKS cluster in the Amazon VPC. These services must be discovered and their reachability advertised to the edge sites.

Perform the following to connect and secure the edge network:

Step 3.1: Log into VoltConsole and create service discovery.
  • Navigate to Manage -> Site Management -> Discovery. Click Add discovery.
  • Set a name for the discovery object and select Kubernetes for the Type field.
  • Select Virtual Site for the Where field.
  • Click Select ref, select backend-sites as the virtual site, and click Select ref.
  • Select Site Local Network for the Network Type field.
  • Select K8s for the Discovery Service Access Information field and select Kubeconfig for the Oneoff field. Click on Kubeconfig.
  • Select Clear Secret for the Secret info field.
  • Apply Base64 encoding to the K8s kubeconfig and save the output (see the sketch after this list).
  • Enter the secret in the Location field.

Note: Prepend the string:/// prefix to the copied secret before entering it in the Location field.

  • Enter Base64 for the Secret Encoding field and click Apply.
  • Click Add discovery to complete creating the discovery.

Note: You can also encrypt your kubeconfig using the Volterra Blindfold. For more information, see Blindfold App Secrets.
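
A minimal sketch of preparing the Location value on your workstation, assuming the EKS kubeconfig is saved in a file named hipster-eks-kubeconfig (hypothetical file name); the string:/// prefix is added as described in the note above:

          # Base64-encode the kubeconfig on a single line (-w0 is GNU coreutils; omit it on macOS)
          base64 -w0 hipster-eks-kubeconfig > kubeconfig.b64
          # Prepend the string:/// prefix to form the value for the Location field
          echo "string:///$(cat kubeconfig.b64)"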

Step 3.2: Change to your app namespace and create endpoints.

Select Manage->Endpoints. Click Add endpoint and enter the configuration as per the following guidelines:

  • Enter a name in the Name field.

  • Select Virtual Site for the Where field and select the production virtual site created for the Select ref field. This example selects prod-sites.

  • Select Site Local Network for the network type.

  • Select Service Selector Info for Endpoint Specifier field.

  • Select Kubernetes for the Discovery field and Service Name for the Service field.

  • Enter frontend-prod.edge-apps as the service name. Here edge-apps is the K8s namespace name.

  • Enter 80 for the Port field.

  • Click Add endpoint to create endpoint.

  • Repeat the steps to create another endpoint for staging sites. This example sets staging-sites as the virtual site and frontend-stage.edge-apps as the service name.

Step 3.3: Create clusters.

Select Manage->Clusters. Click Add cluster and enter the configuration as per the following guidelines:

  • Enter a name in the Name field.

  • Select the production sites endpoint created for the Select endpoint field.

  • Select the healthcheck object created for the Select healthcheck field.

  • Select Distributed for the Endpoint Selection field. Set 0 for the Connection Timeout and HTTP Idle Timeout fields.

  • Click Add cluster.

  • Repeat the above steps to create another cluster for staging sites.

Step 3.4: Add routes towards the created clusters.

Select Manage -> Routes. Enter a name and click Add route. Enter the configuration as per the following guidelines:

  • Click Add match. Select ANY for the HTTP Method field and Regex for the Path Match field. Enter (.*?) for the Regex field and click Add match.

  • Select Destination List for the Route action field and click Add destination. Click Select cluster and select the cluster object created for production sites. Set 0 for the Weight field. Click Select cluster and Add destination to add the cluster.

  • Set 0 for the Timeout field and click Add route.

  • Click Add route again to create the route.

  • Repeat the steps to create another route with the cluster created for the staging sites.

Step 3.5: Add advertise policy.
  • Select Manage -> Advertise Policies. Click Add advertise policy and select Virtual Network for the Where field.
  • Click Select ref and select public network. Click Select ref to add the network.
  • Select 80 as the port.
  • Click Add advertise policy to complete creating the advertise policy.

Note: This advertise policy can be used for both production and staging set of sites.

Step 3.6: Encrypt the private key of the certificate using the Volterra Blindfold.

Use the public key and policy document obtained earlier. The following example generates a secret for your application domain. Store the output to a file.

          vesctl request secrets encrypt --policy-document secure-kgw-demo-policy-doc --public-key hipster-co-public-key tls.key > tls.key.secret

Note: The tls.key is the private key of the certificate you generated.
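
If you do not already have a certificate for your application domain, you can generate a self-signed one as in the following sketch; it uses the example domain configured later in this guide, so substitute your own domain:

          # Generate a self-signed certificate (tls.crt) and private key (tls.key) for the app domain
          openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
            -keyout tls.key -out tls.crt \
            -subj "/CN=hipster-shop.edge-apps-prod.playground.helloclouds.app"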

Step 3.7: Add virtual hosts.

Select Manage -> Virtual Hosts. Click Add virtual host and set the configuration as per the following guidelines:

  • Enter name, application domain, and your proxy type. This sample uses HTTP PROXY as the proxy type and hipster-shop.edge-apps-prod.playground.helloclouds.app as the domain.

  • Click Select route and select the previously defined route for the production sites.

  • Click Select advertise policy and select the previously created advertise policy.

  • Click Add virtual host.

  • Repeat the steps to create another virtual host for the staging environment.

At this point, the apps are accessible through the configured domains. You can verify this using curl or a browser.
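
For example, a quick check from the command line, assuming the production domain configured in the previous step:

          # Expect an HTTP 200 response through the advertised virtual host
          curl -sS -o /dev/null -w "%{http_code}\n" http://hipster-shop.edge-apps-prod.playground.helloclouds.app/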

Step 3.8: Verify that the egress traffic from one of the pods is allowed.

The following example commands check the egress traffic to IP address 8.8.8.8, DNS resolution using the Google DNS server, and access to GitLab.

          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- ping 8.8.8.8 -c 2
          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- nslookup github.com 8.8.4.4
          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- wget --server-response -O /dev/null https://gitlab.com

Step 3.9: Create network policies to allow egress to specific destination and deny all other destinations.
Step 3.9.1: Create network policy rules.

Create a network policy rule to allow egress communication to the Google DNS server.

  • Change to the system namespace and select Security -> Advanced -> Network Policy Rules. Click Add network policy rule.

  • Set a name for the rule and select Allow for the Action field.

  • Select IP Prefix for the Remote Endpoint field and enter 8.8.8.8/32 for the Prefix field.

  • Select udp for the Protocol field and 53 for the Port Ranges field.

  • Click Add network policy rule to complete creating the rule.

  • Click Add network policy rule to create another rule. Set a name, select Allow for the Action field, and select tcp for the Protocol field. Click Add network policy rule to create the rule.

Note: The rest of the egress traffic gets implicitly denied.

Step 3.9.2: Create network policies.
  • Select Security -> Advanced -> Network Policies. Click Add network policy.

  • Set a name for the policy and click Select egress rule.

  • Select the previously created rules and click Select egress rule to apply the rules.

  • Click Add network policy to complete creating the policy.

Step 3.9.3: Create network policy set.
  • Select Security -> Advanced -> Network Policy Sets. Click Add network policy set.

  • Set a name for the policy set and click Select policy.

  • Select the previously created policy and click Select policy to apply the policy.

  • Click Add network policy set to complete creating the policy set.

Step 3.10: Create service policies to allow egress to specific domain and deny all other destinations.
Step 3.10.1: Create service policy rules.

Create a service policy rule to allow egress communication to GitHub.

  • Select Security -> Advanced -> Service Policy Rules. Click Add service policy rule.

  • Set a name for the rule and select Allow for the Action field.

  • Click Add exact value in the Domain Matcher section and enter github.com in the Exact Values field.

  • Click Add service policy rule to complete creating the rule.

Note: The rest of the egress traffic gets implicitly denied.

Step 3.10.2: Create service policies.
  • Select Security -> Advanced -> Service Policies. Click Add service policy.

  • Set a name for the policy and select First Rule Match for the Rule Combining Algorithm field.

  • Click Select Rule and select the created service policy rule. Click Select rule to apply the rule.

  • Click Add service policy to complete creating the policy.

Step 3.10.3: Create service policy set.
  • Select Security -> Advanced -> Service Policy Sets. Click Add service policy set.

  • Set a name for the policy set and click Select policy.

  • Select the previously created policy and click Select policy to apply the policy.

  • Click Add service policy set to complete creating the policy set.

Step 3.11: Create network firewall and add to your fleet of sites.
  • Select Security -> Firewall -> Network Firewall. Click Add network firewall.

  • Select Forward Proxy Service Policy Set in the Select Forward Policy Configuration field. Select the service policy created in the previous step for the Forward Proxy Service Policy Set field.

  • Select Network Policy Set (Legacy mode) in the Select Network Policy Configuration field. Select the network policy created in the previous step for the Network Policy Set (Legacy mode) field.

  • Click Continue to create the network firewall.

  • Navigate to Manage -> Fleets. Select your fleet from the list of displayed fleets and click ... -> Edit to open the fleet edit form.

  • Click Select network firewall and select the created firewall. Click Select network firewall and Save changes to apply the firewall to the fleet of sites.

Step 3.12: Verify that the policies are effective.

Note: Set up the KUBECONFIG, pod, and namespace environment variables before entering the kubectl commands.
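
A minimal sketch of setting these variables, using the example values from the commands that follow (adjust them to your own kubeconfig, namespace, and pod name):

          export KUBECONFIG=edge-apps-kubeconfig
          export NAMESPACE=edge-apps
          export POD=frontend-578b65cdff-r48bf
          # Example invocation using the variables
          kubectl -n "$NAMESPACE" exec -t "$POD" -- nslookup github.com 8.8.8.8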

Verify that the egress traffic to GitHub and DNS resolution using the Google DNS is allowed:

          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- nslookup github.com 8.8.8.8
          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- git clone https://github.com/kubernetes-up-and-running/kuard.git

Verify that the rest of the egress traffic is dropped:

          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- nslookup github.com 8.8.4.4
          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- git clone https://gitlab.com/graphviz/graphviz.git
          kubectl --kubeconfig edge-apps-kubeconfig -n edge-apps exec -t frontend-578b65cdff-r48bf -- ping 8.8.8.8

Step 3.13: Create application firewall and add to your virtual hosts.
  • Change to your application namespace and select Security -> App Firewall -> App Firewall. Click Add app firewall.

  • Enter a name and click Add firewall to create the firewall in the default blocking mode.

  • Navigate to Manage->Virtual Hosts and select your virtual host from the displayed list. Click ... -> Edit to open the virtual host edit form.

  • Scroll down and click WAF Config to open the WAF config form. Select WAF for the WAF Config field and click Select WAF. Select the created app firewall and click Select WAF to apply. Click Apply to add the WAF configuration.

  • Click Save changes.


Step 4: Operate

This chapter shows how to perform upgrades to your apps on the edge sites and also how to perform infrastructure upgrades to the sites. Operating your sites also includes observability for your deployed applications, and this example shows how to use VoltConsole to monitor your sites and applications for health and security.

The following video shows the workflow of operating your edge sites:

Perform the following to carry out upgrades and observe the sites:

Step 4.1: Perform upgrades to applications.

DevOps teams usually perform staging upgrades first and, when those are successful, update the production environment.

Step 4.1.1: Upgrade from your CI/CD pipelines.
  • Verify your current application version. This example shows verification using kubectl:
          kubectl describe deployments frontend-staging | grep label

  • Open your CI/CD configuration file for staging.
  • Update the application version in the app.kubernetes.io/version field (see the sketch after this list).
  • Commit the changes and trigger the pipeline.
  • Wait for the pipeline to successfully complete and verify the version using kubectl.
  • Repeat the steps for the production environment with frontend-prod as the deployment.
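
The following is a minimal sketch of the staging version bump referenced above; the version value is illustrative, and the file and deployment names follow the examples used earlier in this guide:

          # Bump the version label in the staging manifest (illustrative version value)
          sed -i 's#app.kubernetes.io/version: .*#app.kubernetes.io/version: "2.0.1"#' stage-deployment-service.yml
          # Commit and push to trigger the CI/CD pipeline
          git commit -am "Bump frontend staging version" && git push
          # After the pipeline completes, confirm the new version
          kubectl describe deployments frontend-staging | grep app.kubernetes.io/version
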
Step 4.1.2: Upgrade from VoltConsole.
  • Log into VoltConsole and change to your application namespace.

  • Navigate to Applications -> Virtual K8s. Click on your vK8s object to open its deployments view.

  • Click ... -> Edit against your deployments to open the YAML/JSON edit form and update the app.kubernetes.io/version field value.

  • Click Save.

  • Click Services tab to load the services view and click ... -> Edit for your service to open the YAML/JSON edit form.

  • Update the app.kubernetes.io/version field value and click Save.

Step 4.2: Perform infrastructure and operating system software upgrades.

You can perform Volterra software upgrades or OS upgrades from VoltConsole. You can carry out the upgrades for each site individually or for multiple sites using the fleet. The OS upgrade is an A/B upgrade: the newest version is downloaded, installed in another partition, and the device boots into the new partition. A health check is then carried out, and if the health is good, the older version is marked as inactive and the device continues with the newest version.

Step 4.2.1: Upgrade individual sites.
  • Log into VoltConsole and navigate to Sites -> Site List. Find your site from the displayed list.
  • Click the Upgrade button in the SW version (Current/Status) field for a Volterra software upgrade. Click Upgrade in the confirmation window. The field shows In progress during the upgrade.
  • Wait for the upgrade to be completed. The SW version (Current/Status) field shows Successful when the upgrade is completed successfully.
  • Repeat the same steps in the OS Version (Current/Status) field to perform OS upgrades.
Step 4.2.2: Perform Volterra software upgrade to multiple sites using the fleet.
  • Log into VoltConsole and navigate to Manage -> Fleets. Click Add fleet to open fleet creation form.
  • Set a name for your fleet and enter a label value for the Fleet Label Value field.
  • Set a version in the Software Version field. An example version is crt-2020627-310.
  • Click Add fleet to create the fleet. You can verify your fleet labels in the shared namespace by navigating to Manage -> Labels -> Known Keys.
  • In the system namespace, navigate to Sites -> Site List and click ... -> Edit to open site edit form.
  • Click in the Labels field and select ves.io/fleet as the label and select the value as that of the fleet you created.
  • Scroll down and select Fleet Version Overrides for the Site Software Version Overrides field.
  • Click Save.
  • Repeat the same for the other sites for which you want to apply this software version.
  • Verify that the sites now show the applied software version.

Note: You can also edit an existing fleet to apply upgrades and apply the filter Site Software Override Type on the site list to display the sites that have software version set by their fleet.

Step 4.2.3: Perform OS upgrade to multiple sites using the fleet.
  • Log into VoltConsole and navigate to Manage -> Fleets. Find the fleet you created in the previous step and click ... -> Edit to open the fleet configuration form.
  • Scroll down to the Operating System Version field and enter a version value.
  • Click Save.
  • Verify that the sites that are part of this fleet now show the applied OS version.
Step 4.3: Monitor the upgrades using the notifications and alerts.
  • Navigate to Notifications -> Audit logs. A list of logs is displayed.
  • Verify the Request field for the fleet and site upgrades.
Step 4.4: Monitor site health.
  • Navigate to Sites -> Site List and click on your site from displayed sites. This opens the site dashboard. Check the system health, metrics, alerts, and software version section to inspect health, resource consumption, and upgrade information.
  • Click Site Status to check the software and OS version status and upgrade information.
  • Change to your application namespace and navigate to Applications -> Virtual Sites. A list of cards is displayed with overview information for each virtual site.
  • Hover over a site (represented by a blue dot) in your virtual site entry to display the CPU and memory consumption of that site.
