Create and Deploy Managed K8s

Objective

This document provides instructions on how to create a managed Kubernetes (K8s) cluster and deploy it on an F5® Distributed Cloud App Stack site. A managed K8s cluster is similar in principle to a regular K8s cluster, and you can use command-line interface (CLI) tools such as kubectl to perform operations that are common to regular K8s. The F5 Distributed Cloud Platform provides a mechanism to easily deploy applications using managed K8s across App Stack sites that form DC clusters. To learn more about deploying an App Stack site, see Create App Stack Site.

Using the instructions provided in this guide, you can create a managed K8s cluster, associate it with an App Stack site, and deploy applications using its kubeconfig file.

Note: Managed K8s is also known as physical K8s.


Virtual K8s and Managed K8s

You can use both Distributed Cloud Virtual K8s (vK8s) and managed K8s for your applications. However, the major difference is that you can create managed K8s only on App Stack sites, while vK8s can be created on all types of Distributed Cloud sites, including Distributed Cloud Regional Edge (RE) sites. Also, you can deploy managed K8s across all namespaces of an App Stack site and manage the K8s operations using a single kubeconfig, whereas vK8s is created per namespace per site. While you can use vK8s and managed K8s in the same site, operations on the same namespace using both are not supported. See Restrictions for more information.


Reference Architecture

The image below shows reference architecture for the CI/CD jobs for app deployments to production and development environments. The production and development environments are established in different App Stack sites. Each site has CI/CD jobs and apps deployed in separate namespaces. The git-ops-prod and git-ops-dev namespaces have CI/CD jobs, such as ArgoCD, Concourse CI, Harbor, etc. These are integrated using the in-cluster service account. Services such as ArgoCD UI can be advertised on a site local network and users can access it for monitoring the CD dashboard.

Figure: Reference Deployment Using Managed K8s

This image shows different types of service advertisements and communication paths for services deployed in different namespaces in the sites:

Figure: Intra-Namespace and Remote Communication for Managed K8s

Note: You can disable communication between services of different namespaces.


Prerequisites


Restrictions

The following restrictions apply:

  • Using vK8s and managed K8s in the same site is supported. However, if a namespace is already used locally by a managed K8s cluster, then creating objects in that namespace using vK8s is not supported for that site. Conversely, if a namespace is already used by vK8s, then operations on that namespace in the local managed K8s cluster are not supported.

  • Managed K8s is supported only for an App Stack site and is not supported for other site types.

  • Managed K8s can be enabled only by applying it before the App Stack site is provisioned. Enabling it by updating an existing App Stack site is not supported.

  • Managed K8s cannot be disabled once it is enabled.

  • For managed K8s, Role and RoleBinding operations are supported via kubectl. However, ClusterRoleBinding, PodSecurityPolicy, and ClusterRole are not supported via kubectl. These can be configured only through F5 Distributed Cloud Console.

Note: Because BGP advertisement for VIPs is included by default in Cloud Services-managed K8s, the NodePort service type is not required. The Kubernetes LoadBalancer and NodePort service types do not advertise the service outside the K8s cluster; they function in the same way as the ClusterIP service type.
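For illustration, a service manifest such as the following (the service name, app label, and ports are hypothetical) is reachable inside the cluster in the same way regardless of whether its type is ClusterIP, NodePort, or LoadBalancer:

```yaml
# Hypothetical example: service and app names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  # On Cloud Services-managed K8s, LoadBalancer and NodePort behave
  # like ClusterIP; external exposure is handled by VIP advertisement.
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
```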


Configuration

Enabling a managed K8s cluster on an App Stack site requires you to first create the cluster object and apply it during App Stack site creation. You can also create a new managed K8s cluster as part of the App Stack site creation process. This documentation shows creating the K8s cluster separately and attaching it to an App Stack site during creation.

Create Managed K8s Cluster

Perform the following steps to create a managed K8s cluster:

Step 1: Start K8s cluster object creation.
  • Log into Console.

  • Click Cloud and Edge Sites.

  • Click Manage > Manage K8s > K8s Clusters.

  • Click Add K8s Cluster.

Figure: K8s Cluster Creation

Step 2: Configure metadata and access sections.
  • In the Metadata section, enter a name.

  • Optionally, set labels and add a description.

  • In the Access section, select the Enable Site Local API Access option from the Site Local Access menu. This enables local access to the K8s cluster.

  • In the Local Domain field, enter a local domain name for the K8s cluster in the <sitename>.<localdomain> format. The local K8s API server will become accessible via this domain name.

  • From the Port for K8s API Server menu, you can select Custom K8s Port and enter a port value in the Custom K8s Port field. This example uses the default option, Default k8s Port.

Figure: Access Section Configuration

  • From the VoltConsole Access menu, select the Enable VoltConsole API Access option.

Note: You can download the global kubeconfig for the managed K8s cluster only when you enable VoltConsole API access. Also, if you do not enable the API access, monitoring of the cluster is done via metrics only.

Step 3: Configure security section.

The security configuration is enabled with default settings for pod security policies, K8s cluster roles, and K8s cluster role bindings. Optionally, you can enable custom settings for these fields.

Step 3.1: Configure custom pod security policies.
  • In the Security section, select the Custom Pod Security Policies option from the Pod Security Policies menu.

  • Use the List of Pod Security Policy List drop-down menu to select a K8s pod security policy from the list of displayed options, or create and attach a new policy. This example shows creating a new policy.

Note: The default pod security policy allows all of your workloads. If you configure a custom policy, everything is disabled, and you must explicitly configure pod security policy rules to allow workloads.

Create a new policy per the following guidelines:

  • Click in the List of Pod Security Policy List field, and then click Create new K8s Pod Security Policy to open a new policy form.

Figure: New Custom Pod Security Policy

  • In the Metadata section, enter a name.

  • Optionally, set labels and add a description.

  • Under the Pod Security Policy Specification field, click Configure and perform the following:

Step 3.1.1: Optionally, configure the privileges and capabilities.

In the Privilege and Capabilities section, configure the options per the following guidelines:

  • Enable the Privileged, Allow Privilege Escalation, and Default Allow Privilege Escalation options.

  • From the Change Default Capabilities menu, select the Custom Default Capabilities option.

  • In the List of Capability List field, select the See Common Values option from the Enter capability list drop-down menu. Select an option from the list. You can add more choices using the Add item option.

  • From the Allowed Add Capabilities menu, select the Allowed Add Capabilities option.

  • In the List of Capability List field, select the See Common Values option from the Enter capability list drop-down menu. Select an option from the list. You can add more choices using the Add item option.

  • From the Drop from K8s Default Capabilities menu, select the Drop Capabilities option.

  • In the List of Capability List field, select the See Common Values option from the Enter capability list drop-down menu. Select an option from the list. You can add more choices using the Add item option.

Step 3.1.2: Optionally, configure the volumes and mounts.

Configure the Volumes and Mounts section per the following guidelines:

  • Click Add item under List of Volume, List of Allowed Flex Volumes, Host Path Prefix, and List of Allowed Proc Mounts fields. Enter the values for those fields. You can add multiple entries using the Add item option for each of these fields.

Note: Leaving the List of Volumes field empty disables all volumes. For the rest of the fields, the default values are applied. For Host Path Prefix, you can turn on the Read Only slider to mount a read-only volume.

  • Enable the Read Only Root Filesystem option so that containers run with a read-only root file system.

Step 3.1.3: Optionally, configure host access and sysctl.

Configure the Host Access and Sysctl section per the following guidelines:

  • Enable the Host Network, Host IPC, and Host PID options to allow the use of host network, host IPC, and host PID in the pod spec.

  • Enter port ranges in the Host Ports Ranges field to expose those host ports.

Step 3.1.4: Optionally, configure security context.

Configure the Security Context section per the following guidelines:

  • From the Select Runs As User menu, select Run As User.

  • From the Select Runs As Group menu, select Run As Group.

  • From the Select Supplemental Groups menu, select Supplemental Groups Allowed.

  • From the Select FS Groups menu, select FS Groups Allowed.

  • For each of the fields above, enter the following configuration:

    • Enter ID values in the Starting ID and Ending ID fields. You can add more ranges using the Add item option.

    • From the Rule menu, select the See Common Values option to expand the choices.

    • Select one option only from MustRunAs, MayRunAs, or RunAsAny.

  • Click Apply.

  • Click Continue to create and apply the pod security policy to the K8s cluster.

Note: You can add more pod security policies using the Add item option.
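As a rough sketch, the options in the pod security policy form map onto the fields of the upstream PodSecurityPolicy object (removed from upstream Kubernetes in v1.25); the values below are illustrative only, not a recommendation:

```yaml
# Illustrative sketch of the fields the custom policy form configures.
# All names and values here are hypothetical examples.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-psp
spec:
  privileged: false                      # Privilege and Capabilities
  allowPrivilegeEscalation: false
  allowedCapabilities: ["NET_BIND_SERVICE"]
  requiredDropCapabilities: ["ALL"]
  volumes: ["configMap", "secret", "emptyDir"]   # Volumes and Mounts
  readOnlyRootFilesystem: true
  hostNetwork: false                     # Host Access and Sysctl
  hostIPC: false
  hostPID: false
  hostPorts:
    - min: 8000
      max: 8080
  runAsUser:                             # Security Context
    rule: MustRunAs
    ranges:
      - min: 1000
        max: 2000
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
```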

Step 3.2: Configure K8s cluster role.
  • From the K8s Cluster Roles menu, select the Custom K8s Cluster Roles option.

  • Click in the List of Cluster Role List field and select a role from the displayed list or click Create new K8s Cluster Role to create and attach it. This example shows creating a new cluster role.

Configure the cluster role object per the following guidelines:

  • In the Metadata section, enter a name.

  • Optionally, set labels and add a description.

  • In the Cluster Role section, select Policy Rule List or Aggregate Rule from the Rule Type menu.

    • For the Policy Rule List option, click Add Item.

    • From the Select Resource menu, select List of Resources or List of Non Resource URL(s).

      • For the List of Resources option, perform the following:

        • In the List of API Groups field, click in the Enter api groups menu, and then click See Common Values. Select an option. You can add more than one list using the Add item option.

        • In the List of Resource Types field, click in the Enter resources type menu, and then click See Common Values. Select an option. You can add more than one resource using the Add item option.

        • In the List of Resource Instances field, click Add item. Enter a list of resource instances. You can add more than one resource using the Add item option.

        • In the List of Allowed Verbs field, click in the Enter allowed verbs menu, and then click See Common Values. Select an option. You can add more than one entry using the Add item option. Alternatively, you can enter the asterisk symbol (*) to allow all operations on the resources.

      • For List of Non Resource URL(s) option, perform the following:

        • Enter URLs that do not represent K8s resources in the List of Non Resource URL(s) field. You can add more than one entry using the Add item option.

        • Enter allowed list of operations in the List of Allowed Verbs field. You can add more than one entry using the Add item option. Alternatively, you can enter the asterisk symbol (*) to allow all operations on the resources.

      • Click Add Item.

Note: You can add more than one list of resources for the Policy Rule List option.

  • For the Aggregate Rule option, click on the Selector Expression field and set the label expression by performing the following:

    • Select a key or type a custom key by clicking Add label.

    • Select an operator and select a value or type a custom value.

Note: You can add more than one label expression for the aggregate rule. This aggregates all rules in the roles selected by the label expressions.

  • Click Continue to create and assign the K8s cluster role.

Note: You can add more cluster roles using the Add item option.
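The API groups, resource types, resource instances, and verbs configured above correspond to the rules of a standard RBAC ClusterRole. As noted under Restrictions, cluster roles are managed through Console rather than kubectl; the sketch below (with a hypothetical name and rules) only illustrates that mapping:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-reader          # hypothetical name
rules:
  # Policy Rule List: API groups, resource types, and allowed verbs.
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
  # Non-resource URLs with their allowed verbs.
  - nonResourceURLs: ["/healthz"]
    verbs: ["get"]
```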

Step 3.3: Configure K8s cluster role bindings.
  • From the K8s Cluster Role Binding List menu, select the K8s Cluster Role Bindings option.

  • Click on the List of Cluster role bindings field and then select a role binding from the displayed list or click Create new K8s Cluster Role Binding to create and attach it. This example shows creating a new cluster role binding.

Configure the cluster role binding per the following guidelines:

  • Enter a name in the Metadata section.

  • In the K8s Cluster Role section, select the role you created in the previous step.

  • In the Subjects section, select one of the following options in the Select Subject field:

    • Select User and then enter a user in the User field.

    • Select Service Account and then enter a namespace and service account name in the Namespace and Name fields, respectively.

    • Select Group and enter a group in the Group field.

    • Click Add Item.

  • Click Continue to create and assign the K8s cluster role binding.

Note: You can add more cluster role bindings using the Add item option.
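Conceptually, the binding associates a cluster role with a user, group, or service account subject, just like a standard RBAC ClusterRoleBinding. The sketch below uses hypothetical names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-binding         # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-reader          # the role selected in the K8s Cluster Role section
subjects:
  # One of User, Group, or ServiceAccount, as in the Subjects section.
  - kind: ServiceAccount
    name: default
    namespace: test
```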

Step 3.4: Advanced K8s cluster security settings.

Figure: Advanced PK8s security settings

  • If you have Docker insecure registries for this cluster, select Docker insecure registries from the Docker insecure registries menu, and enter each insecure registry into the list. Use Add item to add more insecure registries.
  • If you want to allow Cluster Scoped Roles, RoleBindings, MutatingWebhookConfiguration, and ValidatingWebhookConfiguration Access, select the Allow K8s API Access to ClusterRoles, ClusterRoleBindings, MutatingWebhookConfiguration and ValidatingWebhookConfiguration option.

Note: Once webhooks are allowed, you can use kubectl with any managed K8s global kubeconfig to set up the webhook.

For example, to create a webhook:

$ kubectl --kubeconfig kubeconfig_global.yml create -f https://raw.githubusercontent.com/chaos-mesh/chaos-mesh/master/config/webhook/manifests.yaml

And to delete the webhook:

$ kubectl --kubeconfig kubeconfig_global.yml delete -f https://raw.githubusercontent.com/chaos-mesh/chaos-mesh/master/config/webhook/manifests.yaml

Step 4: Configure cluster-wide applications.

The default option is set to No Cluster Wide Applications.

  • To configure this option, first enable the Show Advanced Fields option and then select Add Cluster Wide Applications from the K8s Cluster Wide Applications menu.

  • Click Configure.

  • Enable the Show Advanced Fields option.

  • Click Add Item.

  • From the Select Cluster Wide Application menu, select an option.

  • If you select Argo CD, click Configure and complete the settings and click Apply.

  • After you finish, click Add Item.

  • Click Apply.

Step 5: Complete creating the K8s cluster.

Click Save and Exit to complete creating the K8s cluster object.


Attach K8s Cluster to App Stack Site

Perform the following steps to attach a K8s cluster:

Note: This example does not show all the steps required for App Stack site creation. For complete instructions, see Create App Stack Site.

Step 1: Start creating the App Stack site.
  • Log into Console.

  • Click Cloud and Edge Sites.

  • Click Manage > Site Management > App Stack Sites.

  • Click Add App Stack Site.

Step 2: Attach the K8s cluster.
  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • From the Site Local K8s API access menu, select Enable Site Local K8s API access.

  • Click on the Enable Site Local K8s API access field and select the K8s cluster created in the previous step.

Step 3: Complete creating App Stack site.
  • Install nodes and complete registration for the App Stack site. For more information, see the Perform Registration chapter of the Create App Stack Site document.

  • Click Save and Exit to complete creating the App Stack site.

Step 4: Download the kubeconfig file for the K8s cluster.
  • Navigate to Managed K8s > Overview.

  • Click ... for your App Stack site enabled with managed K8s and perform one of the following:

    • Select Download Local Kubeconfig for managing your cluster locally, when the cluster is on an isolated network.

    • Select Download Global Kubeconfig for managing your cluster remotely from anywhere.

Note: The Download Global Kubeconfig option is enabled only when you enable Console API access.

  • Save the kubeconfig file to your local machine.

You can use this kubeconfig file to perform operations on the local K8s cluster, similar to regular K8s operations using tools like kubectl.

Note: You may have to manage name resolution for your domain for K8s API access.

Note: The local kubeconfig expires in 15 days regardless of your credential expiration policy. You will need to download a new kubeconfig after that.

Step 5: Deploy applications to the managed K8s cluster.

Prepare a deployment manifest for your application and deploy using the kubeconfig file downloaded in the previous step.

  • Deploy the application using the following command:

kubectl apply -f k8s-app-manifest.yaml --kubeconfig k8s-kubecfg.yaml

  • Verify the deployment status using the following command:

kubectl get pods --kubeconfig k8s-kubecfg.yaml
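As a starting point, k8s-app-manifest.yaml could contain a minimal Deployment such as the following (the app name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25      # placeholder image
          ports:
            - containerPort: 80
```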

Note: In case you are using the local kubeconfig to manage the cluster, ensure that you resolve the domain name of the cluster to the IP address of the cluster. You can obtain the domain name from the kubeconfig file.


Manage PK8s with VMs

Managed K8s (also known as physical K8s or pK8s) can be provisioned with the ability to run VMs in Kubernetes. App Stack sites also support VMs.

KubeVirt allows your virtual machine workloads to be run as pods inside a Kubernetes cluster. This allows you to manage them with Kubernetes without having to convert them to containers.

Step 1: Download Kubeconfig file for your pk8s.
  • Log into Console.
  • Select the Cloud and Edge Sites service.
  • Click Managed K8s > Overview and find your K8s cluster.
  • Click ... under the Actions column for your cluster to download its kubeconfig. You can download the local or global kubeconfig file.

Figure: Download PK8s Kubeconfig

Step 2: Create a YAML manifest for your VM configuration.

Below is a sample VM configuration. For this example, we have named this YAML manifest vm-sample.yaml.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

Step 3: Create the VM using kubectl.
  • Switch to a command line interface (CLI).
  • Create the VM using the following command:
kubectl apply -f vm-sample.yaml --kubeconfig ves_system_pk8s4422_kubeconfig_global.yaml

Note: This example uses the --kubeconfig option to be more explicit. You can leave this option out if you 1) name your kubeconfig file config and place it in your $HOME/.kube directory, or 2) use the KUBECONFIG environment variable.

Step 4: Interact with the VM on the CLI.

Once the VM is applied to the K8s cluster, you can interact with it using an F5 Distributed Cloud CLI tool called virtctl. You can use virtctl to control the VM with a number of commands:

  • start
  • stop
  • pause
  • restart
  • console
  • etc.

To get virtctl, download the binary using one of the following commands (get the darwin version for macOS or the linux version for a Linux system):

curl -LO "https://downloads.volterra.io/releases/virtctl/$(curl -s https://downloads.volterra.io/releases/virtctl/latest.txt)/virtctl.darwin-amd64" 

curl -LO "https://downloads.volterra.io/releases/virtctl/$(curl -s https://downloads.volterra.io/releases/virtctl/latest.txt)/virtctl.linux-amd64"

Next, make the binary executable.

chmod +x virtctl.darwin-amd64 

Interact with your VM with commands such as the following:

./virtctl.darwin-amd64 pause <vm-name> --kubeconfig <path-to-kubeconfig>

Note: You can run virtctl with no parameters to see the full list of available commands.

Note: Console also provides some controls for your VM. See Monitor your Managed K8s.

Step 5: Delete your VM.

You can delete (destroy) the VM using the following command:

kubectl delete -f vm-sample.yaml --kubeconfig ves_system_pk8s4422_kubeconfig_global.yaml

Example: CI/CD Using In-Cluster Service Account

This example provides steps to set up CI/CD jobs for your app deployments using an in-cluster service account to access the local K8s API for the managed K8s clusters on App Stack sites.

Perform the following to set up the GitLab runner for your apps on App Stack sites using managed K8s:

Step 1: Start creating a K8s cluster object.
  • Navigate to Manage > Manage K8s > K8s Clusters.

  • Click Add K8s Cluster.

  • Enter a name in the Metadata section.

  • In the Access section, select Enable Site Local API Access option from the Site Local Access menu.

  • In the Local Domain field, enter a local domain name for the K8s cluster in the <sitename>.<localdomain> format.

  • From the VoltConsole Access menu, select the Enable VoltConsole API Access option.

Figure: Enable Local K8s and Distributed Cloud Console Access

Step 2: Add role and role binding for your service account.

First create a role with policy rules that grant full permissions to all resources. Then create a role binding from this role to your service account.

Step 2.1: Create K8s cluster role.
  • From the K8s Cluster Roles menu in the Security section, select the Custom K8s Cluster Roles option.

  • Click in the Cluster roles field and then click Create new K8s Cluster Role to create and attach a role. Configure the cluster role object per the following guidelines:

  • Enter a name for your cluster role object in the Metadata section.

  • In the Cluster Role section, select Policy Rule List in the Rule Type menu.

  • Click Add Item.

  • Set policy rules in the cluster role sections allowing access to all resources as shown in the following image:

Figure: K8s Cluster Role Rules

  • Click Add Item at the bottom of the form to save the policy rule.

  • Click Continue at the bottom of the form to save the cluster role.

Step 2.2: Create role binding.

Create a role binding and then attach your role to the role binding for the service account specified in the system:serviceaccount:$RUNNER_NAMESPACE:default format. This example uses the test namespace.

  • Select the K8s Cluster Role Bindings option from the K8s Cluster Role Bindings menu.

  • Click in the Cluster role bindings field, and then click Create new K8s Cluster Role Binding to create and attach it.

  • Enter a name in the Metadata section.

  • From the K8s Cluster Role menu, select the role you created in the previous step.

  • In the Subjects section, click Add Item.

  • From the Select Subject menu, select Service Account.

  • Enter a namespace and service account name in the Namespace and Name fields, respectively. This example sets test namespace and system:serviceaccount:test:default as the service account name.

Figure: K8s Cluster Role Binding with Service Account as Subject

  • Click Add Item.

  • Click Continue to create and assign the K8s cluster role binding.

Step 3: Complete cluster creation.

Verify that the K8s cluster role and role binding are applied to the cluster configuration and click Save and Exit.

Figure: K8s Cluster with Role and Role Binding

Step 4: Create an App Stack site to attach the cluster created in the previous step.
  • Click Manage > Site Management > App Stack Sites.

  • Click Add App Stack Site to start creating an App Stack Site.

  • Configure your App Stack site per the guidelines provided in the Create App Stack Site guide.

  • In the Advanced Configuration section, enable the Show Advanced Fields option.

  • Select Enable Site Local K8s API access from the Site Local K8s API access menu.

  • Click on the Enable Site Local K8s API access field and select the cluster you created in the previous step from the list of clusters displayed.

Figure: App Stack Site Enabled with Site Local K8s Access

  • Click Save and Exit.

Step 5: Register your App Stack site and download the kubeconfig file.

Deploy a site matching the name and hardware device you defined in the App Stack site.

  • Click Manage > Site Management > Registrations and approve your App Stack site registration.

  • Check that your site shows up in the Sites > Site List view.

  • Click Managed K8s > Overview and find your K8s cluster. Click ... under the Actions column for your cluster to download its kubeconfig. You can download the local or global kubeconfig file.

  • Ensure that the domain name you specified in the K8s cluster is resolved.

Note: For example, you can add an entry in the /etc/hosts file for your domain with the VIP of the K8s cluster object. You can obtain the FQDN and IP address from the kubeconfig file and Console (node IP address from your App Stack site Nodes view), respectively. However, it is recommended that you manage DNS resolution for your domains.

Step 6: Deploy a GitLab runner onto your K8s cluster.
  • Download the values.yaml file for the GitLab runner Helm chart:
curl https://gitlab.com/gitlab-org/charts/gitlab-runner/-/raw/master/values.yaml > values.yaml
  • Add your GitLab URL and runner registration token to the values.yaml file:
echo "gitlabUrl: https://gitlab.com/" >> values.yaml
echo "runnerRegistrationToken: foobar" >> values.yaml

Note: Replace foobar with your registration token.
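After the commands above, the end of values.yaml contains the two added settings:

```yaml
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: foobar   # replace with your registration token
```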

  • Set the KUBECONFIG environment variable to the downloaded kubeconfig file.
export KUBECONFIG=<PK8S-Kubeconfig>
  • Deploy GitLab runners onto your K8s cluster using the kubeconfig file and the values.yaml file. The commands depend on the Helm version.

For Helm 2:

helm install --namespace <PK8_NAMESPACE> --name gitlab-runner -f values.yaml gitlab/gitlab-runner

For Helm 3:

helm install --namespace <PK8_NAMESPACE> gitlab-runner -f values.yaml gitlab/gitlab-runner

Note: You can create a namespace in your K8s cluster using kubectl.
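For example, a namespace manifest such as the following (the name is hypothetical) can be applied with kubectl using the downloaded kubeconfig:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gitlab-runner   # hypothetical namespace name
```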

Step 7: Verify that the runners are operational.

Check that the pods are started in Console.

  • Click Managed K8s > Overview.

  • Click the name of your App Stack site.

  • Select the Pods tab to check that the runner pods are started.

  • Go to your GitLab CI/CD page and verify that the same runners are created there.

  • Go to your project in GitLab and navigate to Settings > CI/CD.

  • Click Expand in the Runners section.

  • Check that your runners appear under Available Specific Runners.

Figure: GitLab Runners

Note: The names of the runners are the same as the runner pod names in Console.


Concepts


API References