Create and Deploy Managed K8s
Objective
This document provides instructions on how to create a managed Kubernetes (K8s) cluster and deploy it on an F5® Distributed Cloud App Stack site. A managed K8s cluster is similar in principle to regular K8s, and you can use command-line interface (CLI) tools like kubectl to perform operations that are common to regular K8s. The F5 Distributed Cloud Platform provides a mechanism to easily deploy applications using managed K8s across App Stack sites that form DC clusters. To learn more about deploying an App Stack site, see Create App Stack Site.
Using the instructions provided in this guide, you can create a managed K8s cluster, associate it with an App Stack site, and deploy applications using its kubeconfig file.
Note: Managed K8s is also known as physical K8s.
Virtual K8s and Managed K8s
You can use both Distributed Cloud Virtual K8s (vK8s) and managed K8s for your applications. The major difference is that managed K8s can be created only on sites with App Stack functionality (for cloud sites, the App Stack Cluster site type), while vK8s can be created on all types of Distributed Cloud sites, including Distributed Cloud Regional Edge (RE) sites. Also, managed K8s spans all namespaces of an App Stack site, and you can manage its K8s operations using a single kubeconfig, whereas vK8s is created per namespace per site. While you can use vK8s and managed K8s in the same site, operations on the same namespace using both are not supported. See Restrictions for more information.
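For example, because a single kubeconfig covers the whole site, you can work in any namespace of the managed K8s cluster with standard tooling. A minimal sketch, assuming you have already downloaded the managed K8s kubeconfig (the file and namespace names are placeholders):

# list all namespaces on the App Stack site using the single managed K8s kubeconfig
kubectl get namespaces --kubeconfig k8s-kubecfg.yaml
# operate in any of those namespaces, for example list pods in a namespace named my-app
kubectl get pods -n my-app --kubeconfig k8s-kubecfg.yaml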
Reference Architecture
The image below shows the reference architecture for the CI/CD jobs for app deployments to production and development environments. The production and development environments are established in different App Stack sites. Each site has CI/CD jobs and apps deployed in separate namespaces. The git-ops-prod and git-ops-dev namespaces have CI/CD jobs, such as ArgoCD, Concourse CI, Harbor, etc. These are integrated using the in-cluster service account. Services such as the ArgoCD UI can be advertised on a site local network, and users can access it for monitoring the CD dashboard.

This image shows different types of service advertisements and communication paths for services deployed in different namespaces in the sites:

Note: You can disable communication between services of different namespaces.
Prerequisites
- An F5 Distributed Cloud Account. If you do not have an account, see Create an Account.
- A K8s application deployment manifest.
Restrictions
The following restrictions apply:
- Using vK8s and managed K8s in the same site is supported. However, if a namespace is already used locally by a managed K8s cluster, then any object creation in that namespace using vK8s is not supported for that site. Conversely, if a namespace is already used by vK8s, then operations on it in the local managed K8s cluster are not supported.
- Managed K8s is supported only for an App Stack site and is not supported for other sites.
- Managed K8s can be enabled only by applying it before the App Stack site is provisioned. Enabling it by updating an existing App Stack site is not supported.
- Managed K8s cannot be disabled once it is enabled.
- For managed K8s, Role and RoleBinding operations are supported via kubectl. However, ClusterRole, ClusterRoleBinding, and PodSecurityPolicy are not supported via kubectl; they can be configured only through F5 Distributed Cloud Console.
Note: As BGP advertisement for VIPs is included by default in Cloud Services-managed K8s, the NodePort service type is not required. The Kubernetes service types LoadBalancer and NodePort do not advertise the service outside the K8s cluster; they function in the same way as the ClusterIP service type.
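As noted in the restrictions, Role and RoleBinding objects can be managed with kubectl on a managed K8s cluster. The following is a minimal sketch of a namespaced Role and RoleBinding manifest; the file name, object names, and the demo namespace are placeholders, not part of this document:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # example name
  namespace: demo           # example namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding  # example name
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: default
    namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader

You could apply such a manifest with the managed K8s kubeconfig, for example: kubectl apply -f rbac.yaml --kubeconfig kubeconfig_global.yml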
Configuration
Enabling a managed K8s cluster on an App Stack site requires you to first create the K8s cluster object and then apply it during App Stack site creation. You can also create a new managed K8s cluster as part of the App Stack site creation process. This document shows how to create the K8s cluster separately and attach it to an App Stack site during site creation.
Create Managed K8s Cluster
Perform the following steps to create a managed K8s cluster:
Step 1: Start K8s cluster object creation.
- Log into Console.
- Click Multi-Cloud Network Connect.
- Click Manage > Manage K8s > K8s Clusters.
- Click Add K8s Cluster.

Step 2: Configure metadata and access sections.
- In the Metadata section, enter a name.
- Optionally, set labels and add a description.
- In the Access section, select the Enable Site Local API Access option from the Site Local Access menu. This enables local access to the K8s cluster.
- In the Local Domain field, enter a local domain name for the K8s cluster in the <sitename>.<localdomain> format. The local K8s API server will become accessible via this domain name.
- From the Port for K8s API Server menu, select Custom K8s Port and enter a port value in the Custom K8s Port field. This example uses the default option Default k8s Port.

- From the VoltConsole Access menu, select the Enable VoltConsole API Access option.

Note: You can download the global kubeconfig for the managed K8s cluster only when you enable VoltConsole API access. Also, if you do not enable the API access, monitoring of the cluster is done via metrics.
Step 3: Configure security settings.
The security configuration is enabled with default settings for pod security policies, K8s cluster roles, and K8s cluster role bindings. Optionally, you can enable custom settings for these fields.
Step 3.1: Configure custom pod security policies.
- In the Security section, select the Custom Pod Security Policies option from the Pod Security Policies menu.
- Use the Pod Security Policy List drop-down menu to select a K8s pod security policy from the list of displayed options, or create a new policy and attach it. This example shows creating a new policy.
Note: Using the default pod security policy allows all of your workloads. If you configure a custom policy, everything is disabled, and you must explicitly configure pod security policy rules to allow workloads.
Create a new policy per the following guidelines:
- Click in the Pod Security Policy List field, and then click Add Item to open a new policy form.

- In the Metadata section, enter a name.
- Optionally, set labels and add a description.
- Under the Pod Security Policy Specification field, click View Configuration and perform the following:
Step 3.1.1: Optionally, configure the privileges and capabilities.
In the Privilege and Capabilities section, configure the options per the following guidelines:
- Enable the Privileged, Allow Privilege Escalation, and Default Allow Privilege Escalation options.
- From the Change Default Capabilities menu, select the Custom Default Capabilities option.
- In the Capability List field, select the See Common Values option from the Enter drop-down menu. Select an option from the list. You can add more choices using the Add item option.
- From the Allowed Add Capabilities menu, select the Allowed Add Capabilities option.
- In the Capability List field, select the See Common Values option from the Enter drop-down menu. Select an option from the list. You can add more choices using the Add item option.
- From the Drop from K8s Default Capabilities menu, select the Drop Capabilities option.
- In the List of Capability List field, select the See Common Values option from the Enter drop-down menu. Select an option from the list. You can add more choices using the Add item option.
Step 3.1.2: Optionally, configure the volumes and mounts.
Configure the Volumes and Mounts section per the following guidelines:
- Click Add item under the Volume, Allowed Flex Volumes, Host Path Prefix, and Allowed Proc Mounts fields. Enter the values for those fields. You can add multiple entries using the Add item option for each of these fields.

Note: Leaving an empty value for Volumes disables any volumes. For the rest of the fields, the default values are applied. In case of Host Path Prefix, you can turn on the Read Only slider to mount a read-only volume.

- Enable the Read Only Root Filesystem option so that containers run with a read-only root file system.
Step 3.1.3: Optionally, configure host access and sysctl.
Configure the Host Access and Sysctl section per the following guidelines:
- Enable the Host Network, Host IPC, and Host PID options to allow the use of host network, host IPC, and host PID in the pod spec.
- Enter port ranges in the Host Ports Ranges field to expose those host ports.
Step 3.1.4: Optionally, configure security context.
Configure the Security Context section per the following guidelines:
- From the Select Runs As User menu, select Run As User.
- From the Select Runs As Group menu, select Run As Group.
- From the Select Supplemental Groups menu, select Supplemental Groups Allowed.
- From the Select FS Groups menu, select FS Groups Allowed.
- For each of the fields above, enter the following configuration:
  - Click Add item and enter ID values in the Starting ID and Ending ID fields. You can add more ranges using the Add item option.
  - From the Rule menu, select the See Common Values option to expand the choices.
  - Select one option only from MustRunAs, MayRunAs, or RunAsAny.
- Click Apply.
- Click Continue to create and apply the pod security policy to the K8s cluster.
Note: You can add more pod security policies using the Add item option.
Step 3.2: Configure K8s cluster role.
- From the K8s Cluster Roles menu, select the Custom K8s Cluster Roles option.
- Click in the Cluster Role List field and select a role from the displayed list, or click Add Item to create and attach it. This example shows creating a new cluster role.
Configure the cluster role object per the following guidelines:
- In the Metadata section, enter a name.
- Optionally, set labels and add a description.
- In the Cluster Role section, select Policy Rule List or Aggregate Rule from the Rule Type menu.
  - For the Policy Rule List option, click Add Item.
  - From the Select Resource menu, select List of Resources or List of Non Resource URL(s).
    - For the List of Resources option, perform the following:
      - In the List of API Groups field, click in the Enter API Groups menu, and then click See Common Values. Select an option. You can add more than one list using the Add item option.
      - In the Resource Types field, click in the Enter Resources Types menu, and then click See Common Values. Select an option. You can add more than one resource using the Add item option.
      - In the Resource Instances field, click Add item. Enter a list of resource instances. You can add more than one resource using the Add item option.
      - In the Allowed Verbs field, click in the Enter Allowed Verbs menu, and then click See Common Values. Select an option. You can add more than one entry using the Add item option. Alternatively, you can enter the asterisk symbol (*) to allow all operations on the resources.
    - For the List of Non Resource URL(s) option, perform the following:
      - Enter URLs that do not represent K8s resources in the Non Resource URL(s) field. You can add more than one entry using the Add item option.
      - Enter the allowed list of operations in the Allowed Verbs field. You can add more than one entry using the Add item option. Alternatively, you can enter the asterisk symbol (*) to allow all operations on the resources.
      - Click Apply.

Note: You can add more than one list of resources in case of the Policy Rule List option.

  - For the Aggregate Rule option, click on the Selector Expression field and set the label expression by performing the following:
    - Select a key or type a custom key by clicking Add label.
    - Select an operator and select a value or type a custom value.

Note: You can add more than one label expression for the aggregate rule. This aggregates all rules in the roles selected by the label expressions.

- Click Continue to create and assign the K8s cluster role.

Note: You can add more cluster roles using the Add item option.
Step 3.3: Configure K8s cluster role bindings.
- From the K8s Cluster Role Bindings menu, select the Custom K8s Cluster Role Bindings option.
- Click on the Cluster Role Bindings List field and then select a role binding from the displayed list, or click Add item to create and attach it. This example shows creating a new cluster role binding.
Configure the cluster role binding per the following guidelines:
- Enter a name in the Metadata section.
- From the K8s Cluster Role menu, select the role you created in the previous step.
- In the Subjects section, select one of the following options in the Select Subject field:
  - Click Add item.
  - Select User and then enter a user in the User field.
  - Select Service Account and then enter a namespace and service account name in the Namespace and Name fields, respectively.
  - Select Group and enter a group in the Group field.
  - Click Apply.
- Click Continue to create and assign the K8s cluster role binding.

Note: You can add more cluster role bindings using the Add item option.
Step 3.4: Advanced K8s cluster security settings.
- If you have Docker insecure registries for this cluster, select Docker insecure registries from the Docker insecure registries menu, and enter the insecure registry into the list. Use Add item to add more insecure registries.
- If you want to allow cluster-scoped roles, role bindings, MutatingWebhookConfiguration, and ValidatingWebhookConfiguration access, select the Allow K8s API Access to ClusterRoles, ClusterRoleBindings, MutatingWebhookConfiguration and ValidatingWebhookConfiguration option.
Note: Once webhooks are allowed, you can use kubectl against any managed K8s global kubeconfig to set up the webhook. For example, to create a webhook:
$ kubectl --kubeconfig kubeconfig_global.yml create -f https://raw.githubusercontent.com/chaos-mesh/chaos-mesh/master/config/webhook/manifests.yaml
And to delete the webhook:
$ kubectl --kubeconfig kubeconfig_global.yml delete -f https://raw.githubusercontent.com/chaos-mesh/chaos-mesh/master/config/webhook/manifests.yaml
Step 4: Configure cluster-wide applications.
The default option is set to No Cluster Wide Applications.
- To configure this option, select Add Cluster Wide Applications from the K8s Cluster Wide Applications menu.
- Click Configure.
- Click Add Item.
- From the Select Cluster Wide Application menu, select an option.
- If you select Argo CD, click Configure, complete the settings, and then click Apply.
- After you finish, click Add Item.
- Click Apply.
Step 5: Complete creating the K8s cluster.
Click Save and Exit to complete creating the K8s cluster object.
Attach K8s Cluster to App Stack Site
Perform the following steps to attach a K8s cluster:
Note: This example does not show all the steps required for App Stack site creation. For complete instructions, see Create App Stack Site.
Step 1: Start creating the App Stack site.
- Log into Console.
- Click Multi-Cloud Network Connect.
- Click Manage > Site Management > App Stack Sites.
- Click Add App Stack Site.
Step 2: Attach the K8s cluster.
- In the Advanced Configuration section, enable the Show Advanced Fields option.
- From the Site Local K8s API access menu, select Enable Site Local K8s API access.
- Click on the Enable Site Local K8s API access field and select the K8s cluster created in the previous section.
Step 3: Complete creating App Stack site.
- Install nodes and complete registration for the App Stack site. For more information, see the Perform Registration chapter of the Create App Stack Site document.
- Click Save and Exit to complete creating the App Stack site.
Step 4: Download the kubeconfig file for the K8s cluster.
- Navigate to Managed K8s > Overview.
- Click ... for your App Stack site enabled with managed K8s and perform one of the following:
  - Select Download Local Kubeconfig for managing your cluster locally, when the cluster is on an isolated network.
  - Select Download Global Kubeconfig for managing your cluster remotely from anywhere.

Note: The Download Global Kubeconfig option is enabled only when you enable Console API access.
- Save the kubeconfig file to your local machine.
You can use this kubeconfig file for performing operations on local K8s. This is similar to regular K8s operations using tools like kubectl.
Note: You may have to manage name resolution for your domain for K8s API access. The local kubeconfig expires in 15 days regardless of your credential expiration policy; you will need to download a new kubeconfig after that.
Step 5: Deploy applications to the managed K8s cluster.
Prepare a deployment manifest for your application and deploy using the kubeconfig file downloaded in the previous step.
- Type the following command:

kubectl apply -f k8s-app-manifest.yaml --kubeconfig k8s-kubecfg.yaml

- To verify deployment status, type the following command:

kubectl get pods --kubeconfig k8s-kubecfg.yaml
Note: If you are using the local kubeconfig to manage the cluster, ensure that you resolve the domain name of the cluster to the IP address of the cluster. You can obtain the domain name from the kubeconfig file.
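If you do not already have a deployment manifest, the following is a minimal sketch of what k8s-app-manifest.yaml could contain; the application name and image are placeholders, not part of this document:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app            # example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: nginx:1.25   # example image
          ports:
            - containerPort: 80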
Manage PK8s with VMs
Managed K8s (also known as physical K8s or pK8s) can be provisioned with the ability to run virtual machines (VMs) in Kubernetes; App Stack sites also support VMs.
KubeVirt allows your virtual machine workloads to run as pods inside a Kubernetes cluster. This allows you to manage them with Kubernetes without having to convert them to containers.
Note: You can also configure multiple interfaces for Virtual Machines (VM) or containers running in a K8s cluster within an App Stack Site. For instructions, see Create Workloads with Multiple Network Interfaces.
Step 1: Download Kubeconfig file for your pk8s.
- Log into Console.
- Select the Multi-Cloud Network Connect service.
- Click Managed K8s > Overview and find your K8s cluster.
- Click ... under the Actions column for your cluster to download its kubeconfig. You can download the local or global kubeconfig file.
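To confirm the downloaded kubeconfig works before applying any manifests, a quick check (the kubeconfig file name below is an example):

kubectl get nodes --kubeconfig ves_system_pk8s4422_kubeconfig_global.yaml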

Step 2: Create a YAML manifest for your VM configuration.
Below is a sample VM configuration. For this example, the YAML manifest is named vm-sample.yaml.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
Step 3: Create the VM using kubectl.
- Switch to a command-line interface (CLI).
- Create the VM using the following command:

kubectl apply -f vm-sample.yaml --kubeconfig ves_system_pk8s4422_kubeconfig_global.yaml
Note: This example uses the --kubeconfig option to be more explicit. You can leave this option out if you 1) name your kubeconfig file config and place it in your $HOME/.kube directory, or 2) use the KUBECONFIG environment variable.
Step 4: Interact with the VM on the CLI.
Once the VM is applied to the K8s cluster, you can interact with it using an F5 Distributed Cloud CLI tool called virtctl. You can use virtctl to control the VM with a number of commands:
- start
- stop
- pause
- restart
- console

- To get virtctl, download the binary using one of the following commands (get the darwin version for macOS or the linux version for a Linux system):
curl -LO "https://downloads.volterra.io/releases/virtctl/$(curl -s https://downloads.volterra.io/releases/virtctl/latest.txt)/virtctl.darwin-amd64"
curl -LO "https://downloads.volterra.io/releases/virtctl/$(curl -s https://downloads.volterra.io/releases/virtctl/latest.txt)/virtctl.linux-amd64"
- Make the binary executable.
chmod +x virtctl.darwin-amd64
- Interact with your VM with commands, such as the following:
./virtctl.darwin-amd64 pause <vm-name> --kubeconfig <path-to-kubeconfig>
Note: You can run virtctl with no parameters to see the full list of available commands. Console also provides some controls for your VM. See Monitor your Managed K8s.
Step 5: Delete your VM.
You can delete (destroy) the VM using the following command:
kubectl delete -f vm-sample.yaml --kubeconfig ves_system_pk8s4422_kubeconfig_global.yaml
Export VMs on PK8s
Step 1: Create a PersistentVolumeClaim (PVC) to be used by the machine.
Use kubectl to create the PVC, as follows:
kubectl apply -f <manifest.yaml> --kubeconfig <kubeconfig of Pk8s>
The manifest.yaml file is shown below. The access mode shown is an assumption (it is not specified here otherwise); set it to match your storage requirements:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce   # assumed access mode
  resources:
    requests:
      storage: 8Gi
Step 2: Create a VM.
Use kubectl to create the VM:
kubectl apply -f <manifest.yaml> --kubeconfig <kubeconfig of Pk8s>
The manifest.yaml file:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1024M
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
            - name: emptydisk
              disk:
                bus: virtio
            - name: data
              disk:
                bus: virtio
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: fedora-data
        - name: emptydisk
          emptyDisk:
            capacity: "2Gi"
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              ssh_authorized_keys:
                - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDyqbOXVV1s1eLCeasY9rl05uo+aCy0mgIZMff1o53UB8y5wniCJ+angW0soMBu0V51iNwn1wlgnC5LGfIetdcpGNrBHx9z4Mbiyq9uuh+tgfKS+hT4UMXc5l8tdlxbMoOtwe4MgtKA1/iFN6bLW+6xeVniqpppscOpug0zuChEtmy24xUCZzWPDKl56U5z6kP4HNLyq942DukqDo6csjR2qEagqqTrQoGNRuabKqhhKq/o4CX8ql37CwgR7EZpjS9iEmRPdNQ70FYoxYcW6EB+iYAUjnaOhUHytsImT9J7jNJDytzcqQlhyS/h/iIvVnh73trX3Od10WKIKhkhrCOD root@prague-karlin-01
Step 3: Attach the volume (referenced by PVC).
Attach the volume referenced by the PVC to the VM.
Step 4: Write some data to the volume (content for the snapshot of the VM).
Add data so that the snapshot contains data when exported.
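One way to do this, sketched here as an assumption rather than a prescribed procedure, is to open a console to the running VM with virtctl and write a file to the attached disk (the device name inside the guest may differ):

./virtctl.linux-amd64 console fedora --kubeconfig <kubeconfig of Pk8s>
# inside the guest, for example (adjust the device name, e.g. /dev/vdd):
#   sudo mkfs.ext4 /dev/vdd
#   sudo mount /dev/vdd /mnt
#   echo "snapshot test data" | sudo tee /mnt/test.txt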
Step 5: Stop the VM.
Use virtctl stop or the stop option of managed K8s monitoring to stop the VM. See Monitor your Managed PK8s for more details.
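For example, using the virtctl binary downloaded earlier (the binary and kubeconfig names are placeholders):

./virtctl.linux-amd64 stop fedora --kubeconfig <kubeconfig of Pk8s>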
Step 6: Create a Secret and a VM export.
First create the secret:
kubectl apply -f <manifest.yaml>
manifest.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: example-token
  namespace: default
stringData:
  token: 1234567890ab
Next, create the VM export:
kubectl apply -f <manifest.yaml>
manifest.yaml file:
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: example-export
  namespace: default
spec:
  tokenSecretRef: example-token
  source:
    apiGroup: "kubevirt.io"
    kind: VirtualMachine
    name: fedora
Step 7: Verify the details of the export service.
Use kubectl to view information on the pods and services.
kubectl get po,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/virt-export-example-export   2/2     Running   0          91m

NAME                                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes                   ClusterIP   10.3.0.1      <none>        443/TCP   5d2h
service/virt-export-example-export   ClusterIP   10.3.66.241   <none>        443/TCP   91m
Use kubectl to view the details of the VM export.
kubectl describe vmexport
Name:         example-export
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  export.kubevirt.io/v1alpha1
Kind:         VirtualMachineExport
Metadata:
  Creation Timestamp:  2023-02-01T12:27:21Z
  Generation:          3
  Manager:             virt-controller
  Operation:           Update
  Time:                2023-02-01T12:27:48Z
  Resource Version:    2037069
  Self Link:           /apis/export.kubevirt.io/v1alpha1/namespaces/default/virtualmachineexports/example-export
  UID:                 bf233d36-8506-4c17-b832-b176f3f75e76
Spec:
  Source:
    API Group:       kubevirt.io
    Kind:            VirtualMachine
    Name:            fedora
  Token Secret Ref:  example-token
Status:
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  2023-02-01T12:27:48Z
    Reason:                PodReady
    Status:                True
    Type:                  Ready
    Last Probe Time:       <nil>
    Last Transition Time:  2023-02-01T12:27:21Z
    Reason:                Unknown
    Status:                False
    Type:                  PVCReady
  Links:
    Internal:
      Cert:  -----BEGIN CERTIFICATE-----
MIIDFDCCAfygAwIBAgIIH+Tn2GKetiAwDQYJKoZIhvcNAQELBQAwKDEmMCQGA1UE
...
pek+ufsPtp8rYxdcRcxex+f0ZwzkFoVd
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDFDCCAfygAwIBAgIIRCCSFn/EAG4wDQYJKoZIhvcNAQELBQAwKDEmMCQGA1UE
...
0Kx2yQT1nDavnRGnKjddclk8wJmYSqLj
-----END CERTIFICATE-----
      Volumes:
        Formats:
          Format:  dir
          URL:     https://virt-export-example-export.default.svc/volumes/fedora-data/dir
          Format:  tar.gz
          URL:     https://virt-export-example-export.default.svc/volumes/fedora-data/disk.tar.gz
        Name:      fedora-data
  Phase:           Ready
  Service Name:    virt-export-example-export
  Token Secret Ref:  example-token
Events:  <none>
Step 8: Download the image using the IP address.
Get the image:
wget --no-check-certificate --header="x-kubevirt-export-token: 1234567890ab" https://10.3.66.241/volumes/fedora-data/disk.tar.gz
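The downloaded archive contains the disk image; a hedged extraction step, assuming the archive unpacks to disk.img:

tar -xzf disk.tar.gz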
Verify the image location:
ls -lah disk.img
-rw-r--r--. 1 root root 8.0G Feb 1 10:50 disk.img
Example: CI/CD Using In-Cluster Service Account
This example provides steps to set up CI/CD jobs for your app deployments using an in-cluster service account to access the local K8s API of managed K8s clusters on App Stack sites.
Perform the following to set up the GitLab runner for your apps on App Stack sites using managed K8s:
Step 1: Start creating K8s cluster object.
- Navigate to Manage > Manage K8s > K8s Clusters.
- Click Add K8s Cluster.
- Enter a name in the Metadata section.
- In the Access section, select the Enable Site Local API Access option from the Site Local Access menu.
- In the Local Domain field, enter a local domain name for the K8s cluster in the <sitename>.<localdomain> format.
- From the VoltConsole Access menu, select the Enable VoltConsole API Access option.

Step 2: Add role and role binding for your service account.
First, create a role with policy rules that grant full permissions to all resources. Then create a role binding that binds this role to your service account.
Step 2.1: Create K8s cluster role.
- From the K8s Cluster Roles menu in the Security section, select the Custom K8s Cluster Roles option.
- Click in the Cluster Role List field and then click Add Item to create and attach a role. Configure the cluster role object per the following guidelines:
  - Enter a name for your cluster role object in the Metadata section.
  - In the Cluster Role section, select Policy Rule List in the Rule Type menu.
  - Click Add Item.
  - Set policy rules in the cluster role sections allowing access to all resources.

- Click Apply at the bottom of the form to save the policy rule.
- Click Continue at the bottom of the form to save the cluster role.
Step 2.2: Create role binding.
- Create a role binding and then attach your role to the role binding for the service account specified in the system:serviceaccount:$RUNNER_NAMESPACE:default format. This example uses the test namespace.
- Select the Custom K8s Cluster Role Bindings option from the K8s Cluster Role Bindings menu.
- Click in the Cluster Role Bindings List field, and then click Add Item to create and attach it:
  - Enter a name in the Metadata section.
  - From the K8s Cluster Role menu, select the role you created in the previous step.
  - In the Subjects section, click Add Item.
  - From the Select Subject menu, select Service Account.
- Enter a namespace and service account name in the Namespace and Name fields, respectively. This example sets the test namespace and system:serviceaccount:test:default as the service account name.

- Click Apply.
- Click Continue to create and assign the K8s cluster role binding.
Step 3: Complete cluster creation.
Verify that the K8s cluster role and role binding are applied to the cluster configuration and click Save and Exit.

Step 4: Create an App Stack site to attach the cluster created in the previous step.
- Click Manage > Site Management > App Stack Sites.
- Click Add App Stack Site to start creating an App Stack site.
- Configure your App Stack site per the guidelines provided in the Create App Stack Site guide.
- In the Advanced Configuration section, enable the Show Advanced Fields option.
- Select Enable Site Local K8s API access from the Site Local K8s API access menu.
- Click on the Enable Site Local K8s API access field and select the cluster you created in the previous step from the list of clusters displayed.

- Click Save and Exit.
Step 5: Register your App Stack site and download the kubeconfig file.
Deploy a site matching the name and hardware device you defined in the App Stack site.
- Click Manage > Site Management > Registrations and approve your App Stack site registration.
- Check that your site shows up in the Sites > Site List view.
- Click Managed K8s > Overview and find your K8s cluster.
- Click ... under the Actions column for your cluster to download its kubeconfig. You can download the local or global kubeconfig file.
- Ensure that the domain name you specified in the K8s cluster is resolved.
Note: For example, you can add an entry in the /etc/hosts file for your domain with the VIP of the K8s cluster object. You can obtain the FQDN and IP address from the kubeconfig file and Console (node IP address from your App Stack site Nodes view), respectively. However, it is recommended that you manage DNS resolution for your domains.
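A hedged example of such an /etc/hosts entry (the IP address and domain are placeholders):

192.168.10.20   mysite.mylocaldomain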
Step 6: Deploy a GitLab runner onto your K8s cluster.
- Download the GitLab runner reference values file. See GitLab Runner Helm Chart for more information.
curl https://gitlab.com/gitlab-org/charts/gitlab-runner/-/raw/master/values.yaml > values.yaml
- Enter your GitLab URL and runner registration token into the values.yaml file.
echo "gitlabUrl: https://gitlab.com/" >> values.yaml
echo "runnerRegistrationToken: foobar" >> values.yaml
Note: Replace foobar with your registration token.
- Set the KUBECONFIG environment variable to the downloaded kubeconfig file.
export KUBECONFIG=<PK8S-Kubeconfig>
- Deploy GitLab runners onto your K8s cluster using the kubeconfig file and the values.yaml file. The commands depend on the Helm version.
For Helm 2:
helm install --namespace <PK8_NAMESPACE> --name gitlab-runner -f values.yaml gitlab/gitlab-runner
For Helm 3:
helm install --namespace <PK8_NAMESPACE> gitlab-runner -f values.yaml gitlab/gitlab-runner
Note: You can create a namespace in your K8s cluster using kubectl.
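For example, a sketch of creating the namespace with the downloaded kubeconfig (placeholders as above):

kubectl create namespace <PK8_NAMESPACE> --kubeconfig <PK8S-Kubeconfig>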
Step 7: Verify that the runners are operational.
Check that the pods are started in Console.
- Click Managed K8s > Overview.
- Click the name of your App Stack site.
- Select the Pods tab to check that the runner pods are started.
- Go to your GitLab CI/CD page and verify that the same runners are created there.
- Go to your project in GitLab and navigate to Settings > CI/CD.
- Click Expand in the Runners section.
- Check that your runners appear under Available Specific Runners.

Note: The names of the runners are the same as the runner pod names in Console.
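Alternatively, a hedged CLI check of the runner pods using the downloaded kubeconfig (namespace placeholder as above):

kubectl get pods -n <PK8_NAMESPACE> --kubeconfig <PK8S-Kubeconfig>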