Secure Kubernetes Gateway

Objective

This guide provides instructions on how to create a Secure Kubernetes Gateway using F5® Distributed Cloud Console and F5 Distributed Cloud Mesh.

The steps to create Secure Kubernetes Gateway are:

SeqSecK8GW
Figure: Steps to Deploy Secure Kubernetes Gateway

The following image shows the topology of the example for the use case provided in this document:

TopSecK8GW
Figure: Secure Kubernetes Gateway Sample Topology

Using the instructions provided in this guide, you can deploy a Secure K8s Gateway in your Amazon Virtual Private Cloud (Amazon VPC), discover the cluster services from that VPC, set up a load balancer for them, and secure those services with a JavaScript challenge and a Web Application Firewall (WAF).

The example shown in this guide deploys the Secure K8s Gateway on a single VPC for an application called hipster-shop deployed in an EKS cluster. The application consists of the following services:

  • frontend
  • cartservice
  • productcatalogservice
  • currencyservice
  • paymentservice
  • shippingservice
  • emailservice
  • checkoutservice
  • recommendationservice
  • adservice
  • cache

Prerequisites

  • F5 Distributed Cloud Console SaaS account.

    Note: If you do not have an account, see Create an Account.

  • Amazon Web Services (AWS) account.

    Note: This is required to deploy a Distributed Cloud site.

  • F5 Distributed Cloud vesctl utility.

    Note: See vesctl for more information.

  • Docker.

  • Self-signed or CA-signed certificate for your application domain.

  • AWS IAM Authenticator.

    Note: See IAM Authenticator Installation for more information.
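A quick way to confirm that the command-line prerequisites above are available is to probe for them on PATH. This is only a sketch; the binary names below are the common defaults and may differ on your system:

```shell
# Check that the prerequisite CLI tools are reachable on PATH.
# Binary names are the usual defaults; adjust if yours differ.
for tool in docker vesctl aws-iam-authenticator; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool" >&2
  fi
done
```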


Configuration

The use case provided in this guide sets up a Distributed Cloud site as a Secure K8s Gateway for the ingress and egress traffic of the K8s cluster deployed in the Amazon VPC. The example web application has a front-end service that receives all user requests and redirects them to the other services accordingly. The following actions outline the activities in setting up the Secure K8s Gateway:

  1. The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created with an origin pool pointing to the frontend service on the EKS cluster.

  2. The domain of the load balancer is delegated to Distributed Cloud Services in order to manage the domain's DNS and TLS certificates.

  3. Security policies are configured to block egress communication to one Google DNS endpoint while allowing another for DNS query resolution, and to allow GitHub, Docker, AWS, and other required domains for code repository management.

  4. A WAF configuration is applied to secure the externally available load balancer VIP.

  5. A JavaScript challenge is set for the load balancer to apply further protection from attacks such as botnets.

Note: Ensure that you keep the Amazon Elastic IP VIPs ready for later use in configuration.

Step 1: Deploy & Secure Site

The following video shows the site deployment workflow:

Perform the following steps to deploy a Distributed Cloud site as the Secure K8s Gateway in your VPC.

Step 1.1: Log into the Console and create cloud credentials object.
  • Select the Multi-Cloud Network Connect service.
  • Select Manage > Site Management > Cloud Credentials in the configuration. Select Add Cloud Credentials.
  • Enter a name for your credentials object and select AWS Programmatic Access Credentials for the Select Cloud Credential Type field.
  • Enter your AWS access key ID in the Access Key ID field.
cc meta awskey
Figure: Credentials Meta and AWS Key Configuration
Step 1.2: Configure AWS secret access key.
  • Select Configure under the Secret Access Key field.
  • Enter your AWS secret access key into the Type field. Ensure that the Text radio button is selected.
cc secret
Figure: Secret Key Configuration
  • Select Blindfold to encrypt your secret using F5 Distributed Cloud Blindfold. The Blindfold configured message gets displayed.

  • Select Apply.

Step 1.3: Complete creating the credentials.

Select Save and Exit to complete creating the AWS cloud credentials object.

Step 1.4: Start creating AWS VPC site object.
  • Select Manage > Site Management > AWS VPC Sites in the configuration menu. Select Add AWS VPC Site.
  • Enter a name for your VPC object in the metadata section.
Step 1.4.1: Configure site type selection.
  • Go to the Site Type Selection section and perform the following:
    • Select a region in the AWS Region drop-down field. This example selects us-east-2.
    • Select New VPC Parameters for the VPC field. Enter a name in the AWS VPC Name field, or select Autogenerate VPC Name.
    • Enter the CIDR in the Primary IPv4 CIDR blocks field.
    • Select Ingress/Egress Gateway (Two Interface) for the Select Ingress Gateway or Ingress/Egress Gateway field.
aws vpc basic
Figure: AWS VPC Site Configuration of Site Type
Step 1.4.2: Configure ingress/egress gateway nodes.
  • Select Configure to open the two-interface node configuration wizard.

  • Select Add Item and enter the configuration using the following guidelines.

    • Select an option for the AWS AZ name field that matches the configured AWS Region.
    • Select New Subnet for the Workload Subnet field in the Workload Subnet section. Enter a subnet address in the IPv4 Subnet field.
    • Similarly configure a subnet address for the Subnet for Outside Interface section.
    • Select Apply to complete the two-interface node configuration.
two int cidrs
Figure: Ingress/Egress Gateway Nodes Configuration
Step 1.4.3: Configure the site network firewall policy.
  • Go to the Site Network Firewall section and select Active Firewall Policies for the Manage Firewall Policy field. Use the Firewall Policy drop-down menu to select Add Item. Enter the configuration using the following guidelines:

    • Enter a name and enter a CIDR for the IPv4 Prefix List field. This CIDR should be within the CIDR block of the VPC.
nw pol cidr
Figure: Network Policy Endpoint Subnet
  • Select Configure under Ingress Rules in the Connections To Policy Endpoints section.
  • Select Add Item to add an ingress rule, and enter the following configuration:
    • Enter a name for the Rule Name field and select Allow for the Action field.
    • Select Any Endpoint for the Select Other Endpoint field.
    • Select Match Protocol and Port Ranges for the Select Type of Traffic to Match field.
    • Select TCP for the Protocol field.
    • Select Add item under List of Port Ranges, and then enter a port range.
    • Select the Apply button at the bottom to save the ingress rule, and then select Apply to complete the ingress rules configuration.
ingress rule
Figure: Network Policy Ingress Rule
  • Select Configure under Egress Rules in the Connections From Policy Endpoints section.
  • Select Add Item to add an egress rule to the Egress Rules list, and enter the following configuration:
    • Enter a name for the Rule Name field and select Deny for the Action field. This example configures a deny rule for Google DNS query traffic.
    • Select IPv4 Prefix List for the Select Other Endpoint field and enter 8.8.4.4/32 for the IPv4 Prefix List field.
    • Select Match Application Traffic for the Select Type of Traffic to Match field.
    • Select DNS for the Application Protocols field.
    • Select the Apply button at the bottom to save the egress rule.
egress rule
Figure: Network Policy Egress Rule
  • Select Add Item and configure another rule of type Allow for the endpoint prefix 8.8.8.8/32. This is another Google DNS endpoint prefix.
    • Select Match Application Traffic for the Select Type of Traffic to Match field.
    • Select DNS for the Application Protocols field.
    • Select the Apply button at the bottom to save the egress rule.
  • Select Add Item and configure another rule with the Allow action to allow all remaining egress TCP traffic. Select the Apply button at the bottom to save the egress rule.
egress rule list
Figure: Network Policy Egress Rule
  • Select Apply to save the egress rules list.

  • Select Continue to apply the network policy configuration.

Step 1.4.4: Configure the site forward proxy policy.
  • Select Enable Forward Proxy and Manage Policies for the Manage Forward Proxy Policy field. Use the Forward Proxy Policies drop-down menu to select Add Item. Enter the configuration using the following guidelines:

    • Enter a name and select All Forward Proxies on Site for the Select Forward Proxy field.
    • Select Allowed connections for the Select Policy Rules section.
    • Select Add Item under the TLS Domains field. Select Exact Value from the drop-down list of the Enter Domain field and enter gitlab.com for the Exact Value field. Select Apply to add this domain to the TLS Domains list.
    • Repeat the step above for github.com.
    • Repeat the step again for each of the following domains, using the Suffix Value type:
      • gcr.io
      • storage.googleapis.com
      • docker.io
      • docker.com
      • amazonaws.com
tls doms
Figure: TLS Domains

Note: The Allowed connections option allows the configured TLS domains and HTTP URLs. Everything else is denied.

  • Select Continue to apply the forward proxy policy configuration.
Step 1.4.5: Configure static route for the inside interface towards the EKS CIDR.
  • Enable Show Advanced Fields in the Advanced Options section.
  • Select Manage Static Routes for the Manage Static Routes for Inside Network field, and then select Add Item to add a route to the List of Static Routes.
    • Select Simple Static Route for the Static Route Config Mode field.
    • Enter a route for your EKS subnet in the Simple Static Route field.
    • Select Add Item to save the route to the list.
simple static
Figure: Static Route Configuration
  • Select Apply to return to the Ingress/Egress Gateway configuration.
  • Select Apply to return to the AWS VPC site configuration screen.
Step 1.4.6: Complete AWS VPC site object creation.
  • Select Automatic Deployment for the Select Automatic or Assisted Deployment field.
  • Select the AWS credentials created in Step 1.1 for the Automatic Deployment field.
  • Select an instance type for the node for the AWS Instance Type for Node field in the Site Node Parameters section.
  • Enter your public SSH key in the Public SSH key field. This is required to access the site once it is deployed.
ssh rsa
Figure: Automatic Deployment and Site Node Parameters
  • Select Save and Exit to complete creating the AWS VPC object. The AWS VPC site object gets displayed.
Step 1.5: Deploy AWS VPC site.
  • Select the Apply button for the created AWS VPC site object. This deploys the VPC site.
tf applied
Figure: Terraform Apply for the VPC Object
  • Select ... > Terraform Parameters. Select the Apply Status tab. Copy the VPC ID from the tf_output section.
tf ouput vpc id
Figure: VPC ID from Terraform Apply Status
Step 1.6: Deploy the EKS cluster and hipster shop application in it.
Step 1.6.1: Create a terraform variables (tfvars) file.

In a new empty directory, create a Terraform file with .tfvars extension and populate it using JSON with the following values:

          {
            "aws_access_key": "foobar",
            "aws_secret_key": "foobar",
            "name": "<vpc_object_name>",
            "vpc_id": "<vpc_id>"
          }

Note: The values in the file should match those used in the previous steps:

  • For AWS access and secret keys, use the same values that you used in steps 1.1 and 1.2.
  • For name, use the VPC object name you used in step 1.4.
  • For VPC ID, use the value you copied in step 1.5.

If the namespace has already been created on the EKS site, then add "create_namespace": 0:

          {
            "aws_access_key": "foobar",
            "aws_secret_key": "foobar",
            "name": "<vpc_object_name>",
            "vpc_id": "<vpc_id>",
            "create_namespace": 0
          }
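Since Terraform fails on malformed JSON, it can help to validate the tfvars file before running the deployment. A minimal sketch, assuming the file is named test.json and Python 3 is available:

```shell
# Validate that test.json parses as JSON before handing it to Terraform.
# python3 -m json.tool exits non-zero on a syntax error, such as a
# missing comma between fields.
python3 -m json.tool < test.json > /dev/null && echo "test.json is valid JSON"
```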
Step 1.6.2: Deploy EKS.

Set the $ACTION environment variable to apply ($ACTION can also be set to plan for a dry run, or destroy to tear the cluster down):

          export ACTION=apply

Run the following command to deploy the EKS cluster:

          docker run --rm -it \
            --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --env VAR_FILE_NAME=test.json \
            -v ${PWD}:/terraform/templates/ \
            gcr.io/solutions-team-280017/eks_only \
            /terraform/templates/$ACTION
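The docker run command above reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from your shell environment, so export them first. A sketch with placeholder values (substitute the same credentials used in Steps 1.1 and 1.2):

```shell
# Placeholder values -- replace with the credentials from Steps 1.1/1.2.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
# Optionally run a dry run first (plan) before deploying (apply):
export ACTION=plan
```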

Step 2: Discover & Delegate

Discovering services in the VPC requires configuring service discovery objects for the front-end service. This also includes delegating the domain to Distributed Cloud Services to manage the DNS and certificates for the domain.

The following video shows the service discovery workflow:

Perform the following steps for discovering services.

Step 2.1: Create a service discovery object.

Log into the Console and select the Multi-Cloud App Connect service. Navigate to Manage > Service Discoveries, select Add Discovery, and enter the following configuration:

  • Enter a name in the Name field.
  • In the Where section,
    • Select Site for the Virtual-Site or Site or Network field.
    • Select the site you created as part of Step 1 in the Reference field.
    • Select the Site Local Inside Network for the Network Type field.
  • Select K8S Discovery Configuration for the Select Discovery Method field, and then select Configure to set up the discovery method.
  • Select Kubeconfig for the Select Kubernetes Credentials field. Select Configure under the Kubeconfig field to open the secret configuration.
    • Select Text for the blindfold secret Type, and then enter the kubeconfig downloaded as part of Step 1 in the Text field.
SecPol
Figure: Secret Encryption
  • Select Blindfold and wait until the Blindfold process is complete. Select Apply to save the secret.
SecPol
Figure: Discovery Object Configuration
  • Select Apply to save the K8s discovery configuration, and then select Save and Exit to create the discovery object.

Verify in Console that the discovery object is created and that the services are discovered. Select ... > Show Global Status for the discovery object to view the discovered services.

Step 2.2: Delegate your domain to F5 Distributed Cloud.

For details on how to delegate your domain, see F5 Distributed Cloud Domain Delegation.


Step 3: Load Balancer

An HTTP load balancer must be configured to make the frontend service externally available. As part of the HTTP load balancer, origin pools are created that define the origin servers where the frontend service is available.

The following video shows the load balancer creation workflow:

Perform the following to configure load balancer:

Step 3.1: Create a namespace and change to it.
  • Select the Administration service.
  • Select Personal Management > My Namespaces, and select Add namespace.
add ns
Figure: Add a Namespace
  • Enter a name and select Add namespace.
  • Change to the Multi-Cloud App Connect service.
  • Select the namespace drop-down menu and select your namespace to change to it.
changeto ns
Figure: Change to Application Namespace
Step 3.2: Create HTTP load balancer.

Select Manage > Load Balancers in the configuration menu and HTTP Load Balancers in the options. Select Add HTTP load balancer.

Step 3.2.1: Enter metadata and set basic configuration.
  • Enter a name for your load balancer in the metadata section.
  • Enter a domain name in the Domains field. Ensure that its sub-domain is delegated to Distributed Cloud Services. This example sets the skg.quickstart.distributedappsonvolt.org domain. The skg part is a prefix, and quickstart.distributedappsonvolt.org is a delegated domain already set up on this tenant.
  • Select HTTPS with Automatic Certificate for the Load Balancer Type field.
Step 3.2.2: Configure origin pool.
  • Select Add Item in the Origins section.

  • Use the Origin Pool pull-down to select Add Item.

  • In the pool creation form, enter a name for your pool in the metadata section.

  • In the Origin Servers section, select Add Item.

  • In the Select Type of Origin Server field of the Basic Configuration section, select K8s Service Name of Origin Server on given Sites.

    • Enter the service name in the <servicename>.<k8s-namespace> format for the Service Name field. This example sets frontend.hipster as the service name.
    • Select Site for the Site or Virtual Site field and select the site you created in Step 1.
    • Select Inside Network for the Select Network on the site field.
    • Select Apply to save the origin server.
    orig pools
    Figure: Origin Pool Configuration
  • Enter 80 in the Port field.

  • Select Continue to save the origin pool.

  • Select Apply to add the origin pool to the load balancer.

Step 3.2.3: Complete load balancer creation.

Scroll down and select Save and Exit to create the load balancer. The load balancer object gets displayed with the TLS Info field value as DNS Domain Verification. Wait for it to change to Certificate Valid.

vh ready
Figure: Created HTTP Load Balancer

The load balancer is now ready, and you can verify it by accessing the domain URL from a browser.


Step 4: Secure App

Securing the ingress and egress traffic includes applying a WAF and a JavaScript challenge to the load balancer.

The following video shows the workflow of securing the ingress and egress:

Perform the following steps to configure the WAF and JavaScript challenge:

Step 4.1: Configure WAF for the load balancer.
  • Select the Multi-Cloud App Connect service and select the namespace you created previously.
  • Select Manage > Load Balancers from the configuration menu and select HTTP Load Balancers in the options. Select ... > Manage Configuration for the load balancer for which WAF is to be applied. Then select Edit Configuration to make changes.
  • Scroll down to the Web Application Firewall section and select Enable.
  • Use the Enable drop-down menu to select Add Item.
lb sec cfg
Figure: Security Configuration for Load balancer
  • Set a name for the WAF and select Blocking for the Enforcement Mode field. Select Continue to create WAF and apply to the load balancer.
waf
Figure: WAF Configuration
  • Select Save and Exit to save load balancer configuration.

  • Verify that the WAF is operating. Request the following URL, which carries an SQL injection attempt, from a browser or with a tool such as curl:

          https://skg.quickstart.distributedappsonvolt.org/v=SELECT%20sqlite_version%28%29

The request being rejected indicates that the WAF is operational and blocks the SQL injection attempt.

  • Inspect the WAF events from the load balancer monitoring view. Navigate to Overview > Applications to see an overview of all load balancers.
  • Scroll down to the Load Balancers section and select the load balancer previously created to get more details for that specific load balancer.
monitor waf
Figure: Load Balancer App Firewall View
Step 4.2: Configure the JavaScript challenge for the load balancer.
  • Select the Web App & API Protection service and select the namespace you created previously.

  • Select Manage > Load Balancers > HTTP Load Balancers in the left menu. Select ... > Manage Configuration for the load balancer to which the JavaScript challenge is to be applied.

  • Click Edit Configuration in the upper right.

  • In the Common Security Controls section, turn on the Show Advanced Fields toggle.

  • Select Javascript Challenge from the Malicious User Mitigation And Challenges drop-down menu, and then click Configure.

  • Enter 3000 and 1800 for the Javascript Delay and Cookie Expiration period fields respectively. This sets the delay to 3000 milliseconds and cookie expiration to 1800 seconds.

  • Enter a Base64-encoded message in the Custom Message for Javascript Challenge field. This example uses <p>Please wait.</p> for the message, which encodes to PHA+UGxlYXNlIHdhaXQuPC9wPg==.

Note: https://www.base64encode.org/ is a convenient site for encoding/decoding Base64 content.
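Alternatively, you can produce the Base64 string locally. A minimal sketch using the standard base64 utility (printf '%s' avoids a trailing newline being included in the encoding):

```shell
# Encode the custom challenge message to Base64.
printf '%s' '<p>Please wait.</p>' | base64
# Expected output: PHA+UGxlYXNlIHdhaXQuPC9wPg==
```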

jscript
Figure: Javascript Challenge Configuration
  • Select Apply to apply the JavaScript challenge to the load balancer.
lb final
Figure: Javascript Challenge Applied to Load Balancer
  • Select Save and Exit to save load balancer configuration.

  • Verify that the JavaScript challenge is applied. Enter your domain URL in a browser. The JavaScript challenge default page appears for 3000 milliseconds before the hipster website loads.

  • For more information on creating a JavaScript challenge, see Configure JavaScript Challenge.


Concepts