Secure Kubernetes Gateway
Objective
This guide provides instructions on how to create a Secure Kubernetes Gateway using F5® Distributed Cloud Console and F5 Distributed Cloud Mesh.
The steps to create Secure Kubernetes Gateway are:
Figure: Steps to Deploy Secure Kubernetes Gateway
The following image shows the topology of the example for the use case provided in this document:
Figure: Secure Kubernetes Gateway Sample Topology
Using the instructions provided in this guide, you can deploy a Secure K8s Gateway in your Amazon Virtual Private Cloud (Amazon VPC), discover the cluster services from that VPC, set up load balancers for them, and secure those services with a JavaScript challenge and a Web Application Firewall (WAF).
The example shown in this guide deploys the Secure K8s Gateway on a single VPC for an application called hipster-shop deployed in an EKS cluster. The application consists of the following services:
- frontend
- cartservice
- productcatalogservice
- currencyservice
- paymentservice
- shippingservice
- emailservice
- checkoutservice
- recommendationservice
- adservice
- cache
Prerequisites
- F5 Distributed Cloud Console SaaS account.
  Note: If you do not have an account, see Create an Account.
- Amazon Web Services (AWS) account.
  Note: This is required to deploy a Distributed Cloud site.
- F5 Distributed Cloud vesctl utility.
  Note: See vesctl for more information.
- Docker.
- Self-signed or CA-signed certificate for your application domain.
- AWS IAM Authenticator.
  Note: See IAM Authenticator Installation for more information.
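If you need a self-signed certificate for testing, one can be generated locally with `openssl`. This is a minimal sketch; the domain in the `-subj` value and the output file names are illustrative placeholders, not values from this guide:

```shell
# Generate a self-signed certificate and private key for the application
# domain (placeholder names; substitute your own delegated domain).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout app.key -out app.crt -days 365 \
  -subj "/CN=skg.example.com"

# Inspect the subject of the resulting certificate.
openssl x509 -in app.crt -noout -subject
```

For production, a CA-signed certificate for the same domain serves the same purpose.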
Configuration
The use case provided in this guide sets up a Distributed Cloud site as a Secure K8s Gateway for the ingress and egress traffic of the K8s cluster deployed in the Amazon VPC. The example web application has a front-end service that receives all user requests and redirects them to the other services accordingly. The following actions outline the activities in setting up the Secure K8s Gateway:
- The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created with an origin pool pointing to the frontend service on the EKS cluster.
- The domain of the load balancer is delegated to Distributed Cloud Services in order to manage the domain's DNS and TLS certificates.
- Security policies are configured to block egress communication to Google DNS for DNS query resolution and to allow GitHub, Docker, AWS, and other required domains for code repository management.
- A WAF configuration is applied to secure the externally available load balancer VIP.
- A JavaScript challenge is set for the load balancer to apply further protection from attacks such as botnets.
Note: Ensure that you keep the Amazon Elastic IP VIPs ready for later use in configuration.
Step 1: Deploy & Secure Site
The following video shows the site deployment workflow:
Perform the following steps to deploy a Distributed Cloud site as the Secure K8s Gateway in your VPC.
Step 1.1: Log into the Console and create cloud credentials object.
- Select the `Multi-Cloud Network Connect` service.
- Select `Manage` > `Site Management` > `Cloud Credentials` in the configuration menu. Select `Add Cloud Credentials`.
- Enter a name for your credentials object and select `AWS Programmatic Access Credentials` for the `Select Cloud Credential Type` field.
- Enter your AWS access key ID in the `Access Key ID` field.
Figure: Credentials Meta and AWS Key Configuration
Step 1.2: Configure AWS secret access key.
- Select `Configure` under the `Secret Access Key` field.
- Enter your AWS secret access key into the `Type` field. Ensure that the `Text` radio button is selected.
Figure: Secret Key Configuration
- Select `Blindfold` to encrypt your secret using F5 Distributed Cloud Blindfold. The `Blindfold configured` message gets displayed.
- Select `Apply`.
Step 1.3: Complete creating the credentials.
Select `Save and Exit` to complete creating the AWS cloud credentials object.
Step 1.4: Start creating AWS VPC site object.
- Select `Manage` > `Site Management` > `AWS VPC Sites` in the configuration menu. Select `Add AWS VPC Site`.
- Enter a name for your VPC object in the metadata section.
Step 1.4.1: Configure site type selection.
- Go to the `Site Type Selection` section and perform the following:
  - Select a region in the `AWS Region` drop-down field. This example selects `us-east-2`.
  - Select `New VPC Parameters` for the `VPC` field. Enter a name in the `AWS VPC Name` field or select `Autogenerate VPC Name`.
  - Enter the CIDR in the `Primary IPv4 CIDR blocks` field.
  - Select `Ingress/Egress Gateway (Two Interface)` for the `Select Ingress Gateway or Ingress/Egress Gateway` field.
Figure: AWS VPC Site Configuration of Site Type
Step 1.4.2: Configure ingress/egress gateway nodes.
- Select `Configure` to open the two-interface node configuration wizard.
- Select `Add Item` and enter the configuration using the following guidelines:
  - Select an option for the `AWS AZ name` field that matches the configured `AWS Region`.
  - Select `New Subnet` for the `Workload Subnet` field in the `Workload Subnet` section. Enter a subnet address in the `IPv4 Subnet` field.
  - Similarly, configure a subnet address for the `Subnet for Outside Interface` section.
  - Select `Apply` to complete the two-interface node configuration.
Figure: Ingress/Egress Gateway Nodes Configuration
Step 1.4.3: Configure the site network firewall policy.
- Go to the `Site Network Firewall` section and select `Active Firewall Policies` for the `Manage Firewall Policy` field. Use the `Firewall Policy` drop-down menu to select `Add Item`. Enter the configuration using the following guidelines:
  - Enter a name and enter a CIDR for the `IPv4 Prefix List` field. This CIDR should be within the CIDR block of the VPC.
Figure: Network Policy Endpoint Subnet
- Select `Configure` under `Ingress Rules` in the `Connections To Policy Endpoints` section.
- Select `Add Item` to add an ingress rule, and enter the following configuration:
  - Enter a name for the `Rule Name` field and select `Allow` for the `Action` field.
  - Select `Any Endpoint` for the `Select Other Endpoint` field.
  - Select `Match Protocol and Port Ranges` for the `Select Type of Traffic to Match` field.
  - Select `TCP` for the `Protocol` field.
  - Select `Add Item` under `List of Port Ranges`, and then enter a port range.
  - Select the `Apply` button at the bottom to save the ingress rule, and then select `Apply` to complete the ingress rules configuration.
Figure: Network Policy Ingress Rule
- Select `Configure` under `Egress Rules` in the `Connections From Policy Endpoints` section.
- Select `Add Item` to add an egress rule to the `Egress Rules` list, and enter the following configuration:
  - Enter a name for the `Rule Name` field and select `Deny` for the `Action` field. This example configures a deny rule for Google DNS query traffic.
  - Select `IPv4 Prefix List` for the `Select Other Endpoint` field and enter `8.8.4.4/32` for the `IPv4 Prefix List` field.
  - Select `Match Application Traffic` for the `Select Type of Traffic to Match` field.
  - Select `DNS` for the `Application Protocols` field.
  - Select the `Apply` button at the bottom to save the egress rule.
Figure: Network Policy Egress Rule
- Select `Add Item` and configure another rule of type `Allow` for the endpoint prefix `8.8.8.8/32`. This is another Google DNS endpoint prefix.
  - Select `Match Application Traffic` for the `Select Type of Traffic to Match` field.
  - Select `DNS` for the `Application Protocols` field.
  - Select the `Apply` button at the bottom to save the egress rule.
- Select `Add Item` and configure another rule with the `Allow` action to allow all remaining egress TCP traffic. Select the `Apply` button at the bottom to save the egress rule.
Figure: Network Policy Egress Rule
- Select `Apply` to save the egress rules list.
- Select `Continue` to apply the network policy configuration.
Step 1.4.4: Configure the site forward proxy policy.
-
Select
Enable Forward Proxy and Mange Policies
for theManage Forward Proxy Policy
field. Use theForward Proxy Policies
drop-down menu to selectAdd Item
. Enter the configuration using the following guidelines:- Enter a name and select
All Forward Proxies on Site
for theSelect Forward Proxy
field. - Select
Allowed connections
for theSelect Policy Rules
section. - Select
Add Item
under theTLS Domains
field. SelectExact Value
from the drop-down list of theEnter Domain
field and entergitlab.com
for theExact Value
field. SelectApply
to add this domain to the TLS Domains list. Repeat this step several times using the following specifics: - Repeat the step above for
github.com
. - Repeat the step again for each of the following domains with the
Suffix Value
type.gcr.io
storage.googleapis.com
docker.io
docker.com
amazonaws.com
- Enter a name and select
Figure: TLS Domains
Note: The `Allowed connections` option allows the configured TLS domains and HTTP URLs. Everything else is denied.
- Select `Continue` to apply the forward proxy policy configuration.
Step 1.4.5: Configure static route for the inside interface towards the EKS CIDR.
- Enable `Show Advanced Fields` in the `Advanced Options` section.
- Select `Manage Static Routes` for the `Manage Static Routes for Inside Network` field, and then select `Add Item` to add a route to the `List of Static Routes`.
  - Select `Simple Static Route` for the `Static Route Config Mode` field.
  - Enter a route for your EKS subnet in the `Simple Static Route` field.
  - Select `Add Item` to save the route to the list.
Figure: Static Route Configuration
- Select `Apply` to return to the Ingress/Egress Gateway configuration.
- Select `Apply` to return to the AWS VPC site configuration screen.
Step 1.4.6: Complete AWS VPC site object creation.
- Select `Automatic Deployment` for the `Select Automatic or Assisted Deployment` field.
- Select the AWS credentials created in Step 1.1 for the `Automatic Deployment` field.
- Select an instance type for the node in the `AWS Instance Type for Node` field in the `Site Node Parameters` section.
- Enter your public SSH key in the `Public SSH key` field. This is required to access the site once it is deployed.
Figure: Automatic Deployment and Site Node Parameters
- Select `Save and Exit` to complete creating the AWS VPC object. The AWS VPC site object gets displayed.
Step 1.5: Deploy AWS VPC site.
- Select the `Apply` button for the created AWS VPC site object. This creates the VPC site.
Figure: Terraform Apply for the VPC Object
- Select `...` > `Terraform Parameters`. Select the `Apply Status` tab. Copy the VPC ID from the `tf_output` section.
Figure: VPC ID from Terraform Apply Status
Step 1.6: Deploy the EKS cluster and hipster shop application in it.
Step 1.6.1: Create a terraform variables (tfvars) file.
In a new empty directory, create a Terraform variables (`.tfvars`) file and populate it with JSON using the following values:

```json
{
  "aws_access_key": "foobar",
  "aws_secret_key": "foobar",
  "name": "<vpc_object_name>",
  "vpc_id": "<vpc_id>"
}
```
Note: The values in the file should match those used in the previous steps:
- For AWS access and secret keys, use the same values that you used in steps 1.1 and 1.2.
- For name, use the VPC object name you used in step 1.4.
- For VPC ID, use the value you copied in step 1.5.
If the namespace has already been created on the EKS site, then add `"create_namespace": 0`:

```json
{
  "aws_access_key": "foobar",
  "aws_secret_key": "foobar",
  "name": "<vpc_object_name>",
  "vpc_id": "<vpc_id>",
  "create_namespace": 0
}
```
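Terraform rejects a malformed variables file, so it can save a failed deployment run to check the JSON syntax first. A quick sketch, assuming the file is named `test.json` as in the deployment command in the next step:

```shell
# Check that the Terraform variables file parses as JSON before
# handing it to the deployment container (file name is an example).
if python3 -m json.tool test.json > /dev/null 2>&1; then
  echo "test.json is valid JSON"
else
  echo "test.json is NOT valid JSON"
fi
```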
Step 1.6.2: Deploy EKS.
Set the `$ACTION` environment variable to `apply` (note that the `$ACTION` value can be `plan`, `apply`, or `destroy`):

```shell
export ACTION=apply
```
Run the following command to deploy the EKS cluster:
```shell
docker run --rm -it \
  --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  --env VAR_FILE_NAME=test.json \
  -v ${PWD}:/terraform/templates/ \
  gcr.io/solutions-team-280017/eks_only \
  /terraform/templates/$ACTION
```
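Because `$ACTION` directly selects what the container does, a small guard can catch a typo or an unintended `destroy` before anything runs. This wrapper is a hypothetical sketch, not part of the official tooling:

```shell
# Hypothetical guard: accept only the three actions the image supports.
validate_action() {
  case "$1" in
    plan|apply|destroy) return 0 ;;
    *) echo "ACTION must be plan, apply, or destroy (got: $1)" >&2; return 1 ;;
  esac
}

# Default to the safe, read-only "plan" when ACTION is unset.
validate_action "${ACTION:-plan}" || exit 1
# ...then invoke the docker run command shown above with $ACTION.
```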
Step 2: Discover & Delegate
Discovering services in the VPC requires configuring service discovery objects for the front-end service. Also, this includes delegating the domain to Distributed Cloud Services to manage the DNS and certificates for the domain.
The following video shows the service discovery workflow:
Perform the following steps to discover services.
Step 2.1: Create a service discovery object.
Log into the Console and select the `Multi-Cloud App Connect` service. Navigate to `Manage` > `Service Discoveries`, select `Add Discovery`, and enter the following configuration:
- Enter a name in the `Name` field.
- In the `Where` section:
  - Select `Site` for the `Virtual-Site or Site or Network` field.
  - Select the site you created as part of Step 1 in the `Reference` field.
  - Select `Site Local Inside Network` for the `Network Type` field.
- Select `K8S Discovery Configuration` for the `Select Discovery Method` field, and then select `Configure` to set up the discovery method.
- Select `Kubeconfig` for the `Select Kubernetes Credentials` field. Select `Configure` under the `Kubeconfig` field to open the secret configuration.
  - Select `Text` for the blindfold secret `Type`, and then enter the kubeconfig downloaded as part of Step 1 in the `Text` field.
Figure: Secret Encryption
- Select `Blindfold` and wait until the Blindfold process is complete. Select `Apply` to save the secret.
Figure: Discovery Object Configuration
- Select `Apply` to save the K8s discovery configuration, and then select `Save and Exit` to create the discovery object.
Verify in the Console that the discovery object is created and that it has discovered services. Select `...` > `Show Global Status` for the discovery object to view the discovered services.
Step 2.2: Delegate your domain to F5 Distributed Cloud.
For details on how to delegate your domain, see F5 Distributed Cloud Domain Delegation.
Step 3: Load Balancer
An HTTP load balancer must be configured to make the frontend service externally available. As part of the HTTP load balancer configuration, origin pools are created that define the origin servers where the frontend service is available.
The following video shows the load balancer creation workflow:
Perform the following steps to configure the load balancer:
Step 3.1: Create a namespace and change to it.
- Select the `Administration` service.
- Select `Personal Management` > `My Namespaces`, and select `Add namespace`.
Figure: Add a Namespace
- Enter a name and select `Add namespace`.
- Change to the `Multi-Cloud App Connect` service.
- Select the namespace drop-down menu and select your namespace to change to it.
Figure: Change to Application Namespace
Step 3.2: Create HTTP load balancer.
Select `Manage` > `Load Balancers` in the configuration menu and `HTTP Load Balancers` in the options. Select `Add HTTP load balancer`.
Step 3.2.1: Enter metadata and set basic configuration.
- Enter a name for your load balancer in the metadata section.
- Enter a domain name in the `Domains` field. Ensure that its sub-domain is delegated to Distributed Cloud Services. This example sets the `skg.quickstart.distributedappsonvolt.org` domain. The `skg` part is a prefix, and the `quickstart.distributedappsonvolt.org` part is a delegated domain that is already set up on this tenant.
- Select `HTTPS with Automatic Certificate` for the `Load Balancer Type` field.
Step 3.2.2: Configure origin pool.
- Select `Add Item` in the `Origins` section.
- Use the `Origin Pool` pull-down to select `Add Item`.
- In the pool creation form, enter a name for your pool in the metadata section.
- In the `Origin Servers` section, select `Add Item`.
- In the `Select Type of Origin Server` field of the `Basic Configuration` section, select `K8s Service Name of Origin Server on given Sites`.
  - Enter the service name in the `<servicename.k8s-namespace>` format for the `Service Name` field. This example sets `frontend.hipster` as the service name.
  - Select `Site` for the `Site or Virtual Site` field and select the site you created in Step 1.
  - Select `Inside Network` for the `Select Network on the site` field.
  - Select `Apply` to save the origin server.
Figure: Origin Pool Configuration
- Enter `80` in the `Port` field.
- Select `Continue` to save the origin pool.
- Select `Apply` to add the origin pool to the load balancer.
Step 3.2.3: Complete load balancer creation.
Scroll down and select `Save and Exit` to create the load balancer. The load balancer object gets displayed with the `TLS Info` field value as `DNS Domain Verification`. Wait for it to change to `Certificate Valid`.
Figure: Created HTTP Load Balancer
The load balancer is now ready, and you can verify it by accessing the domain URL from a browser.
Step 4: Secure App
Securing the ingress and egress traffic includes applying a WAF and a JavaScript challenge to the load balancer.
The following video shows the workflow of securing the ingress and egress:
Perform the following steps to configure the WAF and JavaScript challenge:
Step 4.1: Configure WAF for the load balancer.
- Select the `Multi-Cloud App Connect` service and select the namespace you created previously.
- Select `Manage` > `Load Balancers` from the configuration menu and select `HTTP Load Balancers` in the options. Select `...` > `Manage Configuration` for the load balancer to which the WAF is to be applied. Then select `Edit Configuration` to make changes.
- Scroll down to the `Web Application Firewall` section and select `Enable`.
- Use the `Enable` drop-down menu to select `Add Item`.
Figure: Security Configuration for Load balancer
- Set a name for the WAF and select `Blocking` for the `Enforcement Mode` field. Select `Continue` to create the WAF and apply it to the load balancer.
Figure: WAF Configuration
- Select `Save and Exit` to save the load balancer configuration.
- Verify that the WAF is operating. Load the following URL in a browser to simulate an SQL injection attack:

  https://skg.quickstart.distributedappsonvolt.org/v=SELECT%20sqlite_version%28%29

  The rejection of the request indicates that the WAF is operational and blocks the SQL injection attempt.
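The probe above is just the string `SELECT sqlite_version()` percent-encoded into the URL. A sketch of how that encoding is derived and how the check could be scripted; using `python3` for the encoding and the `?v=` query form are illustrative assumptions, and the domain is a placeholder:

```shell
# Reproduce the URL-encoded SQL injection probe used in the verification step.
PAYLOAD="$(python3 -c "from urllib.parse import quote; print(quote('SELECT sqlite_version()'))")"
echo "$PAYLOAD"

# With the WAF in Blocking mode, a request carrying this payload should be
# rejected; uncomment and substitute your own delegated domain to try it:
# curl -s -o /dev/null -w '%{http_code}\n' "https://<your-domain>/?v=${PAYLOAD}"
```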
- Inspect the WAF events from the load balancer monitoring view. Navigate to `Overview` > `Applications` to see an overview of all load balancers.
- Scroll down to the `Load Balancers` section and select the previously created load balancer to get more details for that specific load balancer.
Figure: Load Balancer App Firewall View
Step 4.2: Configure the JavaScript challenge for the load balancer.
- Select the `Web App & API Protection` service and select the namespace you created previously.
- Select `Manage` > `Load Balancers` > `HTTP Load Balancers` in the left menu. Select `...` > `Manage Configuration` for the load balancer to which the JavaScript challenge is to be applied.
- Click `Edit Configuration` in the upper right.
- In the `Common Security Controls` section, turn on the `Show Advanced Fields` toggle.
- Select `Javascript Challenge` from the `Malicious User Mitigation And Challenges` drop-down menu, and then click `Configure`.
- Enter `3000` and `1800` for the `Javascript Delay` and `Cookie Expiration period` fields respectively. This sets the delay to 3000 milliseconds and the cookie expiration to 1800 seconds.
- Enter a base64-encoded message into the `Custom Message for Javascript Challenge` field. This example uses `<p>Please wait.</p>` for the message, which encodes to `PHA+UGxlYXNlIHdhaXQuPC9wPg==`.
Note: https://www.base64encode.org/ is a convenient site for encoding/decoding Base64 content.
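The same encoding can also be done locally with the standard `base64` utility, which avoids pasting content into a website. A quick sketch using this guide's example message:

```shell
# Encode the custom challenge message (-n avoids a trailing newline
# being included in the encoded value).
echo -n '<p>Please wait.</p>' | base64
# → PHA+UGxlYXNlIHdhaXQuPC9wPg==

# Decode it back to confirm the round trip.
echo 'PHA+UGxlYXNlIHdhaXQuPC9wPg==' | base64 -d
# → <p>Please wait.</p>
```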
Figure: Javascript Challenge Configuration
- Select `Apply` to apply the JavaScript challenge to the load balancer.
Figure: Javascript Challenge Applied to Load Balancer
- Select `Save and Exit` to save the load balancer configuration.
- Verify that the JavaScript challenge is applied. Enter your domain URL in a browser. The JavaScript challenge default page appears for 3000 milliseconds before the hipster website loads.
- For more information on creating a JavaScript challenge, see Configure JavaScript Challenge.