Load Balancing and Service Mesh
Proxy & Load Balancer
F5® Distributed Cloud’s load balancing and proxy capabilities let you control the flow of application and API traffic between services, to the internet, and from clients on the internet. In F5® Distributed Cloud Mesh (Mesh), a load balancer is a proxy: an entity that terminates incoming TCP connections or UDP streams and initiates new connections from the proxy. A server is referred to as an endpoint, and a service is usually offered by a collection or set of endpoints. Clients and servers may be users, machines, applications, or microservices.
There are many reasons to deploy a proxy between a client and a server:
- Connecting clients and servers that are on isolated networks
- Application level (Layer 7) routing - load balancing, request routing, etc
- Security - URL filtering, application firewall, application policy, TLS offload, etc
- Network and Application resilience, testing, and fault injection
- Content caching
- TCP optimization
Figure: F5 Distributed Cloud Services Proxy
The proxy presents a Virtual IP (VIP) of the server to the client. The client has to discover the VIP, usually by having the client’s DNS server answer with the VIP during name resolution. Clients may also use other service discovery mechanisms, for example K8s discovery, Consul, or Route53. The proxy’s control plane has to make sure that the client can discover the server’s IP address, and this is achieved by publishing the VIP. Advertising involves allocating a VIP and publishing the name-to-VIP mapping in a service registry, e.g. DNS, Route53, or Consul. Similarly, the proxy has to discover the IP addresses of the endpoints associated with the server, usually by performing endpoint discovery using DNS, Consul, or K8s.
Forward proxy
A forward proxy is usually configured when connecting an inside private network with an outside public network. There is no VIP in a forward proxy; traffic is usually routed using a default route to the outside network. The user can configure routing between the two networks using a network connector. DNS is used to discover server/endpoint IP addresses. One characteristic of the forward proxy is that the IP address of the server/endpoints is the same in the inside network and the outside network. This allows the user to solve problems like implementing URL filtering for HTTP traffic, performing TLS inspection, and filtering based on DNS names.
Figure: Forward Proxy
Forward proxy configuration is covered in the Networking section and is done using Network Connector and Network Firewall.
Reverse proxy
When the server is in the inside network and the clients are in the outside network, there is no reachability to the server unless a proxy is deployed. To provide connectivity to the server, the proxy is configured as a “reverse” proxy and a VIP is assigned to the server in the outside network. This allows the user to solve problems like controlling inbound traffic and implementing security controls to prevent attacks on the server.
Figure: Reverse Proxy
Subsequent discussions will not distinguish between inside and outside networks because a reverse proxy can also be deployed within the same network. This allows the user to solve the same problems, like controlling traffic flow and performing load balancing based on constraints.
Figure: Reverse Proxy in the Same Network
In Mesh, a load balancer is the same as a reverse proxy and is configured with multiple endpoints for a server. It distributes the load of incoming client requests based on various scheduling algorithms, for example round robin, weighted least request, random, and ring hash. It selects only the active endpoints based on explicit health-check probes or implicit health checks (deductions based on latency, error rate, etc. of the responses). This allows the user to solve multiple problems:
- Set up routing rules to control the flow of application and API traffic
- Authentication and Authorization
- Simplify configuration of service properties like circuit-breakers, retries, timeouts, etc
- Set up traffic splits to endpoints based on A/B testing, canary or staging rollouts, etc
- Implement security controls and application policy to secure the endpoints from attacks
- Provide comprehensive observability and tracing capabilities for troubleshooting
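To make the scheduling and health-check behavior concrete, here is a minimal Python sketch of a round-robin balancer that only considers active endpoints. This is illustrative only, not the Mesh implementation; the class and method names are invented for this example:

```python
class RoundRobinBalancer:
    """Round-robin over healthy endpoints only (illustrative sketch)."""

    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.healthy = set(endpoints)  # shrinks/grows as probes fail/succeed
        self._i = 0

    def mark_down(self, ep):
        """Called when an explicit or implicit health check fails."""
        self.healthy.discard(ep)

    def mark_up(self, ep):
        """Called when a previously failed endpoint recovers."""
        self.healthy.add(ep)

    def pick(self):
        """Return the next active endpoint in round-robin order."""
        active = [ep for ep in self.endpoints if ep in self.healthy]
        if not active:
            raise RuntimeError("no healthy endpoints")
        ep = active[self._i % len(active)]
        self._i += 1
        return ep
```

The other algorithms mentioned above (weighted least request, random, ring hash) differ only in the `pick` step; the health-based filtering is common to all of them.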
A reverse proxy in F5 Distributed Cloud Services is called a “virtual host,” and a virtual host needs to be assigned a VIP. It has a list of “domains” that map to a VIP. The following types of reverse proxies are supported by a virtual host:
- TCP Proxy - When an incoming TCP connection is sent to a selected outgoing server, it is called a “TCP proxy.” Each TCP proxy requires a combination of a VIP and a TCP port.
- HTTP Proxy - When an incoming TCP connection uses the HTTP protocol, the virtual host can be configured to parse HTTP headers and provide the functionality of an “HTTP proxy.” Based on the HOST header (in HTTP) and the URL requested, traffic can be routed to different sets of endpoint servers. Selecting a set of endpoints based on HTTP headers and URLs is called “application routing,” and the rules are called “routes.” A set of endpoints is called a “cluster.” A given virtual host is identified based on the host header, which must contain a domain that is configured in the virtual host. Hence a combination of (VIP, TCP port) can be shared by multiple virtual hosts that are configured as HTTP proxies.
- HTTPS Proxy - When an incoming TCP connection is encrypted using the HTTPS protocol, the virtual host can be configured with TLS parameters that are used to decrypt the packets and provide the functionality of an “HTTPS proxy.” All the traffic routing and control features provided by the HTTP proxy can also be implemented here.
- TCP Proxy with SNI - When an incoming stream is TLS but the virtual host does not have the TLS parameters to decrypt the packets, the virtual host can still use the SNI information in the packet to match against the hosts configured in the domains of the virtual host. Once a virtual host is matched, the incoming TCP stream is sent as-is to the selected server endpoint. This is called a “TCP proxy with SNI.” In the case of “HTTPS proxy” and “TCP proxy with SNI,” the combination (VIP, port) can be shared.
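The shared (VIP, port) behavior above hinges on matching the Host header (HTTP/HTTPS) or the SNI value (TCP proxy with SNI) against each virtual host’s configured domains. The following is a simplified Python sketch of that match; the function name, data shape, and wildcard handling are assumptions for illustration, and the platform’s actual matching semantics may differ:

```python
def match_virtual_host(vhosts, name):
    """vhosts: {vhost_name: [domain, ...]}; name: Host header or SNI value.
    Returns the first virtual host whose domains match `name`, else None."""
    for vhost, domains in vhosts.items():
        for d in domains:
            if d == name:
                return vhost
            # Wildcard like *.foo.com matches any name ending in .foo.com
            if d.startswith("*.") and name.endswith(d[1:]):
                return vhost
    return None
```

Note that `*.foo.com` does not match the bare apex `foo.com` in this sketch, which is the usual wildcard convention.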
Globally Distributed Load Balancer
Typically, to provide reachability for distributed applications or services, there needs to be network connectivity and routing across the sites so that the client can route to the VIP of the server. This creates challenges from the network point of view: either private connectivity needs to be set up from the client network to the server network, or the VIP has to be exposed to the public Internet.
F5 Distributed Cloud Services provide a unique solution with their distributed proxy implementation, which solves more than the problem described above:
- Security & Connectivity - Using the F5 Distributed Cloud fabric, we create private and secure connectivity across sites and then build a distributed proxy on top of this fabric. This provides application routing without network routing, giving zero trust and improved security.
- Globally Distributed Application Routing & Control - Since the proxy is distributed and the health of all the endpoints is available across multiple sites, routing and traffic control decisions can now be made based on the actual health of individual endpoints across all the sites, and not just on the proxies front-ending the endpoints. This is especially useful for high-traffic web services.
As a result, we recommend that sites be configured with isolated networks and that applications be connected across sites, via the fabric, using virtual-host proxies that expose only the services that need reachability.
In the diagram below, the proxy is distributed across two sites: an ingress site and an egress site. In the degenerate case, ingress and egress can be the same site.
Figure: Globally Distributed Load Balancer
Ingress
The set of ingress sites is decided by the “advertise policy.” In the most common case, the VIP is automatically picked by the system to be the interface IP of the site. One can also specify an IP address to be used as the VIP in the advertise policy. The (protocol, port) on which the service is available also needs to be specified. If the site has multiple networks, the network can also be specified for VIP selection.
Finally, the VIP is published into the service registry using the service discovery configuration and method of the site.
The client discovers the VIP for a service and initiates a connection. This connection terminates on one of the nodes in the ingress site. Depending on the load balancing strategy and policy configured for the cluster, the ingress proxy chooses an (egress site, egress endpoint) pair. The policy could be nearest site, local only, etc. It then initiates a connection to the selected egress site and egress endpoint. In the case of HTTP proxy and HTTPS proxy, a pre-existing, persistent connection between ingress and egress is reused for higher performance.
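The ingress-side choice of (egress site, egress endpoint) can be sketched as below. The policy names and the latency-based notion of “nearest” are assumptions for illustration, not the platform’s actual policy set:

```python
def choose_egress(candidates, ingress_site, policy="nearest"):
    """candidates: list of (site, endpoint, latency_ms) tuples known to the
    ingress proxy. Returns the chosen (site, endpoint) pair, or None."""
    if policy == "local_only":
        # Only consider endpoints on the ingress site itself
        local = [c for c in candidates if c[0] == ingress_site]
        return (local[0][0], local[0][1]) if local else None
    # "nearest": the candidate with the lowest measured latency
    site, endpoint, _ = min(candidates, key=lambda c: c[2])
    return (site, endpoint)
```

In practice the candidate list is kept fresh by the health information that egress sites propagate to ingress sites, as described in the Egress section.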
Egress
The set of egress sites is decided by the endpoint configuration. The endpoint can specify sites and a discovery method to discover endpoint servers on a given set of egress sites. The configuration of the endpoint discovery method can be different for each site.
Once the endpoints are discovered, the egress proxy automatically starts health checks, and if it determines that a service is healthy, it propagates the egress site and endpoint information to all ingress sites using F5 Distributed Cloud's distributed control plane.
Configuring Load Balancer
A virtual host can be configured as a front end or load balancer for a single service or for multiple services and APIs. Every virtual host has a set of sites “where” consumers are present and a set of sites “where” providers of the service or application are present. The consumers can also be configured to be present on the Internet.
The configuration of a virtual-host object requires you to configure, at a minimum, the following objects - advertise-policy, routes, clusters, and endpoints. The presence of consumers is configured using “advertise-policy” and the presence of endpoint servers is configured using “routes”, “cluster”, and “endpoint.”
Configuring Virtual Host
Virtual host is the primary configuration for a proxy and can only be configured as a reverse proxy.
The following are key things that need to be configured for the virtual host:
- Domains - A list of the DNS names used by the client to resolve the VIP for the virtual host. The names can also be specified as a partial match, for example *.foo.com. These names have to exactly match the names on the certificates and the host headers. The match also includes the port number for non-standard HTTP and HTTPS ports, for example *.foo.com:8080.
- Proxy Type - The following types of proxy configurations are supported:
- TCP-proxy: The incoming TCP connection is terminated and a new TCP connection is started to the endpoint. A TCP proxy is used for connecting across isolated networks and for load balancing. All the data in the TCP stream from the incoming connection is passed transparently to the outgoing connection. Hence it does not allow you to share (VIP + TCP port) across multiple virtual hosts.
- UDP-proxy: Incoming UDP packets are SNAT/DNAT to the outgoing interface; there is no way to share the (VIP, port) combination.
- HTTP-proxy: Incoming TCP connections are terminated and the data stream is parsed according to the HTTP header to get the incoming HTTP request. The virtual host match happens based on the HOST_HEADER information. For HTTP virtual hosts, (VIP + TCP port) can be shared across multiple virtual hosts.
- HTTPS-proxy: The incoming TLS connection is terminated and the appropriate SNI context is extracted along with the HTTP header. Once the data is decrypted, most of the rest of the configuration is similar to the HTTP proxy.
- TCP-proxy with SNI: The incoming TCP connection is terminated and, based on the SNI context, a virtual host is selected. The remaining configuration is identical to the TCP proxy.
- SMA proxy: This is a special mode defined for secret management access (SMA) by F5 Distributed Cloud services from one site to another. The behavior is the same as the TCP proxy, where the advertise policy uses a special network to give access to F5 Distributed Cloud's built-in internal networks. For example, it can be used to provide access to Vault to fetch certificates for the virtual host.
- Routes - This is covered in detail in the “Routes” section.
- Advertise Policy - This is covered in detail in the “Advertise Policy” section.
- WAFs - This is an optional configuration and is covered in the security section.
Virtual host supports TLS and mTLS on both upstream and downstream connections:
- Downstream (clients to virtual host) - The virtual host supports both server-side TLS and mTLS. It needs server-side certificates with the principal of all domains configured in the virtual host. If the clients need mTLS, then it also needs trusted CA certificates.
- Upstream (virtual host to endpoints) - Since the virtual host acts as a client, for TLS it needs a trusted CA, and for mTLS it needs a client certificate from a CA that the endpoints trust.
All of these are configured in the virtual host under TLS parameters, and the private keys can be protected using F5 Distributed Cloud's Secrets Management and Blindfold.
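As a rough sketch of how these pieces fit together, the TLS-related portion of a virtual host might be modeled as below. The field names here are illustrative, not the actual API schema; see the API Specification for the real fields:

```python
# Illustrative shape of a virtual-host object (field names are hypothetical)
virtual_host = {
    "domains": ["www.example.com", "*.example.com"],
    "proxy_type": "HTTPS_PROXY",
    "tls_parameters": {
        # Downstream: clients -> virtual host (server-side TLS / mTLS)
        "downstream": {
            "certificates": ["blindfold:///example-com-cert"],  # Blindfold-protected key
            "require_client_certificate": False,  # True + trusted_ca enables mTLS
        },
        # Upstream: virtual host -> endpoints (client-side TLS / mTLS)
        "upstream": {
            "trusted_ca": "endpoint-ca-bundle",
            "client_certificate": None,  # set for mTLS toward the endpoints
        },
    },
}
```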
Details on all the parameters that can be configured for this object are covered in the API Specification.
Configuring Routes
When the virtual host is of type HTTP/HTTPS, the request can be further matched based on parameters like URLs, headers, query parameters, HTTP methods, etc. Once the request is matched, it can be sent to a specific endpoint based on the routing configuration and policy rules. This traffic routing capability also allows multiple services to be controlled using a single virtual host, or a single service to appear in multiple virtual hosts.
The rules to match requests and perform traffic routing to a destination endpoint are called routes. Routes can be configured with many features/parameters:
- Route request to set of given endpoints
- Send redirect response
- Send a response
- Change the protocol
- Add or remove headers
- Timeouts, retries, etc
- Web Application Firewall
Even though a TCP proxy does not need routes, an empty routes config is required to keep the object model consistent.
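A route pairs a match with an action plus per-route features. The following hedged sketch shows one possible shape; the field names are illustrative, not the actual API schema:

```python
# Illustrative shape of a single route (field names are hypothetical)
route = {
    "match": {
        "path_prefix": "/api/",
        "headers": {"x-environment": "prod"},
        "method": "GET",
    },
    # The action could instead be a redirect, a direct response,
    # or a protocol change, per the feature list above.
    "action": {"destination_cluster": "api-cluster"},
    "timeout_ms": 5000,
    "retries": 2,
    "request_headers_to_add": {"x-routed-by": "virtual-host"},
    "waf": None,  # optional Web Application Firewall reference
}
```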
Details on all the parameters that can be configured for this object are covered in the API Specification.
Configuring Clusters
Since “Routes” direct traffic to endpoints (endpoints being the providers of a service), there is a configuration object called a cluster that represents a set (or group) of endpoints. Two things can be achieved by configuring a cluster object for a set of endpoints:
- Common configuration for the set of endpoints - for example, configuration of parameters like health checks, circuit breakers, etc.
- Subsetting the endpoints into smaller groups using labels on the endpoints, to create multiple clusters with common configuration. This can be used for traffic routing, using “Routes” to direct traffic to one of these clusters based on version (canary testing), different features (A/B testing), etc.
Configuration of the following capabilities is possible for a cluster:
- Endpoint Selection based on labels
- Healthcheck mechanism selection
- Load-balancer Algorithm
- Circuit Breaker
- TLS parameters
Details on all the parameters and how they can be configured for this object are covered in the API Specification.
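The subsetting idea above amounts to grouping endpoints by a label value. A minimal illustrative sketch (not the platform's API):

```python
def subsets_by_label(endpoints, label_key):
    """endpoints: [{'addr': ..., 'labels': {...}}, ...]
    Returns {label_value: [addr, ...]} - one subset per distinct value."""
    subsets = {}
    for ep in endpoints:
        value = ep["labels"].get(label_key)
        subsets.setdefault(value, []).append(ep["addr"])
    return subsets
```

Routes can then steer a chosen fraction of traffic to one subset (e.g. a canary version) while the rest goes to another.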
Configuring Endpoints
An endpoint needs to be configured for the system to discover the IP address, port, and health of each individual endpoint that comprises a service. This is achieved by a combination of two things:
- “Where” the endpoint is located - This can be configured by specifying the virtual network, the external network, a specific site, or a group of sites (virtual-site).
- “How” to discover it in that location - This is specified either by configuring a specific IP address and port for the endpoint, or by using a discovery method like DNS, K8s, or Consul. When a discovery method is specified, the service is selected based on a service name or a label selector expression.
Details on all the parameters and how they can be configured for this object are covered in the API Specification.
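Putting the “where” and “how” together, an endpoint object might be sketched as below; the field names are illustrative, not the actual API schema:

```python
# Illustrative shape of an endpoint object (field names are hypothetical)
endpoint = {
    # "where": which site(s) and network the endpoint lives in
    "where": {"virtual_site": "us-east-sites", "network": "SITE_LOCAL_INSIDE"},
    # "how": either a static address, e.g.
    #   "static": {"ip": "10.0.1.20", "port": 8443},
    # ...or a discovery method plus a selector:
    "discovery": {
        "method": "k8s",
        "service_selector": {"label_expression": "app == checkout"},
    },
}
```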
Configuring Advertise policy
Once the endpoints are configured, we need to specify the locations “where” this virtual host is available, which is achieved by configuring the advertise-policy object. It is possible to configure the advertisement on a given network (e.g. the public internet), on a particular site (to restrict the locations where the services are reachable), or on a virtual-site (a group of sites). The VIP is automatically assigned by the system based on the configuration of “where.”
The value of “where” can be any one of the following:
- Virtual-site: Configuration is applied to all sites selected by the virtual-site, using the site-local network type on each site. The VIP is chosen from the HOST_IP of the outside interface.
- Virtual-site, network type: Configuration is applied to all sites selected by the virtual-site and to the given network type on those sites. The VIP is chosen from the HOST_IP of the interface in the given network type.
- Site, network type: Configuration is applied on the specific site and the virtual network of the given type. The VIP is chosen from the HOST_IP of the interface in the given network type.
- Virtual network: Configuration is applied to all sites “where” this network is present.
In most cases, the “where”, protocol, and port configuration is sufficient for advertising a VIP in a given location.
In addition, the following parameters may be configured:
- Explicit VIP configuration - An explicit VIP can be advertised using VRRP or BGP (for an anycast VIP).
- TLS parameters like protocol version, cipher suites, TLS certificates, trusted CA, and client certificate. For example, you may need a different certificate in one region (e.g. China) compared to the rest of the world.
Depending on the service discovery method configured for the site, the VIP will be registered in the service registry.
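A minimal advertise-policy object might be sketched as below; the field names are illustrative, not the actual API schema:

```python
# Illustrative shape of an advertise-policy object (field names are hypothetical)
advertise_policy = {
    # "where": site or virtual-site plus network type for VIP selection
    "where": {"site": "nyc-dc-1", "network_type": "SITE_LOCAL_INSIDE"},
    "protocol": "TCP",
    "port": 443,
    # "vip": "192.0.2.10",        # optional explicit VIP (VRRP/BGP anycast)
    # "tls_parameters": {...},    # optional per-location certificates, ciphers
}
```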
Details on all the parameters and how they can be configured for this object are covered in the API Specification.
Automatic Certificate Generation
In the case of an HTTPS load balancer with automatic TLS certificates, F5 Distributed Cloud generates certificates for the domains using the LetsEncrypt Automatic Certificate Management Environment (ACME) server. Automatic certificate generation is supported in either of the following two ways:
- Using Delegated Domains - Delegate domains to F5 Distributed Cloud and it acts as the authoritative domain server for your domains.
- Using Non-Delegated Domains - F5 Distributed Cloud creates a TXT record for the domain storing the ACME challenge coming from LetsEncrypt. Add this record to your DNS records.
Using Delegated Domain
F5 Distributed Cloud acts as the ACME client and obtains the certificates as per the following sequence:
Figure: Automatic Certificate Generation Sequence
The following is the list of activities for automatic certificate generation using delegated domains:
- F5 Distributed Cloud acts as the ACME client and creates a new order with the domain configured in the virtual host.
- LetsEncrypt issues a DNS challenge for a TXT record to be created under the domain with a specified text message. It also provides a nonce that F5 Distributed Cloud must sign with its private key.
- F5 Distributed Cloud adds the required TXT record in the delegated domain and verifies that the TXT record resolves.
Note: This requires the parent DNS domain to be configured with the NS record pointing to the delegated name servers. For instructions, see the Delegate Domain document.
- Once the record resolves, F5 Distributed Cloud notifies the LetsEncrypt CA that it is ready to finalize the validation.
- LetsEncrypt validates that the challenge is satisfied and verifies the signature on the nonce.
- F5 Distributed Cloud sends a certificate signing request asking the LetsEncrypt CA to issue a certificate for the specified domain.
- The LetsEncrypt CA verifies the signatures on the request and issues a certificate for the domain.
The certificates issued by the automatic generation process have a validity period of 90 days. F5 Distributed Cloud performs automatic renewal before expiry and obtains new certificates.
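A renewal check under the 90-day validity period might look like the sketch below. The 30-day lead time is an assumption for illustration, not the platform's actual renewal schedule:

```python
from datetime import datetime, timedelta, timezone

def renewal_due(not_after, lead=timedelta(days=30)):
    """True once 'now' is within `lead` of the certificate's expiry.

    not_after: timezone-aware expiry timestamp of the current certificate.
    """
    return datetime.now(timezone.utc) >= not_after - lead
```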
Using Non-Delegated Domains
The following is the list of activities for automatic certificate generation using non-delegated domains.
- LetsEncrypt issues a DNS challenge for a TXT record to be created under the domain with a specified text message.
- F5 Distributed Cloud creates a TXT record for the domain storing the ACME challenge coming from LetsEncrypt.
- Once the record resolves, F5 Distributed Cloud notifies the LetsEncrypt CA that it is ready to finalize the validation.
Note: This requires the parent DNS domain to be configured with a CNAME record using the TXT value generated by F5 Distributed Cloud. For instructions, see Step 8 of the HTTP Load Balancer guide.
- LetsEncrypt validates that the challenge is satisfied and verifies the signature on the nonce.
- F5 Distributed Cloud sends a certificate signing request asking the LetsEncrypt CA to issue a certificate for the specified domain.
- The LetsEncrypt CA verifies the signatures on the request and issues a certificate for the domain.
Service discovery
The ability of a site to automatically discover service endpoints and publish VIPs, so that clients can discover the services represented by those VIPs, is called service discovery. Automatic service discovery is needed because services can be moved, scaled out, or published independently and in an automated way, without coordinating with the proxy configuration.
The service discovery method can be configured per site or per virtual-site. Multiple methods can be configured for a given site, and all the configured methods are used simultaneously to discover endpoints and publish VIPs.
DNS method
The DNS method is the most widely used method for service discovery, and most networks support it. The DNS method simply uses the specified DNS server to resolve the names of the endpoints. There are two ways of publishing VIPs to DNS:
- APIs - The DNS server has to provide APIs to dynamically add DNS entries. While most legacy networks may not have this ability, many cloud environments like AWS, Azure, and GCP support DNS APIs.
- DNS delegation - Another way the DNS method can be used to publish VIPs in the site. A subdomain can be delegated to the F5 Distributed Cloud Site; the Site software will answer DNS queries for any name under that subdomain that is assigned to a “virtual host.”
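On the endpoint side, the DNS method boils down to an A/AAAA lookup per endpoint name. A minimal sketch using Python's standard resolver (the function name is illustrative):

```python
import socket

def discover_endpoints(service_name, port):
    """Resolve a service name to a sorted list of (ip, port) endpoints."""
    infos = socket.getaddrinfo(service_name, port, type=socket.SOCK_STREAM)
    # Deduplicate: a name may resolve to both IPv4 and IPv6 addresses
    return sorted({(info[4][0], info[4][1]) for info in infos})
```

A real discovery loop would re-resolve periodically (honoring the record TTLs) so that endpoints that move or scale out are picked up automatically.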
K8S method
When a Site is used as an ingress/egress gateway for a K8s cluster, Kubernetes' native discovery method can be used. The Site software needs K8s credentials to use the K8s APIs to discover all services and endpoints. If a Site is external to the K8s cluster, then the services need to be configured as “node-port” services in K8s. For publishing the VIPs, the Site can automatically add DNS entries in the K8s cluster DNS.
Consul
If Consul is used as the service discovery method in the Site, F5 Distributed Cloud can use Consul to discover endpoints and publish VIPs. The Site needs Consul credentials to use the Consul API.
Configuring Service Discovery
Service discovery requires per-site configuration, and depending on the type of method used, it needs different configuration:
- DNS - Credentials for the DNS API endpoint in the cloud provider, or the customer will have to delegate the DNS to the Site.
- Consul or k8s discovery - Credentials for k8s or Consul.
Usually, all services in a site use the same discovery method. In some situations, multiple service discovery methods may be configured on the same site, or there may be multiple configuration objects of the same method. For example, there may be multiple K8s clusters per site, each with different credentials. The service discovery configuration object uses the concept of “where” to apply the discovery configuration to sites.
Details on all the parameters and how they can be configured for this object are covered in the API Specification.
Service Mesh
Applications are typically composed of many services represented by more than one virtual host. It is desirable to group these virtual hosts in order to better understand service interaction and observability across the entire application. By collecting metrics from various services and their client/server interactions, the system is able to create a service graph. In addition, using access logs and machine learning, the system can identify API endpoints across the service mesh graph - per virtual host, per service, and for each edge of the graph (source and destination). For each of these API endpoints, the system is able to apply AI and statistical algorithms for anomaly detection, probability distribution, and performance metrics.
Configuring Service Mesh
A collection of virtual hosts for service mesh can be created by assigning the same well-known label “ves.io/app_type” to all virtual hosts that need to be in the collection.
You can also use the ves.io/app_type label with Virtual Kubernetes (vK8s). The value for this label can be the name of the application you are deploying or the namespace name. It can be set on the following:
- Virtual host
- vk8s service
- vk8s container (use when the container does not have service associated with it)
The following is an example label for launching vK8s resources:
"labels": {
"ves.io/app_type": "bookinfo-japan"
}
Note: The following apply:
- The default value for the ves.io/app_type label is the namespace name.
- In the case of a standalone pod, such as a traffic generator, set this label on both the pod and the service for the service mesh to display an accurate service graph.
Canary Rollout and A/B Testing
Canary and A/B rollout techniques are used to control the introduction of newer versions (canary) or newer features (A/B) to a small percentage of user traffic and, depending on the results, gradually increase the percentage of traffic while phasing out the other version.
Using a virtual host and appropriate configuration of “Routes,” “Clusters,” and “Endpoints,” it is possible to send traffic to different versions of a service using very simple configuration within the distributed load balancer. Two common cases can be achieved using this:
- Canary Testing - A client sends requests to a service; 5% of the calls go to a newer version of the service and the remaining 95% go to the current version.
- A/B Testing - A client sends requests to a service; 5% of the calls go to a version with a newer feature and the remaining 95% go to the current version.
The example below shows what can be done for a Canary rollout using the “Routes”, “Clusters”, and “Endpoint” definitions:
Route1: match on URL1, Header2
    For subset with version-10, go to ClusterA - 95% of traffic
    For subset with version-11, go to ClusterA - 5% of traffic
ClusterA:
    List of endpoints
    ...
    Subset version-10 (label expression 1)
    Subset version-11 (label expression 2)
Endpoint:
    Discover endpoints from K8s and use labels on endpoints to classify into one of the subsets
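The 95/5 split above amounts to a weighted random choice per request. A small sketch, illustrative rather than the platform's actual scheduler; the `rng` parameter exists only to make the sketch deterministic for demonstration:

```python
import random

def pick_subset(weights, rng=random.random):
    """weights: {'version-10': 95, 'version-11': 5}. Returns a subset name,
    chosen with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng() * total
    for name, weight in weights.items():
        r -= weight
        if r < 0:
            return name
    return name  # guard against a floating-point edge at r == total
```

Gradually shifting the weights (95/5, then 80/20, then 0/100) implements the progressive rollout described above.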