Configure Global Log Receiver
Objective
This guide provides instructions on how to enable sending of your tenant logs from F5® Distributed Cloud Regional Edge (RE) Sites to an external log collection system. The sent logs include all system and application logs of your tenant. This also includes logs of all Customer Edge (CE) Sites of that tenant. For conceptual information about logging, see Logs.
A folder is created each day in your log collection system, and within that daily folder, a folder is created for each hour. Logs are sent to the hourly folder every 5 minutes and are stored as compressed gzip files.
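For example, assuming an S3 receiver, you can inspect the delivered folder hierarchy and decompress a batch with the AWS CLI. This is a minimal sketch; the bucket name and object paths are hypothetical placeholders:

```bash
# List the day/hour folder hierarchy the receiver creates in the bucket
# (bucket name and paths below are placeholders, not fixed values)
aws s3 ls s3://my-log-bucket/ --recursive

# Download one 5-minute batch and inspect the decompressed log lines
aws s3 cp s3://my-log-bucket/2024-01-15/09/batch-0005.gz .
gunzip -c batch-0005.gz | head
```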
The global log receiver supports sending logs to the following log collection systems:
- Amazon S3
- Generic HTTP or HTTPS server
- Datadog
- Splunk
- AWS Cloudwatch
- Kafka
- Azure Event Hubs
- Azure Blob Storage
Note: Currently, the global log receiver supports sending only request (access) logs, security events, and audit logs for all HTTP Load Balancers and sites.
Using the instructions provided in this guide, you can configure a global log receiver in the F5® Distributed Cloud Console (Console) to send logs to an external log collection system.
Prerequisites
- A valid Account is required.
Note: If you do not have an account, see Create an Account.
- An external log collection system that is publicly reachable.
- The following IP ranges added to your firewall's allow-list:
  - 193.16.236.68/32
  - 185.160.8.156/32
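As an illustration, on a Linux host fronting your collector you might allow these source ranges with iptables. This is a minimal sketch; destination port 443 is an assumption for an HTTPS collector, so use whatever port your collection endpoint actually listens on:

```bash
# Accept log traffic from the Distributed Cloud source ranges
# (destination port 443 is an assumption, not mandated by the product)
iptables -A INPUT -p tcp -s 193.16.236.68/32 --dport 443 -j ACCEPT
iptables -A INPUT -p tcp -s 185.160.8.156/32 --dport 443 -j ACCEPT
```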
Enable Global Receiver of Logs
You can configure the global log receiver in either the system namespace or the shared namespace. When configuring in the shared namespace, you can send logs from the shared namespace, from all namespaces, or from a specific list of namespaces. When configuring in the system namespace, you can only send logs from the system namespace.
The example in this guide creates a global log receiver object in the Console in the system namespace for sending logs to the external log collection system.
Perform the following in the F5® Distributed Cloud Console:
Step 1: Start creating a global log receiver.
- In the Console home page, select the `Cloud and Edge Sites` service or the `Shared Configuration` service.
- Select `Management` > `Log Management` in the primary navigation menu for the `Cloud and Edge Sites` service. For `Shared Configuration`, select `Manage` > `Global Log Receiver`.
- Select `Global Log Receiver` in case of the `Cloud and Edge Sites` service.
- Select the `Add Global Log Receiver` button.
Step 2: Configure global log receiver properties.
Do the following in the `Global Log Receiver` section:
- Enter a name in the metadata section. Optionally, set labels and add a description.
- Select `Request Logs` or `Security Events` for the `Log Type` field. `Request Logs` is the default.
- In case of the `Cloud and Edge Sites` service, select logs from the current namespace for the `Log Message Selection` field. This is also the default option.
- In case of the `Shared Configuration` service, you can select one of the following options:
  - `Select logs from current namespace` - send logs from the shared namespace.
  - `Select logs from all namespaces` - send logs from all namespaces.
  - `Select logs in specific namespaces` - send logs from specified namespaces. Enter the namespace name in the displayed namespaces list. Use the `Add item` button to add more than one namespace.
- Select `S3 Receiver` for the `Receiver Configuration` box, and configure the following for the S3 receiver:
  - Enter your AWS S3 bucket name in the `S3 Bucket Name` field.
  - Select the `AWS Cloud Credentials` box and select a cloud credentials object from the drop-down. Alternatively, you can use the `Create new Cloud Credential` button to create a new object. For instructions on creating cloud credentials, see Cloud Credentials.
  - Select the `AWS Region` box and select a region from the drop-down. Ensure that you select the same region in which the S3 bucket is configured.
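Before saving, you can sanity-check that the bucket exists and that its region matches the `AWS Region` you selected, for example with the AWS CLI. The bucket name is a placeholder, and this assumes your local AWS credentials can read the bucket:

```bash
# Prints the bucket's region (LocationConstraint); it must match the
# AWS Region selected in the receiver configuration
aws s3api get-bucket-location --bucket my-log-bucket
```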
Note: Similarly, you can configure receivers for other systems such as a generic HTTP(S) server, Datadog, Splunk, Azure Event Hubs, or Azure Blob Storage.
Step 3: Optionally, configure advanced settings.
Advanced settings include configuring batch options and TLS. Using batch options, you can apply limits such as a maximum number of events, a maximum number of bytes, or a timeout for a batch of logs sent to the receiver.
Select the `Show Advanced Fields` toggle and do the following in the `Batch Options` section:
- Select `Timeout Seconds` for the `Batch Timeout Options` field and enter a timeout value in the `Timeout Seconds` box.
- Select `Max Events` for the `Batch Max Events` field and enter a value between 32 and 2000 in the `Max Events` box.
- Select `Max Bytes` for the `Batch Bytes` field and enter a value between 4096 and 1048576 in the `Batch Bytes` box. Logs are sent after the batch size is equal to or greater than the specified byte size.
Do the following in the `TLS` section:
- Select `Use TLS` for the `TLS` field.
- Select `Server CA Certificates` for the `Trusted CA` field. Enter the certificates in PEM or Base64 format in the `Server CA Certificates` box.
- Select `Enable mTLS` for `mTLS config`, and enter the client certificate in PEM or Base64 format in the `Client Certificate` box.
  - Select `Configure` in the `Client Private Key` field, and enter the secret in the box with the type set to `Text`.
  - Select `Blindfold`, wait for the operation to complete, and click `Apply`.
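If the receiver fails to connect after enabling TLS, you can reproduce the handshake from a shell with openssl. This is a minimal sketch; the hostname, port, and file names are placeholders for your own collector and certificates:

```bash
# Verify the collector's certificate chain against the configured Trusted CA
openssl s_client -connect collector.example.com:443 -CAfile ca.pem </dev/null

# With mTLS enabled, also present the client certificate and private key
openssl s_client -connect collector.example.com:443 -CAfile ca.pem \
  -cert client.pem -key client.key </dev/null
```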
Step 4: Complete log receiver creation.
Select `Save & Exit` to complete creating the global log receiver. Verify that logs are received in your S3 bucket in AWS.
Configuring Splunk Receiver
When configuring a Splunk receiver, note that, according to a Splunk article, there are different Splunk HEC URIs depending on your deployment:
- For Splunk Cloud customers, the standard HEC URI is https://http-inputs-customer_stack.splunkcloud.com/services/collector. Splunk Cloud customers do NOT need to specify port 8088; all HEC traffic goes over port 443.
- For customers using AWS Firehose, there is a second HEC URL: https://http-inputs-firehose-customer_stack.splunkcloud.com/services/collector
- For customers running HEC on their own deployments or using the Splunk test drive instance, port 8088 must be specified: https://input-prd-uniqueid.cloud.splunk.com:8088/services/collector
In any of these scenarios, you can use the following commands to validate that the URL resolves:
- In case of Splunk Cloud, enter `nslookup http-inputs-<customer_stack>.splunkcloud.com`
- In case of Splunk Test Drive, enter `nslookup input-prd-uniqueid.cloud.splunk.com`
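Beyond DNS resolution, you can send a test event directly to the HEC endpoint with curl. This is a minimal sketch; the stack name and HEC token are placeholders for your own values:

```bash
# A healthy HEC endpoint answers with {"text":"Success","code":0}
# (<customer_stack> and <hec_token> are placeholders)
curl "https://http-inputs-<customer_stack>.splunkcloud.com/services/collector" \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"event": "global log receiver connectivity test"}'
```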