Create Baremetal Site
On This Page:
- Objective
- Prerequisites
- Restrictions
- Minimum Hardware Requirements
- Supported Hardware
- Edge Hardware
- Server Hardware
- Distributed Cloud Services Hardware
- Create a Site Token
- Prepare a Bootable USB
- Configure the BIOS
- Install the Distributed Cloud Services Node Software
- Post-Install Node Parameter Configuration
- Register the Site
- Single-node Site Registration
- Multi-node Site Registration
- Access Site Local UI
- Concepts
Objective
This document provides instructions on how to create an F5® Distributed Cloud Services single-node or multi-node site on custom hardware, with a site software image file. For more information, see Site.
Prerequisites
- A Distributed Cloud Services Account. If you do not have an account, see Create an Account.
- A Distributed Cloud Services node software image file. See Images to download the software image file.
- Resources required per node: minimum 4 vCPUs and 14 GB RAM.
- Storage required per node: minimum 45 GB. If you are deploying an F5® Distributed Cloud App Stack Site, a minimum of 100 GB is recommended.
By proceeding with the installation, download and/or access and use, as applicable, of the Distributed Cloud Services software, and/or Distributed Cloud Services platform, you acknowledge that you have read, understand, and agree to be bound by this agreement.
Restrictions
The USB allowlist is enabled by default. If you change a USB device, such as a keyboard, after registration, the device will not function.
Minimum Hardware Requirements
The Distributed Cloud Services Node software is designed to work on most commodity hardware.
The following minimum requirements will help you choose commodity hardware for Distributed Cloud Services Node deployments:
Memory | Networking | USB | HDMI | Disk Space |
---|---|---|---|---|
Minimum: 14 GB | Minimum: 1x 1000Mb/s (Intel-based) | Minimum: 1 USB 2.0/3.0 for imaging the host | Minimum: 1 HDMI for imaging the system. Only required if the hardware provided is not Distributed Cloud Services packaged. | Minimum: 64 GB |
Recommended: 16 GB | Distributed Cloud Services provides multiple network interface controller (NIC) support. You can use multiple NICs for a Customer Edge site. | Varies with peripheral connections (camera, etc.) | | Recommended: 128 GB |
As you plan for your hardware setup, consider the following information:
- The currently supported architecture is x86; Arm support is on the roadmap.
- Memory, storage, and CPU requirements vary based on application usage on the system/host.
- Data Plane Development Kit (DPDK) support is a mandatory requirement. Refer to Intel DPDK for more information.
- USB requirements for the host vary with the number of peripheral device connections.
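The planning considerations above can be condensed into a quick pre-flight check. The following is a minimal sketch for a standard Linux host; the thresholds (x86, 4 vCPUs, 14 GB RAM, 45 GB storage) come from the Prerequisites section, and everything else is illustrative:

```shell
# Pre-flight sketch: compare this host against the documented minimums.
# Raise min_disk_gb to 100 if you plan to deploy an App Stack Site.
min_vcpus=4
min_ram_gb=14
min_disk_gb=45

arch=$(uname -m)                 # x86_64 expected; Arm is not yet supported
vcpus=$(nproc)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

[ "$arch" = "x86_64" ]            || echo "WARN: unsupported architecture: $arch"
[ "$vcpus" -ge "$min_vcpus" ]     || echo "WARN: only $vcpus vCPUs (need $min_vcpus)"
[ "$ram_gb" -ge "$min_ram_gb" ]   || echo "WARN: only ${ram_gb} GB RAM (need ${min_ram_gb} GB)"
[ "${disk_gb:-0}" -ge "$min_disk_gb" ] || echo "WARN: only ${disk_gb} GB free (need ${min_disk_gb} GB)"
```

This only flags shortfalls; it cannot verify DPDK support on the NIC, which you must confirm against the Intel DPDK compatibility list.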
Supported Hardware
- All the hardware listed below has been tested and is supported by Distributed Cloud Services. Known caveats are listed in the respective sections.
- Distributed Cloud Services nodes can only be deployed on hardware with Intel-based Ethernet cards with DPDK support. See Intel DPDK.
- Refer to product-specific data sheets for more information.
- Using the minimum hardware requirements listed above, you may attempt to install a Distributed Cloud Services Node, but official support will be limited.
Note: The Distributed Cloud Services Node software is certified to run on the hardware specified in the following sections. If you want to certify Distributed Cloud Services Node software for your specific hardware setup, email sales@cloud.f5.com.
Edge Hardware
Vendor | Model | Processor | Memory | Networking | Storage | USB | HDMI | Graphics | Input Voltage |
---|---|---|---|---|---|---|---|---|---|
Intel | NUC7i7DNKE | 1.9 GHz Intel Core i7-8650U quad-core processor | 32 GB DDR4 SO-DIMM RAM 2400 MHz | 1x Intel 10/100/1000 Gigabit Ethernet | 1 TB SSD SATA III | 4x USB 3.0 | 2x HDMI 2.0 | Dual HDMI 2.0a, 4-lane eDP 1.4 | 12-24 VDC |
Intel | NUC8i3BEH | Intel® Core™ i3-8109U Processor (4M Cache, up to 3.60 GHz) | 32 GB DDR4-2400 1.2V SO-DIMM | 1x Intel 10/100/1000 Gigabit Ethernet | 1 TB M.2 SSD SATA | 5x USB 3.1 Gen 2 3x USB 2.0 | 1x HDMI 2.0a | Intel Iris™ Plus Graphics 655 or Intel UHD Graphics 620 | 12-19 V DC |
Fitlet2 | E3950 | Intel Atom™ Processor x7 Series E3950 1.6GHz to 2GHz | 1x SO-DIMM 204-pin DDR3L Non-ECC DDR3L-1866 (1.35V) Up to 16GB | 2x GbE LAN ports (RJ-45), LAN1: Intel I211 GbE controller, LAN2: Intel I211 GbE controller | 1x M.2 M-Key 2242/2260* on board *M.2 2280 optional on some facet cards | 2x USB 3.0 and 2x USB 2.0 | HDMI 1.4 3840x2160 @30Hz | Intel® HD Graphics 505 Dual display mode supported | Unregulated 7 – 20VDC* input |
Note: Fitlet2 interface naming is reversed. Eth2 = eth0 (WAN/Site Local Interface) and Eth1 = (LAN/Site Local Inside Interface).
Server Hardware
Vendor | Model | Processor | Memory | Networking | Storage | USB | RAID | HDD | Input Voltage |
---|---|---|---|---|---|---|---|---|---|
Kingstar | SYS-1029U-TN10RT | Intel Xeon | DDR4-2666 32GB x 12 (384GB+) | Intel XXV710 (10/25G) | SSD NVMe (1TB, 4TB) | ||||
Dell | PowerEdge R640 | Intel Xeon Gold 6230N(20 Core/2.3GHZ/27.5MB Cache/HT)x2 | 384GB:32GB*12 - DIMM | Intel X710 10G SFP + x2 Port + Intel i350 1G Base-T x2 Port Add-on: Intel X710 10G SFP+x2 Port PCIe NIC x1 or Intel XL710 40G QSFP+ x2 Port PCIe NIC x2 | 960GB (mixed use SSD/Dell AG Drive) x3 (6Gbps SATA/2.5inch/HotPlug) | 2 x USB 3.0 | PERC H740P SAS RAID controller | 960GB (mixed use SSD/Dell AG Drive) x3 (6Gbps SATA/2.5inch/HotPlug) | 1100W 48VDC |
Dell | PowerEdge R650xs | Intel Xeon Gold 6336Y (24Core/2.4GHz/36MB Cache/HT) | 384GB(12 x 32GB DDR4 RDIMM,3200MT/s, Dual Rank) | Intel E810-XXV Quad Port 10/25GbE SFP28 OCP NIC 3.0; | 960GB SSD SATA Read Intensive 6Gbps 512 2.5 inches Hot-Plug AG Drive, 1 DWPD x3 | 1 x iDRAC Direct (Micro-AB USB) 2 x USB 2.0 1 x USB 3.0 | PERC H755 SAS | 960GB SSD SATA Read Intensive 6Gbps 512 2.5 inches Hot-Plug AG Drive, 1 DWPD x3 | 800W, 100-240VAC/240VDC |
Dell | PowerEdge R660 | Intel Xeon Silver 4416+(20Core/2.0GHz/37.5MB Cache/HT) | 768GB(12 x 64GB DDR5 RDIMM,4800MT/s, Dual Rank) | Intel X710-T4L Quad Port 10GbE BASE-T, OCP NIC 3.0; Intel X710-T2L Dual Port 10GbE BASE-T Adaptor, Low Profile PCIe | 960GB SSD SATA Read Intensive 6Gbps 512 2.5 inches Hot-Plug AG Drive, 1 DWPD x4 | 1 x iDRAC Direct (Micro-AB USB) 2 x USB 2.0 1 x USB 3.0 | PERC H965i, PERC H755, PERC H755N, PERC H355, HBA355i, PERC H965e | 960GB SSD SATA Read Intensive 6Gbps 512 2.5 inches Hot-Plug AG Drive, 1 DWPD x4 | 1100W |
HP | ProLiant DL360 Gen10 8SFF NC | Intel Xeon Gold 6222V (20 Core, 1.8GHz, 27.5MB Cache) x 2 | 32GB x 12 (384GB) DIMM | 2x HPE Ethernet 10/25Gb 2-port SFP28 interfaces (Mellanox MCX4121A ConnectX-4 Lx), 1x HP Ethernet 1Gb 4-port 366T Adapter (Intel) | | 2 x USB 3.0 | PERC H740P SAS RAID controller | | 800W |
Distributed Cloud Services Hardware
Model | Processor | Memory | Networking | Storage |
---|---|---|---|---|
IGW5508 | Intel Atom® C3708 | 2x DDR4 ECC SODIMM 2133 Mhz, Max of 2x32 GB | Serial Bus: 1x RS232 or RS485 RS485: Up to 10 Mbps, 2-wire, half-duplex RS232: Up to 1 Mbps, 2-wire, full-duplex Modbus master & slave LAN: 4x 1000Base-T with PoE 802.3 af supported on each Wireless: Wi-Fi 11ac 2x2 MIMO Bluetooth 4.2 HS, BLE, ANT+ LTE Cat 4 (150 Mbps max DL / 50 Mbps max UL) coverage: worldwide (Supported Frequency Bands B1, B2, B3, B4, B5, B7, B8, B12, B13, B18, B19, B20, B25, B26, B28, B38, B39, B40 and B41) 3G fallback GNSS (GPS, GLONASS, BeiDou and Galileo) Field replaceable SIM | 1x M.2 2280 NVMe 1x M.2 2280 SATA |
IGW5504 | Intel Atom® C3538 | 2x DDR4 ECC SODIMM 2133 Mhz, Max of 2x32 GB | Serial Bus: 1x RS232 or RS485 RS485: Up to 10 Mbps, 2-wire, half-duplex RS232: Up to 1 Mbps, 2-wire, full-duplex Modbus master & slave LAN: 4x 1000Base-T with PoE 802.3 af supported on each Wireless: Wi-Fi 11ac 2x2 MIMO Bluetooth 4.2 HS, BLE, ANT+ LTE Cat 4 (150 Mbps max DL / 50 Mbps max UL) coverage: worldwide (Supported Frequency Bands B1, B2, B3, B4, B5, B7, B8, B12, B13, B18, B19, B20, B25, B26, B28, B38, B39, B40 and B41) 3G fallback GNSS (GPS, GLONASS, BeiDou and Galileo) Field replaceable SIM | 1x M.2 2280 SATA |
IGW5008 | Intel Atom® C3708 | 2x DDR4 ECC SODIMM 2133 Mhz, Max of 2x32 GB | Serial Bus: 1x RS232 or RS485 RS485: Up to 10 Mbps, 2-wire, half-duplex RS232: Up to 1 Mbps, 2-wire, full-duplex Modbus master & slave LAN: 4x 1000Base-T with PoE 802.3 af supported on each Wireless: Wi-Fi 11ac 2x2 MIMO Bluetooth 4.2 HS, BLE, ANT+ LTE Cat 4 (150 Mbps max DL / 50 Mbps max UL) coverage: worldwide (Supported Frequency Bands B1, B2, B3, B4, B5, B7, B8, B12, B13, B18, B19, B20, B25, B26, B28, B38, B39, B40 and B41) 3G fallback GNSS (GPS, GLONASS, BeiDou and Galileo) Field replaceable SIM | 1x M.2 2280 NVMe 1x M.2 2280 SATA |
IGW5004 | Intel Atom® C3538 | 2x DDR4 ECC SODIMM 2133 Mhz, Max of 2x32 GB | Serial Bus: 1x RS232 or RS485 RS485: Up to 10 Mbps, 2-wire, half-duplex RS232: Up to 1 Mbps, 2-wire, full-duplex Modbus master & slave LAN: 4x 1000Base-T with PoE 802.3 af supported on each Wireless: Wi-Fi 11ac 2x2 MIMO Bluetooth 4.2 HS, BLE, ANT+ LTE Cat 4 (150 Mbps max DL / 50 Mbps max UL) coverage: worldwide (Supported Frequency Bands B1, B2, B3, B4, B5, B7, B8, B12, B13, B18, B19, B20, B25, B26, B28, B38, B39, B40 and B41) 3G fallback GNSS (GPS, GLONASS, BeiDou and Galileo) Field replaceable SIM | 1x M.2 2280 SATA |
Create a Site Token
Create a site token or use an existing token. If you are configuring a multi-node site, use the same token for all nodes.
Step 1: Log into F5® Distributed Cloud Console (Console) and navigate to site tokens.
- Click Multi-Cloud Network Connect.
- Select Manage > Site Management > Site Tokens.
Step 2: Generate a new site token.
- Select Add site token to create a new token.
- In the Name field, enter the token name.
- In the Description field, enter a description for the token.
- Select Add site token.

Step 3: Note down the new token.
- Find the token previously created or choose an existing token from the list of tokens displayed.
- Select > to expand the token details in JSON format and note down the value of the uid field.

Prepare a Bootable USB
To create a bootable USB drive, you must flash the Distributed Cloud Services CE ISO image onto a USB drive. Depending on your operating system, you can use software (like Etcher) to quickly flash the USB drive with the ISO image.
Step 1: Download Etcher software based on your operating system.
-
Navigate to Etcher to download the installer file.
-
Follow the instructions to install Etcher.
Note: If you are running macOS X Catalina, you need to download the latest version of Etcher. If you do not and attempt to flash an ISO image, this terminal message appears: “balenaEtcher” can’t be opened because Apple cannot check it for malicious software.
Note: For more information on choosing an appropriate image for your certified hardware, see How to choose an image for your site deployment?.
Step 2: Download the Distributed Cloud Services Node ISO image file.
Navigate to Certified Hardware and KVM Images to download the file.
Note: A certified Distributed Cloud Services Node image software (.iso/.img) is packaged with all the required components to provision Distributed Cloud Services-based components.
Step 3: Flash the USB drive.
- Insert a USB drive into your computer.
- Open Etcher.
- Follow the instructions to begin the flash process.
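If you prefer the command line over Etcher on Linux, `dd` can write the ISO directly. The following is a hedged sketch: the filename `ce-node.iso` and the device path `/dev/sdX` are placeholders you must replace, and writing to the wrong device destroys its data.

```shell
# Sketch: flash a CE ISO to a USB device with dd instead of Etcher (Linux).
# Find the real USB device with lsblk before running anything destructive.
flash_usb() {
  iso=$1
  dev=$2
  sudo dd if="$iso" of="$dev" bs=4M status=progress  # raw block copy
  sync                                               # flush writes before removal
}

# Example (do NOT run until /dev/sdX is replaced with your USB device):
# flash_usb ce-node.iso /dev/sdX
```

Unlike Etcher, `dd` does not verify the written image, so re-read and checksum the device afterwards if you want that assurance.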
Configure the BIOS
Prior to installing the Distributed Cloud Services Node software, you need to configure your system’s BIOS menu. The BIOS provides the basic functions needed to boot your system and enables you to access specific hardware components.
Step 1: Invoke the BIOS menu.
To enter the BIOS menu, press a key or key combination (such as Delete or F2) immediately after turning on your computer.
Note: The key combination differs from manufacturer to manufacturer. Typically, the start screen on the computer displays a message, stating which key to press to enter the BIOS menu.

Step 2: Select boot device.
Within the BIOS setup menu, use the Boot tab to select which devices the system checks, and in which order, for a bootable operating system.
Note: The possible choices usually include the internal hard disks, the CD/DVD-ROM drive, and mass storage devices such as USB sticks or external hard disks. In most scenarios, the installation media is a USB drive flashed with the Distributed Cloud Services Node image.
- Select the USB drive to boot first from the boot menu options.

Most BIOS versions allow you to call up a boot menu on system startup in which you select from which device the computer starts for the current session. If this option is available, the BIOS usually displays a short message like “press F12 for boot menu” on system startup. The actual key used to select this menu varies from system to system. Commonly used keys are F12, F11, or F8. Choosing a device from this menu does not change the default boot order of the BIOS. In other words, you can start once from a USB drive while having configured the internal hard disk as the primary boot device.

Step 3: Check and fix potential issues.
If you have no PS/2-style keyboard, but only a USB model, you may need to enable legacy keyboard emulation in your BIOS menu to use your keyboard in the bootloader menu. Modern systems do not have this issue.
If your keyboard does not work in the bootloader menu, consult your system manual and look in the BIOS for “Legacy keyboard emulation” or “USB keyboard support” options.
Install the Distributed Cloud Services Node Software
After the boot order is configured to USB, a prompt loads with information to install the software.
- Use the keyboard arrows to select INSTALL.
- Press Enter. The installation begins and displays the status of the process.

Note: If no selection is made, the installation proceeds with the default values.
Note: If you install the Distributed Cloud Services Node software on hardware with CentOS, the node software may fail to install. If this occurs, perform the following steps:
- Check the disk name using the fdisk -l command.
- Overwrite the beginning of the affected disk using the dd command. This is an example command:
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=32
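The recovery steps above can be wrapped in a small helper. This is a sketch only; `/dev/nvme0n1` in the note is an example device, and the function takes the device as an argument so you can confirm the target with `fdisk` before wiping it:

```shell
# Sketch of the CentOS recovery steps: list the disk, then clear the first
# 32 MiB so stale partition metadata no longer interferes with the installer.
wipe_disk_header() {
  disk=$1
  sudo fdisk -l "$disk"                           # confirm this is the right disk
  sudo dd if=/dev/zero of="$disk" bs=1M count=32  # overwrite the disk header
}

# Example (destructive -- double-check the device path first):
# wipe_disk_header /dev/nvme0n1
```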
Post-Install Node Parameter Configuration
At any point, you can log in to the node via SSH using the admin user account and the Volterra123 password to configure parameters.
Note: If you did not log in previously, you will be prompted to update the default password for the admin user account. Follow the instructions to update the default password.
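From a workstation on the same network, the first login described above looks like the following. The IP address is a placeholder for the address your node configured or obtained via DHCP:

```shell
# Sketch: first SSH login to a freshly installed node.
# NODE_IP is a placeholder -- substitute your node's actual address.
NODE_IP=192.168.1.50

# Log in as admin (default password Volterra123); on first login you are
# prompted to set a new password. Commented out so nothing runs by accident:
# ssh admin@"$NODE_IP"
```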
Step 1: Log into the node using your credentials.
The login shell loads with different options to select for configuration.
- Press the Tab key to select parameters to configure.

Step 2: Verify the configuration.
- Select get-config.
- Confirm the settings are correct.
Step 3: Start network configuration.
You have the option to configure the network.
- Select configure-network.
- Follow the prompts to configure network settings.
- Optionally, enter Y for Do you want to configure wifi?.
Note: For a multi-node site, you cannot change the IP address of a node after it is registered. You must use fixed IP addresses or DHCP addresses with a fixed lease. IPSec tunneling is not supported when an HTTP proxy server is configured.
Step 4: Enter the SSID and password for your Wi-Fi network.
Configuring Wi-Fi is optional. If you want to apply a static network configuration, you can use this option.
Step 5: Configure the main options.
- Press the Tab key to select the configure option.
- Enter a cluster name.
- Enter the registration token.
- Enter a hostname. The option is set to master-0 by default.
Note: Ensure that hostnames are unique if you are installing nodes for a multi-node site.
- Enter the longitude and latitude information.
- Select kvm-voltmesh for the certified hardware.
Note: If you are applying a static network configuration, you must perform network configuration using the configure-network option before setting the other fields using the configure option. Also note that for multi-node sites, changing the assigned IP address after successful registration of a node is not supported.
Step 6: Confirm configuration.
Enter Y to confirm the configuration.
Step 7: Verify configuration status.
- Press the Tab key to select the health option.
- Verify your Wi-Fi configuration and registration status.

Note: You can select the factory-reset option to perform a configuration reset and repeat the registration process per the instructions below.

Note: If you use an NTP server, ensure that the server is reachable. Otherwise, leave the NTP server configuration empty so that the F5 Distributed Cloud NTP servers are used.
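Before pointing the node at an NTP server, you can confirm reachability from any Linux host. This is a sketch that assumes `ntpdate` may or may not be installed; `time.example.com` is a placeholder for your server:

```shell
# Sketch: check that a candidate NTP server answers before configuring it.
# time.example.com is a placeholder -- substitute your NTP server.
NTP_SERVER=time.example.com

if command -v ntpdate >/dev/null 2>&1; then
  # -q queries without setting the clock; timeout guards against hangs
  timeout 10 ntpdate -q "$NTP_SERVER" || echo "WARN: $NTP_SERVER did not respond"
else
  echo "ntpdate not installed; skipping check"
fi
```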
Register the Site
After the Distributed Cloud Services Node is installed, it must be registered as a site in Console.
Note: The USB allowlist is enabled by default. If you change a USB device, such as a keyboard, after registration, the device will not function.
Single-node Site Registration
Step 1: Navigate to the site registration page.
- Log into Console.
- Click Multi-Cloud Network Connect.
- Click Manage > Site Management > Registrations.
Step 2: Complete site registration.
- Under Pending Registrations, find your node name and then click the blue checkmark.
- In the form that appears, fill in all required fields marked with an asterisk (*).
- Enter a latitude value and a longitude value.
- Enter other configuration information, if needed.
- Click Save and Exit.
Step 3: Check Site status and health.
It may take a few minutes for the site registration information to update.
- Click Sites > Site List.
- Click on your site name. The Dashboard tab appears, along with many other tabs to inspect your site.
- Click the Site Status tab to verify the following:
  - The Update Status field has a Successful value in the F5 OS Status section.
  - The Update Status field has a Successful value in the F5 Software Status section.
  - The Tunnel status and Control Plane fields under the RE Connectivity section have up values.
Multi-node Site Registration
Step 1: Navigate to the site registration page.
- Log into Console.
- Click Multi-Cloud Network Connect.
- Click Manage > Site Management > Registrations.
Step 2: Accept the registration requests.
Registration requests are displayed in the Pending Registrations tab.
- Click Accept to accept the registration requests from the master-0, master-1, and master-2 nodes.
- Enter the same values for the following parameters for all the registration requests:
  - In the Cluster name field, enter a name for the cluster. Ensure that all master nodes have the same cluster name.
  - In the Cluster size field, enter 3. Ensure that all master nodes have the same cluster size.
- Fill in all mandatory fields marked with an asterisk (*).
Step 3: Check site status and health.
It may take a few minutes for the site health and connectivity score information to update.
- Click Sites > Site List.
- Click on your site name. The Dashboard tab appears, along with many other tabs to inspect your site.
- Click the Site Status tab to verify the following:
  - The Update Status field has a Successful value in the F5 OS Status section.
  - The Update Status field has a Successful value in the F5 Software Status section.
  - The Tunnel status and Control Plane fields under the RE Connectivity section have up values.
Access Site Local UI
After you create and register your site, you can access its local user interface (UI) to perform certain configuration and management functions. For more information, see Site Local UI Usage.