How to deploy Virtuozzo Infrastructure on OVHcloud Bare Metal
This guide describes how to deploy Virtuozzo Infrastructure on an OVHcloud Bare Metal instance.
Network port considerations
For deploying Virtuozzo Infrastructure, it is recommended to have two high-speed interfaces (25 Gbps) for storage traffic, bonded with LACP. This provides both redundancy and load sharing.

In this example, two ports with 25 Gbps are assigned inside the vRack, and two ports with 1 Gbps are assigned to the public network. In this case, Virtuozzo Infrastructure will use the vRack-based ports only.
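As a sketch, the two 25 Gbps vRack interfaces can be bonded with LACP using NetworkManager. The interface names (eno1, eno2) and the connection names below are assumptions; adjust them to the actual hardware:

```shell
# Create an LACP (802.3ad) bond from the two 25 Gbps vRack interfaces.
# Interface and connection names are placeholders, not OVHcloud defaults.
nmcli connection add type bond ifname bond0 con-name bond0 \
    bond.options "mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer3+4"

# Enslave both physical ports to the bond.
nmcli connection add type ethernet ifname eno1 con-name bond0-port1 master bond0
nmcli connection add type ethernet ifname eno2 con-name bond0-port2 master bond0

# Bring the bond up and verify that both ports joined the LACP aggregate.
nmcli connection up bond0
cat /proc/net/bonding/bond0
```

The layer3+4 hash policy spreads storage flows across both links; with a single peer and a layer2 policy, all traffic could end up on one link.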
vRack allocation
The OVHcloud vRack (virtual rack) allows multiple servers, IP subnets, cloud instances, and other services to be grouped into one logical unit. VLAN-based private communication between bare metal servers is not possible without the vRack, so Virtuozzo Infrastructure needs to have it assigned.

In this example, the vRack includes five servers in a single project, and there are two IP subnets that can be assigned to the vRack.
Server location recommendations
OVHcloud Bare Metal instances can be provisioned in multiple OVHcloud data centers. These data centers are located at a considerable distance from each other. Though they are connected via fiber links, node-to-node latency between them is around 400 microseconds, while it is typically less than 20 microseconds inside a single zone. Thus, it is recommended to place nodes in a single data center, if possible.
Such an infrastructure also allows deploying data center-level redundancy schemes. In this case, data redundancy is achieved by placing replicas (or erasure-coded fragments) in different data centers for each write operation, but the higher inter-data-center latency decreases storage performance.
The server location is displayed in the OVHcloud Control Panel.

Let’s take a look at the latency values measured in the Paris data center from multiple zones:

Latency inside the same zone is less than 20 microseconds, while it varies significantly between zones, as expected.
These values are gathered from servers located in the 3-AZ offers. Keep in mind that latency values in other OVHcloud data centers will vary. It is worth noting that the observed 400-microsecond latency between distant data centers is quite a good (low) value, compared to other offerings in the market. However, to reach the lowest possible latency values, we recommend using servers in the same data center location.
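You can measure node-to-node latency yourself before committing to a layout. The peer address below (192.0.2.10) is a placeholder; the awk line extracts the average RTT from the ping summary:

```shell
# Measure latency to a neighbouring node inside the vRack; replace the
# placeholder peer address with a real one before running:
#   ping -c 100 -q 192.0.2.10
#
# ping ends with a summary line such as the sample below.
sample='rtt min/avg/max/mdev = 0.015/0.019/0.042/0.004 ms'

# Split on "/" and spaces; the 8th field is the average RTT in milliseconds.
echo "$sample" | awk -F'[/ ]' '{print $8}'
# prints 0.019
```

An average well under 0.1 ms suggests the peers share a zone; values around 0.4 ms indicate inter-data-center links.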
vRack construction and VLAN settings
OVHcloud allows servers inside the vRack to communicate with each other by using VLANs. Ports inside the vRack are tagged and allowed for all VLANs. Servers can be configured with arbitrary VLAN numbers; there is no need to configure any VLANs in the OVHcloud Control Panel once ports are inside the vRack. IP addresses used between servers can be assigned freely. Tagged VLANs are isolated entities enclosed inside the vRack and have no outside communication. Only one VLAN, VLAN 0, is untagged and can communicate with the outside world. IP subnets assigned to the vRack belong to VLAN 0. It is not possible to assign a public IP subnet to a tagged VLAN in OVHcloud.
Such a structure does not meet the Virtuozzo Infrastructure requirements. In this case, we recommend building the public network from a public IP subnet placed in its own tagged VLAN.
To place public IP subnets inside tagged VLANs, the setup requires an additional component. Instead of being assigned to the vRack, the public IP subnets need to be assigned to an instance that will be used as a router. This instance, with ports inside the vRack, will act as a simple router that forwards the IP subnets into the corresponding VLANs inside the vRack.
Here is an example of the required architecture:

The router instance has two external ports and ports assigned to the vRack. It routes the public subnets into the VLANs inside the vRack. All public IP subnets are assigned to the router instance only.
The router instance can be one of the bare metal servers inside the vRack or a virtual machine that performs routing, with interfaces inside the vRack assigned to appropriate VLANs. The first option is more expensive but provides the best routing performance.
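As a minimal sketch, a Linux-based router instance can forward a public subnet into a vRack VLAN as follows. The subnet 203.0.113.0/24, the private range 10.10.0.0/24, the external interface eth0, and the VLAN interfaces bond0.1001/bond0.1002 are all placeholders:

```shell
# Enable IPv4 forwarding on the router instance
# (persist this in /etc/sysctl.d/ to survive reboots).
sysctl -w net.ipv4.ip_forward=1

# The public subnet assigned to the router (placeholder: 203.0.113.0/24) is
# routed onto a VLAN interface inside the vRack (placeholder: bond0.1002).
# The router holds the subnet's gateway address on that interface:
ip addr add 203.0.113.1/24 dev bond0.1002

# If the API VLAN uses a private range (placeholder: 10.10.0.0/24 on
# bond0.1001), its traffic leaves through the external interface via NAT:
iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
```

With this in place, hosts in the public VLAN use 203.0.113.1 as their default gateway, and the router forwards their traffic to and from the internet.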
Network planning for Virtuozzo Infrastructure
In our sample architecture, we use four VLANs configured on top of an LACP bond inside the vRack:
VLAN A is used for the API; it can also use a reserved (private) IP space. In that case, however, the routing instance needs to perform network address translation (NAT) for the node addresses and, if high availability is enabled, for the virtual IP address of the management node.
VLAN B is the public IP address space used by consumers of the cloud platform formed by Virtuozzo Infrastructure. There can be multiple such IP subnets, each of which will be assigned to the router instance and will be routed to another VLAN inside the vRack. In Virtuozzo Infrastructure, such IP subnets will appear as physical networks with the assigned “VM public” traffic type.
VLAN C is used for internal communication between servers.
VLAN D is dedicated to storage.
VLANs C and D are private networks without external connectivity where one can use any RFC 1918 reserved IP addresses.
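The VLANs above can be created as tagged subinterfaces on top of the LACP bond. The VLAN IDs (1001, 1003, 1004), the bond name, and the IP addresses below are assumptions; pick your own:

```shell
# Create tagged VLAN subinterfaces on top of the bond (IDs and addresses
# are examples). VLAN A: API, VLAN C: internal, VLAN D: storage.
nmcli connection add type vlan ifname bond0.1001 con-name vlan-api \
    vlan.parent bond0 vlan.id 1001 ipv4.method manual ipv4.addresses 10.10.0.11/24
nmcli connection add type vlan ifname bond0.1003 con-name vlan-internal \
    vlan.parent bond0 vlan.id 1003 ipv4.method manual ipv4.addresses 10.30.0.11/24
nmcli connection add type vlan ifname bond0.1004 con-name vlan-storage \
    vlan.parent bond0 vlan.id 1004 ipv4.method manual ipv4.addresses 10.40.0.11/24
```

VLAN B carries the public subnet forwarded by the router instance, so its addresses are assigned inside Virtuozzo Infrastructure as a physical network with the "VM public" traffic type rather than on the nodes directly.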
Installation considerations for Virtuozzo Infrastructure
OVHcloud Bare Metal instances have an IPMI console available in the OVHcloud Control Panel. The distribution ISO image for Virtuozzo Infrastructure can be mapped as a virtual CD-ROM in the OVHcloud Control Panel. To boot from the distribution ISO image, enter the server's BIOS setup and change the boot option to the UEFI-based virtual CD-ROM. Since the remote file mapping is performed over the internet, booting and running the installer is quite slow.
Virtuozzo Infrastructure has a GUI-based installer. After loading Anaconda, the installer switches to the graphical interface. The web-based console panel may fail to initialize the X Window System that runs the graphical interface. To prevent this, increase the timeout for initializing the X Window System. Do the following:
1. Stop the installer on the first GRUB screen by pressing E.

2. Add inst.xtimeout=480 to the kernel parameters to increase the X Window System timeout to 480 seconds.

3. Press CTRL+X to continue the boot process with the provided parameters, and then wait for the X Window System to run the graphical interface.
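For reference, the edited kernel line in the GRUB editor might look as follows. The paths and ISO label are examples and will differ for your image; only the appended inst.xtimeout=480 parameter matters:

```shell
# Kernel line in the GRUB editor before the change (label is an example):
#   linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=VZ quiet
# The same line after appending the timeout parameter:
#   linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=VZ quiet inst.xtimeout=480
```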
Enjoy!