How to deploy nested Virtuozzo Infrastructure clusters

This guide describes how to deploy nested Virtuozzo Infrastructure clusters for testing or development purposes. You can deploy such clusters on top of a Virtuozzo Infrastructure cluster that has nested virtualization enabled.

You can find the original guide at https://github.com/virtuozzo/vhideploy.

Prerequisites

1. Deploy a Virtuozzo Infrastructure cluster.

2. Create the compute cluster.

Preparing the physical Virtuozzo Infrastructure cluster

1. Check if nested virtualization is enabled on your physical Virtuozzo Infrastructure server:

  • [For Intel-based systems] Connect to your server via SSH and run:

    # cat /sys/module/kvm_intel/parameters/nested
    
  • [For AMD-based systems] Connect to your server via SSH and run:

    # cat /sys/module/kvm_amd/parameters/nested
    

If the command output is either Y or 1, nested virtualization is enabled; if the output is either N or 0, nested virtualization is disabled.

Warning! Nested virtualization is disabled if the processor has known issues with nested support. In this case, enabling nested virtualization is strongly discouraged.
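The check above can be wrapped in a small helper, sketched here for convenience. The function name nested_status is an illustration, not part of the product tooling:

```shell
# nested_status is a hypothetical helper that interprets the value of the
# nested kernel parameter (Y/1 = enabled, N/0 = disabled).
nested_status() {
  case "$1" in
    Y|1) echo enabled ;;
    N|0) echo disabled ;;
    *)   echo unknown ;;
  esac
}

# Example on an Intel-based host; on AMD, read kvm_amd instead.
# Guarded so the sketch is harmless on hosts without the module loaded.
if [ -r /sys/module/kvm_intel/parameters/nested ]; then
  nested_status "$(cat /sys/module/kvm_intel/parameters/nested)"
fi
```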

2. Enable nested virtualization for virtual machines by adding the required flag to your CPU model. For example:

  • [For Intel-based systems] Add the vmx flag:

    # vinfra service compute set --cpu-model Broadwell-noTSX-IBRS --cpu-features vmx
    

    You can now create a VM to verify that nested virtualization is enabled for it. On the node that hosts the VM, run:

    # virsh dumpxml <vm_uuid> | grep vmx
          <feature policy='require' name='vmx'/>
    
  • [For AMD-based systems] Add the svm flag:

    # vinfra service compute set --cpu-model EPYC-IBPB --cpu-features svm
    

    You can now create a VM to verify that nested virtualization is enabled for it. On the node that hosts the VM, run:

    # virsh dumpxml <vm_uuid> | grep svm
          <feature policy='require' name='svm'/>
    

3. Configure physical and virtual compute networks. Ensure that the private network does not have a default gateway.

4. Create compute flavors named vhimaster (16 vCPUs, 42 GB of RAM) and vhislave (16 vCPUs, 32 GB of RAM), which are used by default. Alternatively, you can create custom flavors with at least 8 vCPUs and 24 GB of RAM.
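One way to create the two default flavors is via the OpenStack CLI (connected as described later in this guide). This is a sketch: the openstack flavor create syntax is standard OpenStack, and the RAM values are converted to MiB, as the client expects:

```shell
# RAM sizes in MiB, matching the defaults above (42 GiB and 32 GiB)
MASTER_RAM_MB=$((42 * 1024))  # vhimaster
SLAVE_RAM_MB=$((32 * 1024))   # vhislave

# Guarded so the sketch is harmless where the openstack client is absent;
# add --insecure if your cluster uses a self-signed certificate.
if command -v openstack >/dev/null 2>&1; then
  openstack flavor create vhimaster --vcpus 16 --ram "$MASTER_RAM_MB"
  openstack flavor create vhislave --vcpus 16 --ram "$SLAVE_RAM_MB"
fi
```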

5. Upload the latest Virtuozzo Infrastructure QCOW2 image to your cluster in the admin panel or command-line interface:

5.1. Connect to your physical management node via SSH.

5.2. Download the latest Virtuozzo Infrastructure QCOW2 image:

# wget https://virtuozzo.s3.amazonaws.com/vzlinux-iso-hci-latest.qcow2

5.3. Upload this image to the compute cluster:

# vinfra service compute image create vhi-latest --disk-format qcow2 --container-format bare --file vzlinux-iso-hci-latest.qcow2 --public --wait

Connecting to the OpenStack CLI remotely

1. Install the OpenStack command-line client on your local machine. For example, for macOS, run:

# brew install openstackclient

2. Download the deployment scripts to your local machine:

# git clone https://github.com/virtuozzo/vhideploy.git

3. Create your OpenStack source file or edit project.sh as follows:

  • OS_PROJECT_DOMAIN_NAME is the name of the domain in which to deploy the stack
  • OS_USER_DOMAIN_NAME is the name of the user domain
  • OS_PROJECT_NAME is the name of the project in which to deploy the stack
  • OS_USERNAME is the user name
  • OS_PASSWORD is the user password
  • OS_AUTH_URL is the OpenStack endpoint URL (the endpoint must be published and reachable)
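A filled-in project.sh might look like the following. All values below are placeholders for illustration, and the Keystone port 5000 in the endpoint URL is an assumption about your deployment:

```shell
# project.sh — sample OpenStack credentials file (placeholder values)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD='MySecretPassword'
# Quoted so the angle brackets are not treated as shell redirections
export OS_AUTH_URL='https://<cluster_address>:5000/v3'
```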

4. Load the source file:

# source project.sh

5. Check that you can connect to the OpenStack API:

# openstack --insecure server list
Use the --insecure option if your cluster uses a self-signed certificate.

Deploying a nested Virtuozzo Infrastructure cluster

1. Read about OpenStack Heat.

2. Connect to the OpenStack CLI from your local machine.

3. Deploy the Heat stack by using the following command:

  • To create a Virtuozzo Infrastructure cluster with compute and high availability enabled (recommended configuration):

    # openstack --insecure stack create <stack_name> -t vip.yaml --parameter image="vhi-latest" --parameter stack_type="hacompute" --parameter private_network="private" --parameter slave_count="2" --parameter compute_addons="k8saas,lbaas" --parameter cluster_password="<password>"
    
  • To create a Virtuozzo Infrastructure cluster with compute (minimum configuration):

    # openstack --insecure stack create <stack_name> -t vip.yaml --parameter image="vhi-latest" --parameter stack_type="compute" --parameter private_network="private" --parameter slave_count="2" --parameter cluster_password="<password>"
    

    Where:

  • <stack_name> is the name for your OpenStack Heat stack

  • image is the name of the source image in the QCOW2 format

  • private_network is the name of the virtual compute network (the virtual network must be connected to the physical compute network via a virtual router with SNAT)

  • public_network is the name of the physical network (this network must have DHCP enabled and DNS configured, the default name is public)

  • slave_count is the number of cluster nodes in addition to the management nodes (for the HA configuration, the minimum value must be 2; the default value is 4)

  • stack_type is the Virtuozzo Infrastructure deployment mode:

    • compute deploys a cluster with the storage and compute roles
    • hacompute deploys a cluster with the storage and compute roles, and with high availability for management nodes
  • master_flavor is the name of the flavor to use for management nodes (the default flavor is vhimaster)

  • slave_flavor is the name of the flavor to use for other cluster nodes (the default flavor is vhislave)

  • storage_policy is the name of the storage policy to use (the default policy is default)

  • compute_addons specifies add-on services to be automatically installed along with the compute cluster

  • cluster_password is the password of the admin user (the default password is Virtuozzo1)

    The stack deployment takes at least 10 minutes.

4. Check the stack status:

# openstack --insecure stack list
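If you prefer to wait from the command line, the deployment can be polled with a loop like the one below. This is a sketch: mystack stands in for your <stack_name>, and -c stack_status -f value is standard openstack client output formatting:

```shell
STACK=mystack  # hypothetical stack name; replace with your <stack_name>

# Guarded so the sketch is harmless where the openstack client is absent.
if command -v openstack >/dev/null 2>&1; then
  # Poll every 30 seconds until the stack leaves the IN_PROGRESS state
  while openstack --insecure stack show "$STACK" -c stack_status -f value |
        grep -q IN_PROGRESS; do
    sleep 30
  done
  # Print the final status, e.g. CREATE_COMPLETE or CREATE_FAILED
  openstack --insecure stack show "$STACK" -c stack_status -f value
fi
```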

5. Once the deployment is complete, access the nested cluster at https://<mn_node_ip>:8888, where <mn_node_ip> is the public IP address of its management node, and log in with the password you provided. Check the status of the compute cluster and other services.

6. Reconfigure the public network:

6.1. In the admin panel, go to Compute > Network.

6.2. Delete the network public.

6.3. Create a new network with the following parameters:

  • Type: Physical
  • Name: public
  • Infrastructure network: Public
  • Untagged
  • Subnet: IPv4 with the configured CIDR, gateway, and DNS
  • Access: Full to All projects

Enjoy!

Next steps

1. Connect to the OpenStack CLI.

2. [Optional] Configure the OpenStack endpoint.