How to deploy nested Virtuozzo Infrastructure clusters
This guide describes how to deploy nested Virtuozzo Infrastructure clusters for testing or development purposes. You can deploy such clusters on top of a Virtuozzo Infrastructure cluster with enabled nested virtualization.
You can find the original guide at https://github.com/virtuozzo/vhideploy.
Prerequisites
1. Deploy a Virtuozzo Infrastructure cluster.
2. Create the compute cluster.
Preparing the physical Virtuozzo Infrastructure cluster
1. Check if nested virtualization is enabled on your physical Virtuozzo Infrastructure server:
[For Intel-based systems] Connect to your server via SSH and run:
# cat /sys/module/kvm_intel/parameters/nested
[For AMD-based systems] Connect to your server via SSH and run:
# cat /sys/module/kvm_amd/parameters/nested
If the command output is either Y or 1, nested virtualization is enabled; if the output is either N or 0, nested virtualization is disabled.
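The check above can be wrapped in a small helper that classifies the parameter value. This is a hypothetical convenience function, not part of the product CLI; the parameter file path is passed as an argument:

```shell
# check_nested: print whether nested virtualization is enabled,
# given the path to the kvm module parameter file
# (e.g., /sys/module/kvm_intel/parameters/nested or
#  /sys/module/kvm_amd/parameters/nested).
check_nested() {
    case "$(cat "$1" 2>/dev/null)" in
        Y|1) echo "enabled" ;;
        N|0) echo "disabled" ;;
        *)   echo "unknown" ;;
    esac
}
# Example: check_nested /sys/module/kvm_intel/parameters/nested
```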
2. Enable nested virtualization for virtual machines by adding the required flag to your CPU model. For example:
[For Intel-based systems] Add the vmx flag:
# vinfra service compute set --cpu-model Broadwell-noTSX-IBRS --cpu-features vmx
You can now create a VM to verify that nested virtualization is enabled for it. On the node that hosts the VM, run:
# virsh dumpxml <vm_uuid> | grep vmx
<feature policy='require' name='vmx'/>
[For AMD-based systems] Add the svm flag:
# vinfra service compute set --cpu-model EPYC-IBPB --cpu-features svm
You can now create a VM to verify that nested virtualization is enabled for it. On the node that hosts the VM, run:
# virsh dumpxml <vm_uuid> | grep svm
<feature policy='require' name='svm'/>
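If you want to script the verification step above, the grep check can be factored into a hypothetical helper that reads the domain XML from stdin and tests for the required CPU feature:

```shell
# requires_feature: succeed (exit 0) if the libvirt domain XML on stdin
# requires the given CPU feature, such as vmx or svm.
requires_feature() {
    grep -q "<feature policy='require' name='$1'/>"
}
# Usage against a live VM (not run here):
#   virsh dumpxml <vm_uuid> | requires_feature vmx && echo "vmx exposed to guest"
```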
3. Configure physical and virtual compute networks. Ensure that the private network does not have a default gateway.
4. Create compute flavors named vhimaster (16 vCPUs, 42 GB of RAM) and vhislave (16 vCPUs, 32 GB of RAM); these flavors are used by default. Alternatively, you can create custom flavors with at least 8 vCPUs and 24 GB of RAM.
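One way to create these flavors is with the OpenStack CLI, assuming you have an authenticated session against the physical cluster (the commands are shown as a sketch, not run here). Note that `openstack flavor create --ram` expects MiB, so the GiB sizes are converted first:

```shell
# gb_to_mib: convert GiB to MiB, as `openstack flavor create --ram` expects MiB.
gb_to_mib() { echo $(( $1 * 1024 )); }

# Create the two default flavors (requires an authenticated OpenStack CLI
# session; sketch only):
#   openstack flavor create --vcpus 16 --ram "$(gb_to_mib 42)" vhimaster
#   openstack flavor create --vcpus 16 --ram "$(gb_to_mib 32)" vhislave
```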
5. Upload the latest Virtuozzo Infrastructure QCOW2 image to your cluster in the admin panel or command-line interface:
5.1. Connect to your physical management node via SSH.
5.2. Download the latest Virtuozzo Infrastructure QCOW2 image:
| |
5.3. Upload this image to the compute cluster:
| |
Connecting to the OpenStack CLI remotely
1. Install the OpenStack command-line client on your local machine. For example, for macOS, run:
| |
2. Download the deployment scripts to your local machine:
| |
3. Create your OpenStack source file or edit project.sh as follows:
- OS_PROJECT_DOMAIN_NAME is the name of the domain where the stack will be deployed
- OS_USER_DOMAIN_NAME is the name of the user domain
- OS_PROJECT_NAME is the name of the project where the stack will be deployed
- OS_USERNAME is the user name
- OS_PASSWORD is the user password
- OS_AUTH_URL is the OpenStack endpoint URL (the endpoint must be published and available)
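For reference, a minimal project.sh might look like this; all values below are placeholders and must be replaced with your own:

```shell
# Example project.sh -- every value here is a placeholder for illustration.
export OS_PROJECT_DOMAIN_NAME="Default"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_NAME="myproject"
export OS_USERNAME="admin"
export OS_PASSWORD="mypassword"
export OS_AUTH_URL="https://203.0.113.10:5000/v3"
# Load it in your shell with: source project.sh
```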
4. Load the source file:
| |
5. Check that you can connect to the OpenStack API:
| |
Deploying a nested Virtuozzo Infrastructure cluster
1. Read about OpenStack Heat.
2. Connect to the OpenStack CLI from your local machine.
3. Deploy the Heat stack by using the following command:
To create a Virtuozzo Infrastructure cluster with compute and high availability enabled (recommended configuration):
# openstack --insecure stack create <stack_name> -t vip.yaml --parameter image="vhi-latest" --parameter stack_type="hacompute" --parameter private_network="private" --parameter slave_count="2" --parameter compute_addons="k8saas,lbaas" --parameter cluster_password="<password>"
To create a Virtuozzo Infrastructure cluster with compute (minimum configuration):
# openstack --insecure stack create <stack_name> -t vip.yaml --parameter image="vhi-latest" --parameter stack_type="compute" --parameter private_network="private" --parameter slave_count="2" --parameter cluster_password="<password>"
Where:
- <stack_name> is the name for your OpenStack Heat stack
- image is the name of the source image in the QCOW2 format
- private_network is the name of the virtual compute network (the virtual network must be connected to the physical compute network via a virtual router with SNAT)
- public_network is the name of the physical network (this network must have DHCP enabled and DNS configured; the default name is public)
- slave_count is the number of cluster nodes in addition to the management nodes (for the HA configuration, the minimum value is 2; the default value is 4)
- stack_type is the Virtuozzo Infrastructure deployment mode:
  - compute deploys a cluster with the storage and compute roles
  - hacompute deploys a cluster with the storage and compute roles, and with high availability for management nodes
- master_flavor is the name of the flavor to use for management nodes (the default flavor is vhimaster)
- slave_flavor is the name of the flavor to use for other cluster nodes (the default flavor is vhislave)
- storage_policy is the name of the storage policy to use (the default policy is default)
- compute_addons specifies add-on services to be automatically installed along with the compute cluster
- cluster_password is the password of the admin user (the default password is Virtuozzo1)
The stack deployment takes at least 10 minutes.
4. Check the stack status:
| |
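Because deployment takes at least 10 minutes, you may want to poll until the stack reaches a terminal state. The classification of Heat status strings can be factored into a small helper; the polling loop itself is a sketch that assumes an authenticated OpenStack CLI session:

```shell
# is_terminal: succeed (exit 0) if the given Heat stack_status string
# is a terminal state, i.e., the stack is no longer being created.
is_terminal() {
    case "$1" in
        CREATE_COMPLETE|CREATE_FAILED|ROLLBACK_COMPLETE|ROLLBACK_FAILED) return 0 ;;
        *) return 1 ;;
    esac
}
# Polling sketch against a live cluster (not run here):
#   while ! is_terminal "$(openstack --insecure stack show <stack_name> -c stack_status -f value)"; do
#       sleep 30
#   done
```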
5. Once the deployment is complete, access the nested cluster's admin panel at https://<mn_node_ip>:8888, where <mn_node_ip> is the public IP address of its management node, and log in with the password you specified. Check the status of the compute cluster and other services.
6. Reconfigure the public network:
6.1. In the admin panel, go to Compute → Network.
6.2. Delete the network public.
6.3. Create a new network with the following parameters:
- Type: Physical
- Name: public
- Infrastructure network: Public
- Untagged
- Subnet: IPv4 with the configured CIDR, gateway, and DNS
- Access: Full to All projects
Enjoy!
Next steps
1. Connect to the OpenStack CLI.
2. [Optional] Configure the OpenStack endpoint.