How to deploy Osie on Kubernetes

This guide describes how to deploy Osie on top of a Kubernetes cluster based on Virtuozzo Infrastructure.

About Osie

Osie is a modern control panel and billing system for OpenStack that provides all the tools for effortless cloud business management. Osie is officially supported for Virtuozzo Infrastructure and allows selling cloud services with the pay-as-you-go (PAYG) pricing model.

Prerequisites

1. Read about identity management to understand the need for an OpenID provider (IdP). By default, this chart installs Keycloak as the OpenID provider for user authentication.

2. Deploy a Virtuozzo Infrastructure cluster.

3. Create the compute cluster with the Kubernetes and load balancing services.

4. Configure a storage policy named standard for boot volumes on Kubernetes master nodes. Ensure that the selected policy is available for all projects where you are planning to deploy Kubernetes.

5. Create a Kubernetes cluster.

6. Install cert-manager with a ClusterIssuer called letsencrypt.
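For example, cert-manager can be installed with its official Helm chart, and a ClusterIssuer named letsencrypt can then be created. This is a minimal sketch: the ACME email is a placeholder, and the HTTP-01 solver assumes the nginx ingress class that is configured later in step 10.

```shell
# helm repo add jetstack https://charts.jetstack.io
# helm repo update
# helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
# cat > cluster-issuer.yaml <<\EOT
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOT
# kubectl create -f cluster-issuer.yaml
```

The HTTP-01 solver only starts issuing certificates once the ingress-nginx controller from step 10 is in place.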

7. Create a storage class named standard:

# cat > storage-class.yaml <<\EOT
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: cinder.csi.openstack.org
parameters:
  type: standard
reclaimPolicy: Retain
allowVolumeExpansion: true
EOT

Apply the configuration file:

# kubectl create -f storage-class.yaml
Note: The storage policy standard must be available in your project.
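You can verify that the storage class exists and uses the Cinder CSI provisioner before proceeding:

```shell
# kubectl get storageclass standard
```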

8. Configure a domain where Osie will be installed:

  • cloud.<your-domain>.<tld> configured in your DNS pointing to your ingress IP
  • an auth.<your-domain>.<tld> subdomain for Keycloak, if you don’t already have an identity provider (IdP)
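You can check that the DNS records resolve before proceeding, for example:

```shell
# dig +short cloud.example.com
# dig +short auth.example.com
```

Both names should return the public IP address that you will assign to the ingress-nginx controller in step 10.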

9. Install Helm on your local machine.
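For example, on Linux you can install Helm with the official installer script:

```shell
# curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# helm version
```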

10. Create an ingress-nginx controller that will be exposed via a public IP address:

10.1. Create the nginx-values.yaml configuration file:

# cat > nginx-values.yaml <<\EOT
controller:
  publishService:
    enabled: true
  service:
    annotations:
      service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
      loadbalancer.openstack.org/keep-floatingip: "true"
    loadBalancerIP: "xx.xx.xx.xx"
EOT

In loadBalancerIP, specify a public IP address for the controller. The keep-floatingip annotation keeps this IP address assigned; otherwise, it will be randomly reassigned from the subnet pool.

10.2. Create the ingress-nginx controller by using Helm:

# helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f nginx-values.yaml
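Once the controller is up, you can check that the load balancer service received the IP address you specified (the service name below assumes the chart's default naming):

```shell
# kubectl -n ingress-nginx get service ingress-nginx-controller
```

The EXTERNAL-IP column should show the address you set in loadBalancerIP.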

Installing Osie

1. Add the Helm repository:

# helm repo add osie https://helm.osie.io
# helm repo update

2. Configure the values.yaml file:

  • To install with Keycloak included:

    # cat > values.yaml <<\EOT
    global:
      ingress:
        enabled: true
        hostname: "cloud.example.com"
        ingressClassName: "nginx"
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt
          # Required by Keycloak when using Nginx ingress
          nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
        tls: true
    keycloak:
      ingress:
        hostname: auth.example.com
    smtp:
      password: # smtp password 
      user: # smtp user 
      starttls: true
      auth: true
      port: 587
      host: # smtp server 
      from: support@mydomain.com
      fromDisplayName: My Cloud Company
    EOT
    

    This configuration installs Keycloak as well using the Keycloak chart from Bitnami.

  • To install without Keycloak:

    Note: If you have deployed your own identity provider, you have to manually specify the OAuth2/OpenID configuration.
    # cat > values.yaml <<\EOT
    global:
      ingress:
        enabled: true
        hostname: "cloud.example.com"
        ingressClassName: "nginx"
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt
        tls: true
    # OpenID configuration for the Osie client portal
    ui:
      oauth2:
        clientId: client-id
        issuerUri: https://auth.example.com/path/to/openid
    # (Optional) if you use different OpenID for the Administrators
    # otherwise the ones from the client portal will be used
    admin:
      oauth2:
        clientId: admin-client-id
        issuerUri: https://auth.example.com/path/to/openid   
    
    keycloak:
      enabled: false
    smtp:
      ...
    EOT
    

    For the complete list of configurable variables, check the values.yaml file of the Chart.
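You can print the chart's default values without installing it and use them as a reference when editing your own values.yaml:

```shell
# helm show values osie/osie
```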

3. If you have a highly available Kubernetes cluster (with three master nodes), then you can deploy the databases and the services with replication and high availability. In this case, adjust the replicaCount and architecture as follows:

ui:
  replicaCount: 3
admin:
  replicaCount: 3
api:
  replicaCount: 3
keycloak:
  replicaCount: 3
  postgresql:
    architecture: replication
mongodb:
  architecture: replicaset
  replicaCount: 3
redis:
  architecture: replication

4. Install Osie by using the created values.yaml file:

# helm --namespace osie upgrade --install --create-namespace osie osie/osie -f values.yaml

5. Check the Kubernetes pods in the namespace:

# kubectl -n osie get pods
osie-admin-5dc5b4ff59-hbzmz     1/1     Running   0          22h
osie-api-0                      1/1     Running   0          22h
osie-keycloak-0                 1/1     Running   0          28h
osie-mongodb-5fc58bbc78-mc6xw   1/1     Running   0          28h
osie-postgresql-0               1/1     Running   0          28h
osie-rabbitmq-0                 1/1     Running   0          28h
osie-redis-master-0             1/1     Running   0          28h
osie-ui-566c759d8-srskk         1/1     Running   0          22h

6. Check the ingress hostnames:

# kubectl -n osie get ingress
osie-admin      nginx   cloud.example.com        12.34.56.78   80, 443   28h
osie-api        nginx   cloud.example.com        12.34.56.78   80, 443   28h
osie-keycloak   nginx   auth.example.com         12.34.56.78   80, 443   28h
osie-ui         nginx   cloud.example.com        12.34.56.78   80, 443   28h

Now, Osie is installed on your Kubernetes cluster and ready to be used.
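You can quickly verify that the portal responds and that the Let's Encrypt certificate has been issued, for example:

```shell
# curl -I https://cloud.example.com
# kubectl -n osie get certificates
```

If curl reports a TLS error, the certificate may still be in the process of being issued; the second command shows the state of the cert-manager Certificate resources.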

Performing post-installation steps

Saving the bcrypt password

Osie encrypts some sensitive information stored in the database, such as passwords and access keys. It uses a bcrypt symmetric key configured as an environment variable (OSIE_ENCRYPTION_DEFAULT_KEY). Since the encryption is symmetric, the same key must be used to decrypt the data. Therefore, it is very important that the key is not lost; otherwise, some data in the database cannot be decrypted.

The Helm chart generates a random bcrypt password key that is saved inside the <release-name>-bcrypt secret. It is recommended to also save the key somewhere externally, so that you can reuse it in the event of disaster recovery.

To retrieve the bcrypt password and save it somewhere externally, run:

# kubectl -n osie get secret osie-bcrypt -o json | jq -r '.data."bcrypt-password"' | base64 -d
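During disaster recovery, you could re-create the secret from the saved key before installing the chart, so that a newly generated value is not used. This is only a sketch: whether the chart reuses a pre-existing secret depends on its templates, so verify this behavior against the chart documentation first.

```shell
# kubectl create namespace osie
# kubectl -n osie create secret generic osie-bcrypt --from-literal=bcrypt-password='<saved-key>'
```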

Logging in to Osie admin panel

The admin panel should be accessible at https://cloud.example.com/osie_admin.

The default admin username is osie_admin.

  • If you have jq installed, retrieve the admin password from the Kubernetes secret:

    # kubectl -n osie get secret osie-keycloak -o json | jq -r '.data."admin-password"' | base64 -d
    
  • Otherwise, retrieve the full secret and manually base64-decode the data.admin-password key:

    # kubectl -n osie get secret osie-keycloak -o yaml 
    

Upgrading or reconfiguring

You can use the helm upgrade command to upgrade to newer versions of Osie or to restart the components if you make changes to values.yaml.

If you used the chart to install Keycloak as well, it is recommended to prevent keycloakConfigCli from running again, since it is only needed during the first installation.

1. Prevent keycloakConfigCli from running again by making the following changes in values.yaml:

keycloak:
  keycloakConfigCli:
    enabled: false

2. Update the helm repository to get the latest chart version:

# helm repo update

3. Run the helm upgrade command:

# helm --namespace osie upgrade osie osie/osie -f values.yaml

Automated backups

Configure automated backups with Velero.
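For example, once Velero is installed and a backup storage location is configured, you can create a daily schedule for the osie namespace. This is a sketch that assumes a working Velero deployment; backing up persistent volumes additionally requires a configured snapshot provider.

```shell
# velero schedule create osie-daily --schedule="0 2 * * *" --include-namespaces osie
```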

Setting up the Osie notifier container

The Osie notifier container (osie-notifier) allows Osie to get notifications from Virtuozzo Infrastructure by using RabbitMQ.

1. Create and configure a virtual machine for osie-notifier:

1.1. In Virtuozzo Infrastructure, create a virtual machine with the following parameters:

  • Image: Ubuntu 22.04
  • vCPU: 1
  • RAM: 2
  • Storage: 20 GB
  • The network must have access to a private infrastructure network and be accessible from Osie.

1.2. Connect to the VM and install Docker inside it:

# sudo apt update
# sudo apt upgrade -y  
# sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# sudo apt update
# apt-cache policy docker-ce
# sudo apt install docker-ce -y
# sudo systemctl status docker
# sudo usermod -aG docker ${USER}
# exit

1.3. Log in to your VM again and install Docker Compose:

# mkdir -p ~/.docker/cli-plugins/
# curl -SL https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
# chmod +x ~/.docker/cli-plugins/docker-compose
# docker compose version

1.4. If you run this VM on Virtuozzo Cloud, you need to perform additional steps:

1.4.1. Configure MTU for Docker:

Important: Make sure that the Docker MTU settings match those of your VM network. Otherwise, some applications running inside Docker containers might not work.

For example, you have a VM with the following network configuration:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1413 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:fd:75:1a brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 10.0.0.64/24 metric 100 brd 10.0.0.255 scope global dynamic ens4
       valid_lft 24231sec preferred_lft 24231sec
    inet6 fe80::f816:3eff:fefd:751a/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:44:a4:40:fe brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:44ff:fea4:40fe/64 scope link 
       valid_lft forever preferred_lft forever

Docker will have issues with this VM, as your ens4 and docker0 network interfaces have different MTU settings and Docker sets MTU to 1500 bytes by default. To change this, create a file with a new MTU setting, and then restart Docker:

# vi /etc/docker/daemon.json
{"mtu": 1413}
# systemctl restart docker.service
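You can verify that new containers pick up the MTU setting, for example:

```shell
# docker run --rm alpine ip link show eth0
```

The reported mtu should now match the VM interface (1413 in this example).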

2. Configure Virtuozzo Infrastructure to access RabbitMQ. To do this, you need to open port 5672 on all of your management nodes. RabbitMQ is available on a network with the traffic type Internal management assigned.

2.1. In the Virtuozzo Infrastructure admin panel, go to the Infrastructure > Networks screen and click Create traffic type. Specify the following parameters:

  • Name: rabbitmq
  • Port: 5672
  • Access rules: keep the default settings

2.2. Assign the rabbitmq traffic type to the network that has the Internal management traffic type assigned.
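From the osie-notifier VM, you can then verify that the RabbitMQ port is reachable on a management node, for example:

```shell
# nc -zv <management_node_ip> 5672
```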

2.3. Connect to the Virtuozzo Infrastructure master node and get the connection parameters for RabbitMQ:

# grep rabb /etc/kolla/nova-conductor/nova.conf
transport_url = rabbit://openstack:<rabbit_password>@rabbitmq.svc.vstoragedomain.:5672

Save <rabbit_password> as you will need to add it to the osie-notifier configuration later.

3. Configure and run osie-notifier:

3.1. Configure the osie-notifier container.

3.2. Modify the osie-notifier application.yaml configuration file by changing <rabbitmq_ip_1>, <rabbitmq_ip_2>, and <rabbitmq_ip_3> to the IP addresses of your management nodes in Virtuozzo Infrastructure.

An example configuration file:

osie:
  dsn:
  - https://<osie_server_url>/api-v1/os-notification/652d0dd8327e272e0e45e871/RegionOne
spring:
  rabbitmq:
    username: openstack
    password: <rabbit_password>
    addresses: <rabbitmq_ip_1>:5672,<rabbitmq_ip_2>:5672,<rabbitmq_ip_3>:5672
notifications:
  queue: osie-notifier
openstack:
  topic: notifications.*
  services:
  # adapt this depending on the enabled services
  - exchange: nova
  - exchange: neutron
  - exchange: openstack
  - exchange: heat
  - exchange: cinder
server:
  port: 7476

Enjoy!