Magnum Cluster API Driver
=========================

A new driver for Magnum has been written as an alternative to Heat (which is being phased out due to its maintenance burden). It uses the Kubernetes `Cluster API project <https://cluster-api.sigs.k8s.io>`_ to manage the OpenStack infrastructure required by Magnum clusters. The idea behind the Cluster API (CAPI) project is that infrastructure is managed using Kubernetes-style declarative APIs, which in practice means a set of Custom Resource Definitions (CRDs) and Kubernetes `operators <https://kubernetes.io/docs/concepts/extend-kubernetes/operator/>`_ that translate instances of those custom Kubernetes resources into the required OpenStack API resources. These same operators also handle resource reconciliation (i.e. when a Kubernetes custom resource is modified, the operator makes the required OpenStack API calls to reflect those changes).
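
As a purely illustrative sketch (the resource kinds below are the standard Cluster API and OpenStack provider CRDs; none of the names are taken from a real deployment), the custom resources that these operators reconcile can be listed on a management cluster with ``kubectl``:

.. code-block:: console

   # Illustrative only: list the Cluster API custom resources that the
   # operators reconcile into OpenStack API resources.
   kubectl get clusters.cluster.x-k8s.io --all-namespaces
   kubectl get machinedeployments.cluster.x-k8s.io --all-namespaces
   # OpenStack-specific infrastructure resources from the OpenStack provider
   kubectl get openstackclusters.infrastructure.cluster.x-k8s.io --all-namespaces
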
The new CAPI driver and the old Heat driver are compatible and can both be active on the same deployment; the decision of which driver is used for a given cluster depends on certain parameters inferred from the Magnum cluster template. For the new driver, these parameters are ``{'server_type': 'vm', 'os': 'ubuntu', 'coe': 'kubernetes'}``. Drivers can be enabled and disabled using the ``disabled_drivers`` parameter in the ``[drivers]`` section of ``magnum.conf``.
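
As a hedged illustration (image and flavor names are placeholders), Magnum typically infers the ``os`` parameter from the image's ``os_distro`` property, so a template that selects the CAPI driver combines an Ubuntu image with ``--server-type vm`` and ``--coe kubernetes``:

.. code-block:: console

   # Placeholder image/flavor names; adjust for your cloud.
   openstack image set ubuntu-focal-kube-v1.28 --property os_distro=ubuntu
   openstack coe cluster template create k8s-capi-example \
     --coe kubernetes \
     --server-type vm \
     --image ubuntu-focal-kube-v1.28 \
     --external-network external \
     --flavor m1.medium \
     --master-flavor m1.medium
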
Deployment Prerequisites
========================

The Cluster API architecture relies on a CAPI management cluster to run the aforementioned Kubernetes operators, which interact directly with the OpenStack APIs. The two requirements for this management cluster are:

1. It must be capable of reaching the public OpenStack APIs.
2. It must be reachable from the control plane nodes (either controllers or dedicated network hosts) on which the Magnum containers are running, so that Magnum can reach the IP listed in the management cluster's ``kubeconfig`` file.
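
A minimal sketch of checking both requirements, assuming placeholder endpoints and paths:

.. code-block:: console

   # 1. From a node in the management cluster: the public OpenStack APIs
   #    must be reachable (the Keystone URL here is a placeholder).
   curl -fsS https://openstack.example.com:5000/v3 > /dev/null && echo "OpenStack APIs reachable"

   # 2. From a control plane node running the Magnum containers: the API
   #    server address in the management cluster's kubeconfig must respond.
   kubectl --kubeconfig /path/to/management-cluster-kubeconfig get --raw /version
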
For testing purposes, a simple `k3s <https://k3s.io>`_ cluster will suffice. For production deployments, the recommended solution is to set up a separate HA management cluster in an isolated OpenStack project by leveraging the CAPI management cluster configuration used in `Azimuth <https://github.com/stackhpc/azimuth>`_. This approach provides a resilient HA management cluster with a standard set of component versions that are regularly tested in Azimuth CI.
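
For the testing case, a minimal sketch of standing up a single-node k3s management cluster (the install one-liner follows the upstream k3s quick start; verify it against the k3s documentation):

.. code-block:: console

   # Single-node k3s install; not suitable for production.
   curl -sfL https://get.k3s.io | sh -

   # k3s writes its kubeconfig here. By default it points at 127.0.0.1, so a
   # copy used by Magnum must be edited to use an address reachable from the
   # Magnum containers.
   sudo cat /etc/rancher/k3s/k3s.yaml
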
The general process for setting up this CAPI management cluster using Azimuth tooling is described here, but the `Azimuth operator documentation <https://stackhpc.github.io/azimuth-config/#deploying-azimuth>`_ should be consulted for additional information if required.

The diagram below shows the general architecture of the CAPI management cluster provisioned using Azimuth tooling. It consists of a seed VM (a Terraform-provisioned OpenStack VM) running a small k3s cluster (which is itself a CAPI management cluster, but only for the purpose of managing the HA cluster), as well as an HA management cluster made up of (by default) 3 control plane VMs and 3 worker VMs. This HA cluster runs the various Kubernetes components responsible for managing Magnum tenant clusters.

The setup and configuration of a CAPI management cluster using Azimuth tooling follows a pattern that should be familiar to Kayobe operators. There is an 'upstream' `azimuth-config <https://github.com/stackhpc/azimuth-config>`_ repository which contains recommended defaults for various configuration options (equivalent to stackhpc-kayobe-config), and each client site maintains an independent copy of this repository containing site-specific configuration. Together, these upstream and site-specific configuration repositories set or override Ansible variables for the `azimuth-ops <https://github.com/stackhpc/ansible-collection-azimuth-ops>`_ Ansible collection, which contains the playbooks required to deploy or update a CAPI management cluster (or a full Azimuth deployment).

To deploy a CAPI management cluster for use with Magnum, first create a copy of the upstream Azimuth config repository in the client's GitHub/GitLab. To do so, follow the instructions in the `initial repository setup <https://stackhpc.github.io/azimuth-config/repository/#initial-repository-setup>`_ section of the Azimuth operator docs. The site-specific repository should then be encrypted following `these instructions <https://stackhpc.github.io/azimuth-config/repository/secrets/>`_ to avoid leaking any secrets (such as cloud credentials) that will be added to the configuration later on.

Next, rather than copying the ``example`` environment as recommended in the Azimuth docs, copy the ``capi-mgmt-example`` environment and give it a suitable site-specific name:
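
A minimal sketch of this step, assuming the upstream azimuth-config layout with environments under ``environments/`` (the repository URL and site name are placeholders):

.. code-block:: console

   git clone git@git.example.com:<your-org>/azimuth-config.git
   cd azimuth-config
   cp -r environments/capi-mgmt-example environments/capi-mgmt-<site-name>
   git add environments/capi-mgmt-<site-name>
   git commit -m "Add capi-mgmt-<site-name> environment"
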

The general running order of the provisioning playbook is the following:

- Install the required components on the HA cluster to manage Magnum user clusters
Once the seed VM has been provisioned, it can be accessed via SSH by running ``./bin/seed-ssh`` from the root of the azimuth-config repository. Within the seed VM, the k3s cluster and the HA cluster can both be accessed using the pre-installed ``kubectl`` and ``helm`` command line tools. Both of these tools will target the k3s cluster by default; however, the ``kubeconfig`` file for the HA cluster can be found in the seed's home directory (named e.g. ``kubeconfig-capi-mgmt-<site-specific-name>.yaml``).
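
For example (the kubeconfig file name follows the pattern described above):

.. code-block:: console

   # From the root of the azimuth-config repository
   ./bin/seed-ssh

   # On the seed VM: kubectl and helm target the k3s cluster by default
   kubectl get nodes
   helm list --all-namespaces

   # Target the HA cluster by pointing at its kubeconfig instead
   kubectl --kubeconfig ~/kubeconfig-capi-mgmt-<site-specific-name>.yaml get nodes
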
*Note* - The provision playbook is responsible for copying the HA ``kubeconfig`` to this location *after* the HA cluster is up and running. If you need to access the HA cluster while it is still deploying, the ``kubeconfig`` file is stored as a Kubernetes secret on the k3s cluster.
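
A hedged sketch of retrieving it early; the secret name and namespace below follow the usual Cluster API convention (``<cluster-name>-kubeconfig`` with a ``value`` key) and are assumptions rather than values taken from a real deployment:

.. code-block:: console

   # On the seed VM, against the k3s cluster
   kubectl get secrets --all-namespaces | grep kubeconfig
   kubectl get secret capi-mgmt-<site-specific-name>-kubeconfig \
     --namespace default \
     --output jsonpath='{.data.value}' | base64 -d > kubeconfig-ha.yaml
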
It is possible to reconfigure or upgrade the management cluster after the initial deployment by simply re-running the ``provision_capi_mgmt`` playbook. However, it is preferable for most day-2 operations (i.e. reconfigures and upgrades) to be carried out via a CD pipeline. See `these Azimuth docs <https://stackhpc.github.io/azimuth-config/deployment/automation/>`_ for more information.
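
A sketch of a manual re-run from the site-specific configuration repository; the activation script, requirements file and collection namespace are assumptions based on the upstream azimuth-config workflow, so verify the exact commands against the Azimuth operator documentation:

.. code-block:: console

   source ./bin/activate capi-mgmt-<site-name>
   ansible-galaxy install -f -r requirements.yml
   ansible-playbook stackhpc.azimuth_ops.provision_capi_mgmt
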
Kayobe Config
==============
To configure the Magnum service with the Cluster API driver enabled, first ensure that your kayobe-config branch is up to date with |current_release_git_branch_name|.

Next, copy the CAPI management cluster's kubeconfig file into your stackhpc-kayobe-config environment (e.g. ``<your-skc-environment>/kolla/config/magnum/kubeconfig``). This file must be encrypted with Ansible Vault.
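
For example (paths assume the standard kayobe-config layout under ``etc/kayobe/environments/``; the vault password file is a placeholder):

.. code-block:: console

   mkdir -p etc/kayobe/environments/<your-skc-environment>/kolla/config/magnum
   cp kubeconfig-capi-mgmt-<site-specific-name>.yaml \
      etc/kayobe/environments/<your-skc-environment>/kolla/config/magnum/kubeconfig
   ansible-vault encrypt \
      --vault-password-file ~/vault.password \
      etc/kayobe/environments/<your-skc-environment>/kolla/config/magnum/kubeconfig
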
The following config should also be set in your stackhpc-kayobe-config environment:
To apply the configuration, run ``kayobe overcloud service reconfigure -kt magnum``.
Magnum Cluster Templates
========================

The clusters deployed by the Cluster API driver use the Ubuntu Kubernetes images built in the `azimuth-images <https://github.com/stackhpc/azimuth-images>`_ repository, together with Helm charts from `capi-helm-charts <https://github.com/stackhpc/capi-helm-charts>`_ that define clusters based on these images. Between them, these two repositories have CI jobs that regularly build and test images and Helm charts for the latest Kubernetes versions. It is therefore important to update the cluster templates on each cloud regularly to take advantage of these new releases.
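
A hedged sketch of rotating templates when a new image is released (image name, file and template flags are placeholders; existing clusters should be left on their original template):

.. code-block:: console

   # Upload the newly built image and tag it so Magnum infers os=ubuntu
   openstack image create ubuntu-jammy-kube-v1.29 \
     --file ubuntu-jammy-kube-v1.29.qcow2 \
     --disk-format qcow2 \
     --container-format bare \
     --property os_distro=ubuntu

   # Create a new, versioned template alongside the existing ones
   openstack coe cluster template create kubernetes-v1-29 \
     --coe kubernetes \
     --server-type vm \
     --image ubuntu-jammy-kube-v1.29 \
     --external-network external \
     --flavor m1.medium \
     --master-flavor m1.medium
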
Magnum templates should be defined within an existing client-specific `openstack-config <https://github.com/stackhpc/openstack-config>`_ repository.
0 commit comments