Merged
14 changes: 0 additions & 14 deletions testing/kuttl/README.md
@@ -44,20 +44,6 @@ There are two ways to run a single test in isolation:
- using an env var with the make target: `KUTTL_TEST='kuttl test --test <test-name>' make check-kuttl`
- using `kubectl kuttl --test` flag: `kubectl kuttl test testing/kuttl/e2e-generated --test <test-name>`

### Writing additional tests

To make it easier to read tests, we want to put our `assert.yaml`/`errors.yaml` files after the
files that create/update the objects for a step. To achieve this, infix an extra `-` between the
step number and the object/step name.

For example, if the `00` test step wants to create a cluster and then assert that the cluster is ready,
the files would be named

```yaml
00--cluster.yaml # note the extra `-` to ensure that it sorts above the following file
00-assert.yaml
```
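
The ordering claim above is easy to check with a byte-wise sort; this sketch is illustrative and not part of the repo:

```shell
# Byte-wise ordering (LC_ALL=C) puts "-" (0x2D) before "a" (0x61), so the
# create file sorts above the assert file.
printf '%s\n' 00-assert.yaml 00--cluster.yaml | LC_ALL=C sort
# prints 00--cluster.yaml first, then 00-assert.yaml
```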

### Generating tests

KUTTL is good at setting up K8s objects for testing, but does not have a native way to dynamically
2 changes: 1 addition & 1 deletion testing/kuttl/e2e/delete-namespace/README.md
@@ -6,6 +6,6 @@
* Check that nothing remains.

Note: KUTTL provides a `$NAMESPACE` var that can be used in scripts/commands,
but which cannot be used in object definition yamls (like `01--cluster.yaml`).
but which cannot be used in object definition yamls (like `01-cluster.yaml`).
Therefore, we use a fixed, non-random namespace that is defined in the makefile
and generated with `generate-kuttl`.
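
As an illustration of where `$NAMESPACE` is usable, a TestStep can reference it from a command. This is a sketch, not a file from this suite:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # $NAMESPACE expands here because this is a command, not an object definition
  - command: kubectl get pods --namespace $NAMESPACE
```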
4 changes: 2 additions & 2 deletions testing/kuttl/e2e/exporter-password-change/README.md
@@ -1,6 +1,6 @@
# Exporter Password Change

## 00--create-cluster:
## 00-create-cluster:
The TestStep will:

1) Apply the `files/inital-postgrescluster.yaml` file to create a cluster with monitoring enabled
@@ -13,7 +13,7 @@ The TestStep will:

This TestAssert will run a script in a loop until:
1) the instance pod has the `ContainersReady` condition with status `true`
2) the asserts from `00--create-cluster` are met.
2) the asserts from `00-create-cluster` are met.
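
The polling pattern such an assert script can use looks roughly like the sketch below; the function name is illustrative, and the real `kubectl` lookup appears only in a comment so the sketch runs standalone:

```shell
# Re-run a command until it prints "True" or retries are exhausted.
wait_for_true() {
  retries=$1; shift
  for _ in $(seq "$retries"); do
    [ "$("$@")" = "True" ] && return 0
    sleep 1
  done
  return 1
}

# The real script would poll something like:
#   kubectl -n "$NAMESPACE" get pod "$POD" \
#     -o jsonpath='{.status.conditions[?(@.type=="ContainersReady")].status}'
wait_for_true 3 echo "True" && echo "condition met"
```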

## 01-assert:

14 changes: 7 additions & 7 deletions testing/kuttl/e2e/major-upgrade-missing-image/README.md
@@ -6,31 +6,31 @@ PostgresCluster spec or via the RELATED_IMAGES environment variables.

### Basic PGUpgrade controller and CRD instance validation

* 01--valid-upgrade: create a valid PGUpgrade instance
* 01-valid-upgrade: create a valid PGUpgrade instance
* 01-assert: check that the PGUpgrade instance exists and has the expected status
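
A minimal PGUpgrade instance of the kind step 01 creates looks roughly like this; the name and version values are illustrative, not copied from the test files:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGUpgrade
metadata:
  name: major-upgrade
spec:
  postgresClusterName: major-upgrade  # the PostgresCluster to upgrade
  fromPostgresVersion: 11
  toPostgresVersion: 14
```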

### Verify new statuses for missing required container images

* 10--cluster: create the cluster with an unavailable image (i.e. Postgres 11)
* 10-cluster: create the cluster with an unavailable image (i.e. Postgres 11)
* 10-assert: check that the PGUpgrade instance has the expected reason: "PGClusterNotShutdown"
* 11-shutdown-cluster: set the spec.shutdown value to 'true' as required for upgrade
* 11-assert: check that the new reason is set, "PGClusterPrimaryNotIdentified"
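
The change applied by 11-shutdown-cluster amounts to a one-field spec edit, sketched here rather than copied from the test file:

```yaml
spec:
  shutdown: true  # required before the PGUpgrade can proceed
```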

### Update to an available Postgres version, start and upgrade PostgresCluster

* 12--start-and-update-version: update the Postgres version on both CRD instances and set 'shutdown' to false
* 12-start-and-update-version: update the Postgres version on both CRD instances and set 'shutdown' to false
* 12-assert: verify that the cluster is running and the PGUpgrade instance now has the new status info with reason: "PGClusterNotShutdown"
* 13--shutdown-cluster: set spec.shutdown to 'true'
* 13-shutdown-cluster: set spec.shutdown to 'true'
* 13-assert: check that the PGUpgrade instance has the expected reason: "PGClusterMissingRequiredAnnotation"
* 14--annotate-cluster: set the required annotation
* 14-annotate-cluster: set the required annotation
* 14-assert: verify that the upgrade succeeded and the new Postgres version shows in the cluster's status
* 15--start-cluster: set the new Postgres version and spec.shutdown to 'false'
* 15-start-cluster: set the new Postgres version and spec.shutdown to 'false'

### Verify upgraded PostgresCluster

* 15-assert: verify that the cluster is running
* 16-check-pgbackrest: check that the pgbackrest setup has successfully completed
* 17--check-version: check the version reported by PostgreSQL
* 17-check-version: check the version reported by PostgreSQL
* 17-assert: assert the Job from the previous step succeeded
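
The idea behind the 17 step can be sketched as a major-version comparison; `psql` is stubbed with a function so the sketch runs standalone, and the expected version is an assumption:

```shell
# Stand-in for: kubectl exec <primary-pod> -- psql -qtAc "SHOW server_version;"
psql() { echo "14.5"; }

actual=$(psql)
expected_major=14
case "$actual" in
  "$expected_major".*) echo "version OK: $actual" ;;
  *) echo "unexpected version: $actual" >&2; exit 1 ;;
esac
```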


16 changes: 8 additions & 8 deletions testing/kuttl/e2e/scaledown/readme.MD
@@ -8,24 +8,24 @@ have the expected number of pods.

### From two sets to one set

* 00--create-cluster: create the cluster with two instance sets, one replica each
* 00-create-cluster: create the cluster with two instance sets, one replica each
* 00-assert: check that the cluster exists with the expected status
* 01--update-cluster: update the cluster to remove one instance set
* 01-update-cluster: update the cluster to remove one instance set
* 01-assert: check that the cluster exists with the expected status
* 02--delete-cluster
* 02-delete-cluster
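
The shape of the spec that step 00 creates can be sketched as follows; the instance set names are illustrative:

```yaml
spec:
  instances:
  - name: instance1
    replicas: 1
  - name: instance2
    replicas: 1
```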

### From one set with multiple replicas to one set with one replica

* 10--create-cluster: create the cluster with one instance set with two replicas
* 10-create-cluster: create the cluster with one instance set with two replicas
* 10-assert: check that the cluster exists with the expected status
* 11-annotate: set the roles as labels on the pods
* 12--update-cluster: update the cluster to remove one replica
* 12-update-cluster: update the cluster to remove one replica
* 12-assert: check that the cluster exists with the expected status, and that the surviving `master` pod was the `master` before the scaledown
* 13--delete-cluster: delete the cluster
* 13-delete-cluster: delete the cluster
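
The check in 12-assert can be pictured as comparing the pod that held the `master` role before the scaledown (recorded by step 11) with the pod holding it after; the lookups are stubbed so the sketch runs standalone, and the pod names are invented:

```shell
# Stand-ins for two label lookups, e.g.:
#   kubectl -n "$NAMESPACE" get pods -l <role-label>=master -o name
master_before_scaledown="pod/hippo-instance1-abcd-0"
master_after_scaledown="pod/hippo-instance1-abcd-0"

if [ "$master_before_scaledown" = "$master_after_scaledown" ]; then
  echo "master identity preserved"
else
  echo "master changed during scaledown" >&2
  exit 1
fi
```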

### From two sets with variable replicas to two sets with one replica each

* 20--create-cluster: create the cluster with two instance sets, with two and one replica
* 20-create-cluster: create the cluster with two instance sets, with two and one replica
* 20-assert: check that the cluster exists with the expected status
* 21--update-cluster: update the cluster to reduce the two-replica instance to one-replica
* 21-update-cluster: update the cluster to reduce the two-replica instance to one-replica
* 21-assert: check that the cluster exists with the expected status