
Commit 7a7aad3

feat: added support to install the required binaries jq and kubectl if they do not exist. This can be disabled using the `install_required_binaries` boolean. NOTE: public access is required to pull the binaries from the internet. The binaries will be placed in /tmp. (#867)

1 parent d9499e3 commit 7a7aad3

15 files changed: +159 -87 lines changed
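
For context, a minimal illustrative sketch of how a consumer of the module could opt out of the new behavior is shown below. The module source address, version, and every input other than `install_required_binaries` are assumptions and placeholders, not part of this commit:

module "ocp_base" {
  source  = "terraform-ibm-modules/base-ocp-vpc/ibm" # assumed registry source, verify before use
  version = "X.Y.Z"                                  # placeholder version

  # ...other required cluster inputs (name, VPC, subnets, etc.) omitted...

  # New input added by this commit. Defaults to true; set it to false when the
  # runtime already provides jq and kubectl, or when public internet access is unavailable.
  install_required_binaries = false
}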

README.md

Lines changed: 8 additions & 5 deletions
@@ -15,11 +15,12 @@ Optionally, the module supports advanced security group management for the worke
 
 ### Before you begin
 
-- Ensure that you have an up-to-date version of the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started).
-- Ensure that you have an up-to-date version of the [IBM Cloud Kubernetes service CLI](https://cloud.ibm.com/docs/containers?topic=containers-kubernetes-service-cli).
-- Ensure that you have an up-to-date version of the [IBM Cloud VPC Infrastructure service CLI](https://cloud.ibm.com/docs/vpc?topic=vpc-vpc-reference). Only required if providing additional security groups with the `var.additional_lb_security_group_ids`.
-- Ensure that you have an up-to-date version of the [jq](https://jqlang.github.io/jq).
-- Ensure that you have an up-to-date version of the [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
+- Ensure that you have an up-to-date version of [curl](https://curl.se/docs/manpage.html).
+- Ensure that you have an up-to-date version of [tar](https://www.gnu.org/software/tar/).
+- [OPTIONAL] Ensure that you have an up-to-date version of [jq](https://jqlang.github.io/jq).
+- [OPTIONAL] Ensure that you have an up-to-date version of [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
+
+By default, the module automatically downloads the required dependencies if they are not already installed. You can disable this behavior by setting `install_required_binaries` to `false`. When enabled, the module fetches the dependencies from their official online sources (requires public internet access).
 
 <!-- Below content is automatically populated via pre-commit hook -->
 <!-- BEGIN OVERVIEW HOOK -->
@@ -323,6 +324,7 @@ Optionally, you need the following permissions to attach Access Management tags
 | [kubernetes_config_map_v1_data.set_autoscaling](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map_v1_data) | resource |
 | [null_resource.config_map_status](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
 | [null_resource.confirm_network_healthy](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
+| [null_resource.install_required_binaries](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
 | [null_resource.ocp_console_management](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
 | [time_sleep.wait_for_auth_policy](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
 | [ibm_container_addons.existing_addons](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_addons) | data source |
@@ -359,6 +361,7 @@ Optionally, you need the following permissions to attach Access Management tags
 | <a name="input_existing_secrets_manager_instance_crn"></a> [existing\_secrets\_manager\_instance\_crn](#input\_existing\_secrets\_manager\_instance\_crn) | CRN of the Secrets Manager instance where Ingress certificate secrets are stored. If 'enable\_secrets\_manager\_integration' is set to true then this value is required. | `string` | `null` | no |
 | <a name="input_force_delete_storage"></a> [force\_delete\_storage](#input\_force\_delete\_storage) | Flag indicating whether or not to delete attached storage when destroying the cluster - Default: false | `bool` | `false` | no |
 | <a name="input_ignore_worker_pool_size_changes"></a> [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform managing worker count | `bool` | `false` | no |
+| <a name="input_install_required_binaries"></a> [install\_required\_binaries](#input\_install\_required\_binaries) | When set to true, a script will run to check if `kubectl` and `jq` exist on the runtime and, if not, attempt to download them from the public internet and install them to /tmp. Set to false to skip running this script. | `bool` | `true` | no |
 | <a name="input_kms_config"></a> [kms\_config](#input\_kms\_config) | Use to attach a KMS instance to the cluster. If account\_id is not provided, defaults to the account in use. | <pre>object({<br/> crk_id = string<br/> instance_id = string<br/> private_endpoint = optional(bool, true) # defaults to true<br/> account_id = optional(string) # To attach KMS instance from another account<br/> wait_for_apply = optional(bool, true) # defaults to true so terraform will wait until the KMS is applied to the master, ready and deployed<br/> })</pre> | `null` | no |
 | <a name="input_manage_all_addons"></a> [manage\_all\_addons](#input\_manage\_all\_addons) | Instructs Terraform to manage all cluster addons, even if addons were installed outside of the module. If set to 'true' this module destroys any addons that were installed by other sources. | `bool` | `false` | no |
 | <a name="input_number_of_lbs"></a> [number\_of\_lbs](#input\_number\_of\_lbs) | The number of LBs to associated the `additional_lb_security_group_names` security group with. | `number` | `1` | no |

main.tf

Lines changed: 32 additions & 7 deletions
@@ -49,6 +49,8 @@ locals {
 
   # for versions older than 4.15, this value must be null, or provider gives error
   disable_outbound_traffic_protection = startswith(local.ocp_version, "4.14") ? null : var.disable_outbound_traffic_protection
+
+  binaries_path = "/tmp"
 }
 
 # Local block to verify validations for OCP AI Addon.
@@ -101,6 +103,20 @@ locals {
   default_wp_validation = local.rhcos_check ? true : tobool("If RHCOS is used with this cluster, the default worker pool should be created with RHCOS.")
 }
 
+resource "null_resource" "install_required_binaries" {
+  count = var.install_required_binaries && (var.verify_worker_network_readiness || var.enable_ocp_console != null || lookup(var.addons, "cluster-autoscaler", null) != null) ? 1 : 0
+  triggers = {
+    verify_worker_network_readiness = var.verify_worker_network_readiness
+    cluster_autoscaler = lookup(var.addons, "cluster-autoscaler", null) != null
+    enable_ocp_console = var.enable_ocp_console
+  }
+  provisioner "local-exec" {
+    # Using the script from the kube-audit module to avoid code duplication.
+    command = "${path.module}/modules/kube-audit/scripts/install-binaries.sh ${local.binaries_path}"
+    interpreter = ["/bin/bash", "-c"]
+  }
+}
+
 # Lookup the current default kube version
 data "ibm_container_cluster_versions" "cluster_versions" {
   resource_group_id = var.resource_group_id
@@ -478,10 +494,14 @@ resource "null_resource" "confirm_network_healthy" {
   # Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
   # depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
   # 'ibm_container_vpc_cluster' completes, so hence need to add explicit depends on against 'ibm_container_vpc_cluster' here.
-  depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]
+  depends_on = [null_resource.install_required_binaries, ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]
+
+  triggers = {
+    verify_worker_network_readiness = var.verify_worker_network_readiness
+  }
 
   provisioner "local-exec" {
-    command = "${path.module}/scripts/confirm_network_healthy.sh"
+    command = "${path.module}/scripts/confirm_network_healthy.sh ${local.binaries_path}"
     interpreter = ["/bin/bash", "-c"]
     environment = {
       KUBECONFIG = data.ibm_container_cluster_config.cluster_config[0].config_file_path
@@ -494,9 +514,12 @@
 ##############################################################################
 resource "null_resource" "ocp_console_management" {
   count = var.enable_ocp_console != null ? 1 : 0
-  depends_on = [null_resource.confirm_network_healthy]
+  depends_on = [null_resource.install_required_binaries, null_resource.confirm_network_healthy]
+  triggers = {
+    enable_ocp_console = var.enable_ocp_console
+  }
   provisioner "local-exec" {
-    command = "${path.module}/scripts/enable_disable_ocp_console.sh"
+    command = "${path.module}/scripts/enable_disable_ocp_console.sh ${local.binaries_path}"
     interpreter = ["/bin/bash", "-c"]
     environment = {
       KUBECONFIG = data.ibm_container_cluster_config.cluster_config[0].config_file_path
@@ -568,10 +591,13 @@ locals {
 
 resource "null_resource" "config_map_status" {
   count = lookup(var.addons, "cluster-autoscaler", null) != null ? 1 : 0
-  depends_on = [ibm_container_addons.addons]
+  depends_on = [null_resource.install_required_binaries, ibm_container_addons.addons]
 
+  triggers = {
+    cluster_autoscaler = lookup(var.addons, "cluster-autoscaler", null) != null
+  }
   provisioner "local-exec" {
-    command = "${path.module}/scripts/get_config_map_status.sh"
+    command = "${path.module}/scripts/get_config_map_status.sh ${local.binaries_path}"
     interpreter = ["/bin/bash", "-c"]
     environment = {
       KUBECONFIG = data.ibm_container_cluster_config.cluster_config[0].config_file_path
@@ -759,7 +785,6 @@ resource "time_sleep" "wait_for_auth_policy" {
   create_duration = "30s"
 }
 
-
 resource "ibm_container_ingress_instance" "instance" {
   count = var.enable_secrets_manager_integration ? 1 : 0
   depends_on = [time_sleep.wait_for_auth_policy]

modules/kube-audit/README.md

Lines changed: 2 additions & 0 deletions
@@ -70,6 +70,7 @@ No modules.
 | Name | Type |
 |------|------|
 | [helm_release.kube_audit](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
+| [null_resource.install_required_binaries](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
 | [null_resource.set_audit_log_policy](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
 | [null_resource.set_audit_webhook](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
 | [time_sleep.wait_for_kube_audit](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
@@ -89,6 +90,7 @@ No modules.
 | <a name="input_cluster_id"></a> [cluster\_id](#input\_cluster\_id) | The ID of the cluster to deploy the log collection service in. | `string` | n/a | yes |
 | <a name="input_cluster_resource_group_id"></a> [cluster\_resource\_group\_id](#input\_cluster\_resource\_group\_id) | The resource group ID of the cluster. | `string` | n/a | yes |
 | <a name="input_ibmcloud_api_key"></a> [ibmcloud\_api\_key](#input\_ibmcloud\_api\_key) | The IBM Cloud api key to generate an IAM token. | `string` | n/a | yes |
+| <a name="input_install_required_binaries"></a> [install\_required\_binaries](#input\_install\_required\_binaries) | When set to true, a script will run to check if `kubectl` and `jq` exist on the runtime and, if not, attempt to download them from the public internet and install them to /tmp. Set to false to skip running this script. | `bool` | `true` | no |
 | <a name="input_region"></a> [region](#input\_region) | The IBM Cloud region where the cluster is provisioned. | `string` | n/a | yes |
 | <a name="input_use_private_endpoint"></a> [use\_private\_endpoint](#input\_use\_private\_endpoint) | Set this to true to force all api calls to use the IBM Cloud private endpoints. | `bool` | `false` | no |
 | <a name="input_wait_till"></a> [wait\_till](#input\_wait\_till) | To avoid long wait times when you run your Terraform code, you can specify the stage when you want Terraform to mark the cluster resource creation as completed. Depending on what stage you choose, the cluster creation might not be fully completed and continues to run in the background. However, your Terraform code can continue to run without waiting for the cluster to be fully created. Supported args are `MasterNodeReady`, `OneWorkerNodeReady`, `IngressReady` and `Normal` | `string` | `"IngressReady"` | no |

modules/kube-audit/main.tf

Lines changed: 25 additions & 5 deletions
@@ -1,3 +1,22 @@
+locals {
+  binaries_path = "/tmp"
+}
+
+resource "null_resource" "install_required_binaries" {
+  count = var.install_required_binaries ? 1 : 0
+  triggers = {
+    audit_log_policy = var.audit_log_policy
+    audit_deployment_name = var.audit_deployment_name
+    audit_namespace = var.audit_namespace
+    audit_webhook_listener_image = var.audit_webhook_listener_image
+    audit_webhook_listener_image_tag_digest = var.audit_webhook_listener_image_tag_digest
+  }
+  provisioner "local-exec" {
+    command = "${path.module}/scripts/install-binaries.sh ${local.binaries_path}"
+    interpreter = ["/bin/bash", "-c"]
+  }
+}
+
 data "ibm_container_cluster_config" "cluster_config" {
   cluster_name_id = var.cluster_id
   config_dir = "${path.module}/kubeconfig"
@@ -19,11 +38,12 @@ locals {
 }
 
 resource "null_resource" "set_audit_log_policy" {
+  depends_on = [null_resource.install_required_binaries]
   triggers = {
     audit_log_policy = var.audit_log_policy
   }
   provisioner "local-exec" {
-    command = "${path.module}/scripts/set_audit_log_policy.sh ${var.audit_log_policy}"
+    command = "${path.module}/scripts/set_audit_log_policy.sh ${var.audit_log_policy} ${local.binaries_path}"
     interpreter = ["/bin/bash", "-c"]
     environment = {
       KUBECONFIG = data.ibm_container_cluster_config.cluster_config.config_file_path
@@ -40,7 +60,7 @@ locals {
 }
 
 resource "helm_release" "kube_audit" {
-  depends_on = [null_resource.set_audit_log_policy, data.ibm_container_vpc_cluster.cluster]
+  depends_on = [null_resource.install_required_binaries, null_resource.set_audit_log_policy, data.ibm_container_vpc_cluster.cluster]
   name = var.audit_deployment_name
   chart = local.kube_audit_chart_location
   timeout = 1200
@@ -72,7 +92,7 @@
   ]
 
   provisioner "local-exec" {
-    command = "${path.module}/scripts/confirm-rollout-status.sh ${var.audit_deployment_name} ${var.audit_namespace}"
+    command = "${path.module}/scripts/confirm-rollout-status.sh ${var.audit_deployment_name} ${var.audit_namespace} ${local.binaries_path}"
     interpreter = ["/bin/bash", "-c"]
     environment = {
       KUBECONFIG = data.ibm_container_cluster_config.cluster_config.config_file_path
@@ -96,12 +116,12 @@ locals {
 # }
 
 resource "null_resource" "set_audit_webhook" {
-  depends_on = [time_sleep.wait_for_kube_audit]
+  depends_on = [null_resource.install_required_binaries, time_sleep.wait_for_kube_audit]
   triggers = {
     audit_log_policy = var.audit_log_policy
   }
   provisioner "local-exec" {
-    command = "${path.module}/scripts/set_webhook.sh ${var.region} ${var.use_private_endpoint} ${var.cluster_config_endpoint_type} ${var.cluster_id} ${var.cluster_resource_group_id} ${var.audit_log_policy != "default" ? "verbose" : "default"}"
+    command = "${path.module}/scripts/set_webhook.sh ${var.region} ${var.use_private_endpoint} ${var.cluster_config_endpoint_type} ${var.cluster_id} ${var.cluster_resource_group_id} ${var.audit_log_policy != "default" ? "verbose" : "default"} ${local.binaries_path}"
     interpreter = ["/bin/bash", "-c"]
     environment = {
       IAM_API_KEY = var.ibmcloud_api_key

modules/kube-audit/scripts/confirm-rollout-status.sh

Lines changed: 3 additions & 0 deletions
@@ -5,4 +5,7 @@ set -e
 deployment=$1
 namespace=$2
 
+# The binaries downloaded by the install-binaries script are located in the /tmp directory.
+export PATH=$PATH:${3:-"/tmp"}
+
 kubectl rollout status deploy "${deployment}" -n "${namespace}" --timeout 30m
modules/kube-audit/scripts/install-binaries.sh

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
+#!/bin/bash
+
+# This script is stored in the kube-audit module because modules cannot access
+# scripts placed in the root module when they are invoked individually.
+# Placing it here also avoids duplicating the install-binaries script across modules.
+
+set -o errexit
+set -o pipefail
+
+DIRECTORY=${1:-"/tmp"}
+# renovate: datasource=github-tags depName=terraform-ibm-modules/common-bash-library
+TAG=v0.2.0
+
+echo "Downloading common-bash-library version ${TAG}."
+
+# download common-bash-library
+curl --silent \
+  --connect-timeout 5 \
+  --max-time 10 \
+  --retry 3 \
+  --retry-delay 2 \
+  --retry-connrefused \
+  --fail \
+  --show-error \
+  --location \
+  --output "${DIRECTORY}/common-bash.tar.gz" \
+  "https://github.com/terraform-ibm-modules/common-bash-library/archive/refs/tags/$TAG.tar.gz"
+
+mkdir -p "${DIRECTORY}/common-bash-library"
+tar -xzf "${DIRECTORY}/common-bash.tar.gz" --strip-components=1 -C "${DIRECTORY}/common-bash-library"
+rm -f "${DIRECTORY}/common-bash.tar.gz"
+
+# The file doesn't exist at the time shellcheck runs, so this check is skipped.
+# shellcheck disable=SC1091
+source "${DIRECTORY}/common-bash-library/common/common.sh"
+
+echo "Installing jq."
+install_jq "latest" "${DIRECTORY}" "true"
+echo "Installing kubectl."
+install_kubectl "latest" "${DIRECTORY}" "true"
+
+rm -rf "${DIRECTORY}/common-bash-library"
+
+echo "Installation completed successfully"

modules/kube-audit/scripts/set_audit_log_policy.sh

Lines changed: 9 additions & 7 deletions
@@ -3,19 +3,21 @@
 set -euo pipefail
 
 AUDIT_POLICY="$1"
+# The binaries downloaded by the install-binaries script are located in the /tmp directory.
+export PATH=$PATH:${2:-"/tmp"}
 
-STORAGE_PROFILE="oc patch apiserver cluster --type='merge' -p '{\"spec\":{\"audit\":{\"profile\":\"$AUDIT_POLICY\"}}}'"
+STORAGE_PROFILE="kubectl patch apiserver cluster --type='merge' -p '{\"spec\":{\"audit\":{\"profile\":\"$AUDIT_POLICY\"}}}'"
 MAX_ATTEMPTS=10
 RETRY_WAIT=5
 
-function check_oc_cli() {
-  if ! command -v oc &>/dev/null; then
-    echo "Error: OpenShift CLI (oc) is not installed. Exiting."
+function check_kubectl_cli() {
+  if ! command -v kubectl &>/dev/null; then
+    echo "Error: kubectl is not installed. Exiting."
     exit 1
   fi
 }
 
-function apply_oc_patch() {
+function apply_kubectl_patch() {
 
   local attempt=0
   while [ $attempt -lt $MAX_ATTEMPTS ]; do
@@ -38,7 +40,7 @@ function apply_oc_patch() {
 
 echo "========================================="
 
-check_oc_cli
-apply_oc_patch
+check_kubectl_cli
+apply_kubectl_patch
 sleep 30
 echo "========================================="

modules/kube-audit/scripts/set_webhook.sh

Lines changed: 3 additions & 1 deletion
@@ -9,10 +9,12 @@ CLUSTER_ID="$4"
 RESOURCE_GROUP_ID="$5"
 POLICY="$6"
 
+# The binaries downloaded by the install-binaries script are located in the /tmp directory.
+export PATH=$PATH:${7:-"/tmp"}
+
 get_cloud_endpoint() {
   iam_cloud_endpoint="${IBMCLOUD_IAM_API_ENDPOINT:-"iam.cloud.ibm.com"}"
   IBMCLOUD_IAM_API_ENDPOINT=${iam_cloud_endpoint#https://}
-
   cs_api_endpoint="${IBMCLOUD_CS_API_ENDPOINT:-"containers.cloud.ibm.com"}"
   cs_api_endpoint=${cs_api_endpoint#https://}
   IBMCLOUD_CS_API_ENDPOINT=${cs_api_endpoint%/global}
modules/kube-audit/variables.tf

Lines changed: 7 additions & 0 deletions
@@ -102,3 +102,10 @@ variable "audit_webhook_listener_image_tag_digest" {
     error_message = "The value of the audit webhook listener image version must match the tag and sha256 image digest format"
   }
 }
+
+variable "install_required_binaries" {
+  type = bool
+  default = true
+  description = "When set to true, a script will run to check if `kubectl` and `jq` exist on the runtime and, if not, attempt to download them from the public internet and install them to /tmp. Set to false to skip running this script."
+  nullable = false
+}
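
For reference, a hedged sketch of calling the kube-audit submodule with the new input disabled is shown below; every value other than `install_required_binaries` is an illustrative placeholder based on the inputs documented in the submodule README:

module "kube_audit" {
  source                    = "./modules/kube-audit"
  cluster_id                = "example-cluster-id"     # placeholder
  cluster_resource_group_id = "example-resource-group" # placeholder
  ibmcloud_api_key          = var.ibmcloud_api_key     # assumed variable
  region                    = "us-south"               # placeholder
  # Skip the install-binaries.sh helper; jq and kubectl must already be on the PATH.
  install_required_binaries = false
}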
