
Commit cc15ca8

lint changes of standard cluster and node pool
1 parent c1d7eeb commit cc15ca8

6 files changed: +267 additions, −267 deletions

modules/gke-node-pool/README.md

Lines changed: 3 additions & 3 deletions

@@ -7,7 +7,7 @@
  | Name | Description | Type | Default | Required |
  |------|-------------|------|---------|:--------:|
  | autoscaling | Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.<br> - min\_node\_count: Minimum number of nodes per zone in the NodePool. Must be >=0 and <= max\_node\_count. Cannot be used with total limits.<br> - max\_node\_count: Maximum number of nodes per zone in the NodePool. Must be >= min\_node\_count. Cannot be used with total limits.<br> - total\_min\_node\_count: Total minimum number of nodes in the NodePool. Must be >=0 and <= total\_max\_node\_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.<br> - total\_max\_node\_count: Total maximum number of nodes in the NodePool. Must be >= total\_min\_node\_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.<br> - location\_policy: Location policy specifies the algorithm used when scaling-up the node pool. Location policy is supported only in 1.24.1+ clusters.<br> - "BALANCED" - Is a best effort policy that aims to balance the sizes of available zones.<br> - "ANY" - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduce preemption risk for Spot VMs. | <pre>object({<br> min_node_count = optional(number)<br> max_node_count = optional(number)<br> total_min_node_count = optional(number)<br> total_max_node_count = optional(number)<br> location_policy = optional(string)<br> })</pre> | <pre>{<br> "max_node_count": 100,<br> "min_node_count": 1<br>}</pre> | no |
- | cluster | The cluster to create the node pool for. Cluster must be present in location provided for clusters. May be specified in the format projects/{{project}}/locations/{{location}}/clusters/{{cluster}} or as just the name of the cluster. | `string` | n/a | yes |
+ | cluster | The cluster to create the node pool for. Cluster must be present in location provided for clusters. May be specified in the format projects/{{project\_id}}/locations/{{location}}/clusters/{{cluster}} or as just the name of the cluster. | `string` | n/a | yes |
  | initial\_node\_count | The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource. WARNING: Resizing your node pool manually may change this value in your existing cluster, which will trigger destruction and recreation on the next Terraform run (to rectify the discrepancy). If you don't need this value, don't set it. | `number` | `null` | no |
  | kubernetes\_version | The Kubernetes version for the nodes in this pool. Note that if this field and auto\_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it's recommended that you specify explicit versions as Terraform will see spurious diffs when fuzzy versions are used. See the google\_container\_engine\_versions data source's version\_prefix field to approximate fuzzy versions in a Terraform-compatible way. | `string` | `null` | no |
  | location | The location (region or zone) of the cluster. | `string` | `null` | no |
@@ -20,7 +20,7 @@
  | node\_count | The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling. | `number` | `1` | no |
  | node\_locations | The list of zones in which the node pool's nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If unspecified, the cluster-level node\_locations will be used. Note: node\_locations will not revert to the cluster's default set of zones upon being unset. You must manually reconcile the list of zones with your cluster. | `list(string)` | `null` | no |
  | placement\_policy | Specifies a custom placement policy for the nodes.<br> - type: The type of the policy. Supports a single value: COMPACT. Specifying COMPACT placement policy type places node pool's nodes in a closer physical proximity in order to reduce network latency between nodes.<br> - policy\_name: If set, refers to the name of a custom resource policy supplied by the user. The resource policy must be in the same project and region as the node pool. If not found, InvalidArgument error is returned.<br> - tpu\_topology: The TPU topology like "2x4" or "2x2x2". | <pre>object({<br> type = string<br> policy_name = optional(string)<br> tpu_topology = optional(string)<br> })</pre> | `null` | no |
- | project | The ID of the project in which to create the node pool. | `string` | n/a | yes |
+ | project\_id | The ID of the project in which to create the node pool. | `string` | n/a | yes |
  | queued\_provisioning | Specifies node pool-level settings of queued provisioning.<br> - enabled (Required) - Makes nodes obtainable through the ProvisioningRequest API exclusively. | <pre>object({<br> enabled = bool<br> })</pre> | `null` | no |
  | timeouts | Timeout for cluster operations. | <pre>object({<br> create = optional(string)<br> update = optional(string)<br> delete = optional(string)<br> })</pre> | <pre>{<br> "create": "45m",<br> "delete": "45m",<br> "update": "45m"<br>}</pre> | no |
  | upgrade\_settings | Specify node upgrade settings to change how GKE upgrades nodes. The maximum number of nodes upgraded simultaneously is limited to 20.<br> - max\_surge: The number of additional nodes that can be added to the node pool during an upgrade. Increasing max\_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.<br> - max\_unavailable - (Optional) The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max\_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.<br> - strategy - (Default SURGE) The upgrade strategy to be used for upgrading the nodes.<br> - blue\_green\_settings: The settings to adjust blue green upgrades.<br> - standard\_rollout\_policy: Specifies the standard policy settings for blue-green upgrades.<br> - batch\_percentage: Percentage of the blue pool nodes to drain in a batch.<br> - batch\_node\_count: Number of blue nodes to drain in a batch.<br> - batch\_soak\_duration: Soak time after each batch gets drained.<br> - local\_ssd\_encryption\_mode: Possible Local SSD encryption modes. Accepted values are:<br> - STANDARD\_ENCRYPTION: The given node will be encrypted using keys managed by Google infrastructure and the keys will be deleted when the node is deleted.<br> - EPHEMERAL\_KEY\_ENCRYPTION: The given node will opt-in for using ephemeral key for encrypting Local SSDs. The Local SSDs will not be able to recover data in case of node crash.<br> - node\_pool\_soak\_duration: Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. | <pre>object({<br> max_surge = optional(number)<br> max_unavailable = optional(number)<br> strategy = optional(string)<br> blue_green_settings = optional(object({<br> standard_rollout_policy = object({<br> batch_percentage = optional(number)<br> batch_node_count = optional(number)<br> batch_soak_duration = optional(string)<br> })<br> node_pool_soak_duration = optional(string)<br> }))<br> })</pre> | <pre>{<br> "max_surge": 1,<br> "max_unavailable": 0,<br> "strategy": "SURGE"<br>}</pre> | no |
@@ -29,7 +29,7 @@

  | Name | Description |
  |------|-------------|
- | id | an identifier for the resource with format {{project}}/{{location}}/{{cluster}}/{{name}} |
+ | id | an identifier for the resource with format {{project\_id}}/{{location}}/{{cluster}}/{{name}} |
  | instance\_group\_urls | The resource URLs of the managed instance groups associated with this node pool. |
  | managed\_instance\_group\_urls | List of instance group URLs which have been assigned to this node pool. |
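
For context on the `project` → `project_id` rename above, here is a minimal sketch of a module call against the updated inputs. The module source path and all values are illustrative assumptions, not part of this commit:

```hcl
# Illustrative only: gke-node-pool usage after the project -> project_id rename.
module "node_pool" {
  source = "./modules/gke-node-pool" # assumed path

  project_id = "my-project" # renamed from `project` in this commit
  cluster    = "projects/my-project/locations/us-central1/clusters/my-cluster"
  location   = "us-central1"

  autoscaling = {
    min_node_count = 1 # per-zone limits; cannot be combined with total_* limits
    max_node_count = 100
  }
}
```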

modules/gke-node-pool/metadata.display.yaml

Lines changed: 56 additions & 56 deletions

@@ -96,22 +96,6 @@ spec:
           value: pd-balanced
         - label: pd-ssd
           value: pd-ssd
-      local_ssd_encryption_mode:
-        name: local_ssd_encryption_mode
-        title: Local SSD Encryption Mode
-        enumValueLabels:
-          - label: STANDARD_ENCRYPTION
-            value: STANDARD_ENCRYPTION
-          - label: EPHEMERAL_KEY_ENCRYPTION
-            value: EPHEMERAL_KEY_ENCRYPTION
-      logging_variant:
-        name: logging_variant
-        title: Logging Variant
-        enumValueLabels:
-          - label: DEFAULT
-            value: DEFAULT
-          - label: MAX_THROUGHPUT
-            value: MAX_THROUGHPUT
       guest_accelerator:
         name: guest_accelerator
         title: Guest Accelerator
@@ -144,6 +128,48 @@ spec:
             value: TIME_SHARING
           - label: MPS
             value: MPS
+      kubelet_config:
+        name: kubelet_config
+        title: Kubelet Config
+        properties:
+          cpu_manager_policy:
+            name: cpu_manager_policy
+            title: CPU Manager Policy
+            enumValueLabels:
+              - label: none
+                value: none
+              - label: static
+                value: static
+      linux_node_config:
+        name: linux_node_config
+        title: Linux Node Config
+        properties:
+          cgroup_mode:
+            name: cgroup_mode
+            title: Cgroup Mode
+            enumValueLabels:
+              - label: CGROUP_MODE_UNSPECIFIED
+                value: CGROUP_MODE_UNSPECIFIED
+              - label: CGROUP_MODE_V1
+                value: CGROUP_MODE_V1
+              - label: CGROUP_MODE_V2
+                value: CGROUP_MODE_V2
+      local_ssd_encryption_mode:
+        name: local_ssd_encryption_mode
+        title: Local SSD Encryption Mode
+        enumValueLabels:
+          - label: STANDARD_ENCRYPTION
+            value: STANDARD_ENCRYPTION
+          - label: EPHEMERAL_KEY_ENCRYPTION
+            value: EPHEMERAL_KEY_ENCRYPTION
+      logging_variant:
+        name: logging_variant
+        title: Logging Variant
+        enumValueLabels:
+          - label: DEFAULT
+            value: DEFAULT
+          - label: MAX_THROUGHPUT
+            value: MAX_THROUGHPUT
       oauth_scopes:
         name: oauth_scopes
         title: Oauth Scopes
@@ -188,46 +214,6 @@ spec:
             value: PREFER_NO_SCHEDULE
           - label: NO_EXECUTE
             value: NO_EXECUTE
-      workload_metadata_config:
-        name: workload_metadata_config
-        title: Workload Metadata Config
-        properties:
-          mode:
-            name: mode
-            title: Mode
-            enumValueLabels:
-              - label: GCE_METADATA
-                value: GCE_METADATA
-              - label: GKE_METADATA
-                value: GKE_METADATA
-              - label: MODE_UNSPECIFIED
-                value: MODE_UNSPECIFIED
-      kubelet_config:
-        name: kubelet_config
-        title: Kubelet Config
-        properties:
-          cpu_manager_policy:
-            name: cpu_manager_policy
-            title: CPU Manager Policy
-            enumValueLabels:
-              - label: none
-                value: none
-              - label: static
-                value: static
-      linux_node_config:
-        name: linux_node_config
-        title: Linux Node Config
-        properties:
-          cgroup_mode:
-            name: cgroup_mode
-            title: Cgroup Mode
-            enumValueLabels:
-              - label: CGROUP_MODE_UNSPECIFIED
-                value: CGROUP_MODE_UNSPECIFIED
-              - label: CGROUP_MODE_V1
-                value: CGROUP_MODE_V1
-              - label: CGROUP_MODE_V2
-                value: CGROUP_MODE_V2
       windows_node_config:
         name: windows_node_config
         title: Windows Node Config
@@ -242,6 +228,20 @@ spec:
             value: OS_VERSION_LTSC2019
           - label: OS_VERSION_LTSC2022
             value: OS_VERSION_LTSC2022
+      workload_metadata_config:
+        name: workload_metadata_config
+        title: Workload Metadata Config
+        properties:
+          mode:
+            name: mode
+            title: Mode
+            enumValueLabels:
+              - label: GCE_METADATA
+                value: GCE_METADATA
+              - label: GKE_METADATA
+                value: GKE_METADATA
+              - label: MODE_UNSPECIFIED
+                value: MODE_UNSPECIFIED
       node_count:
         name: node_count
         title: Node Count
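
The entries reordered above describe display labels for node-configuration enums. As a hedged sketch of how those enums might surface in a module call — the `node_config` variable shape here is an assumption inferred from the labels, not confirmed by this commit:

```hcl
# Illustrative only: node settings corresponding to the display enums above.
module "node_pool" {
  source = "./modules/gke-node-pool" # assumed path

  project_id = "my-project"
  cluster    = "my-cluster"
  location   = "us-central1"

  node_config = {
    kubelet_config = {
      cpu_manager_policy = "static" # display enum: none | static
    }
    linux_node_config = {
      cgroup_mode = "CGROUP_MODE_V2" # UNSPECIFIED | V1 | V2
    }
    local_ssd_encryption_mode = "STANDARD_ENCRYPTION" # or EPHEMERAL_KEY_ENCRYPTION
    logging_variant           = "DEFAULT"             # or MAX_THROUGHPUT
  }
}
```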

modules/gke-node-pool/metadata.yaml

Lines changed: 2 additions & 2 deletions

@@ -144,7 +144,7 @@ spec:
       description: The location (region or zone) of the cluster.
       varType: string
     - name: autoscaling
-      description: " Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.\n - min_node_count: Minimum number of nodes per zone in the NodePool. Must be >=0 and <= max_node_count. Cannot be used with total limits.\n - max_node_count: Maximum number of nodes per zone in the NodePool. Must be >= min_node_count. Cannot be used with total limits.\n - total_min_node_count: Total minimum number of nodes in the NodePool. Must be >=0 and <= total_max_node_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.\n - total_max_node_count: Total maximum number of nodes in the NodePool. Must be >= total_min_node_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.\n - location_policy: Location policy specifies the algorithm used when scaling-up the node pool. Location policy is supported only in 1.24.1+ clusters.\n - \"BALANCED\" - Is a best effort policy that aims to balance the sizes of available zones.\n - \"ANY\" - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduce preemption risk for Spot VMs.\n"
+      description: Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.
       varType: |-
         object({
           min_node_count = optional(number)
@@ -409,9 +409,9 @@ spec:
       roles:
         - level: Project
           roles:
+            - roles/iam.serviceAccountUser
            - roles/compute.admin
            - roles/container.admin
-            - roles/iam.serviceAccountUser
       services:
         - compute.googleapis.com
         - container.googleapis.com
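
To complement the per-zone example earlier, here is a hedged sketch of the total-limit variant of the `autoscaling` object from the varType above; values are illustrative, and per-zone and total limits are mutually exclusive per the original description:

```hcl
# Illustrative only: total-limit autoscaling (1.24.1+ clusters).
# Cannot be combined with per-zone min_node_count/max_node_count.
module "node_pool" {
  source = "./modules/gke-node-pool" # assumed path

  project_id = "my-project"
  cluster    = "my-cluster"

  autoscaling = {
    total_min_node_count = 3
    total_max_node_count = 300
    location_policy      = "ANY" # favors unused reservations, reduces Spot preemption risk
  }
}
```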

modules/gke-standard-cluster/README.md

Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ For a module with a complete configuration of a Google Cloud Platform Kubernetes
  | pod\_security\_policy\_config | Configuration for the [PodSecurityPolicy](https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies) feature. | <pre>object({<br> enabled = bool<br> })</pre> | `null` | no |
  | private\_cluster\_config | Configuration for private clusters, clusters with private nodes. | <pre>object({<br> enable_private_nodes = optional(bool)<br> enable_private_endpoint = optional(bool)<br> master_ipv4_cidr_block = optional(string)<br> private_endpoint_subnetwork = optional(string)<br> master_global_access_config = optional(object({<br> enabled = optional(bool)<br> }))<br> })</pre> | <pre>{<br> "enable_private_endpoint": true,<br> "enable_private_nodes": true,<br> "master_global_access_config": {<br> "enabled": true<br> }<br>}</pre> | no |
  | private\_ipv6\_google\_access | The desired state of IPv6 access to Google Services. By default, no private IPv6 access to or from Google Services (all access will be via IPv4). | `string` | `null` | no |
- | project | The ID of the project in which the resource belongs. If it is not provided, the provider project is used. | `string` | `null` | no |
+ | project\_id | The ID of the project in which the resource belongs. If it is not provided, the provider project id is used. | `string` | `null` | no |
  | protect\_config | Enable GKE Protect workloads for this cluster. | <pre>object({<br> workload_config = object({<br> audit_mode = string<br> })<br> workload_vulnerability_mode = optional(string)<br> })</pre> | `null` | no |
  | release\_channel | Configuration for the release channel feature, which provides more control over automatic upgrades of your GKE clusters. | <pre>object({<br> channel = optional(string)<br> })</pre> | `null` | no |
  | remove\_default\_node\_pool | If true, deletes the default node pool upon cluster creation. If you're using google\_container\_node\_pool resources with no default node pool, this should be set to true. | `bool` | `true` | no |
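
A minimal hedged sketch of calling the standard cluster module with the renamed input; the source path, the `name` and `location` inputs, and all values are illustrative assumptions:

```hcl
# Illustrative only: gke-standard-cluster usage after the rename.
module "cluster" {
  source = "./modules/gke-standard-cluster" # assumed path

  name       = "my-cluster"  # assumed required input
  location   = "us-central1" # assumed required input
  project_id = "my-project"  # renamed from `project` in this commit

  # Matches the documented default for private_cluster_config above.
  private_cluster_config = {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_global_access_config = {
      enabled = true
    }
  }
}
```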
