| autoscaling | Configuration required by the cluster autoscaler to adjust the size of the node pool to the current cluster usage.<br> - min\_node\_count: Minimum number of nodes per zone in the NodePool. Must be >=0 and <= max\_node\_count. Cannot be used with total limits.<br> - max\_node\_count: Maximum number of nodes per zone in the NodePool. Must be >= min\_node\_count. Cannot be used with total limits.<br> - total\_min\_node\_count: Total minimum number of nodes in the NodePool. Must be >=0 and <= total\_max\_node\_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.<br> - total\_max\_node\_count: Total maximum number of nodes in the NodePool. Must be >= total\_min\_node\_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.<br> - location\_policy: Specifies the algorithm used when scaling up the node pool. Supported only in 1.24.1+ clusters.<br> - "BALANCED": A best-effort policy that aims to balance the sizes of available zones.<br> - "ANY": Instructs the cluster autoscaler to prioritize utilization of unused reservations and reduce preemption risk for Spot VMs. | <pre>object({<br> min_node_count = optional(number)<br> max_node_count = optional(number)<br> total_min_node_count = optional(number)<br> total_max_node_count = optional(number)<br> location_policy = optional(string)<br> })</pre> | <pre>{<br> "max_node_count": 100,<br> "min_node_count": 1<br>}</pre> | no |
- | cluster | The cluster to create the node pool for. Cluster must be present in location provided for clusters. May be specified in the format projects/{{project}}/locations/{{location}}/clusters/{{cluster}} or as just the name of the cluster. |`string`| n/a | yes |
+ | cluster | The cluster to create the node pool for. Cluster must be present in location provided for clusters. May be specified in the format projects/{{project\_id}}/locations/{{location}}/clusters/{{cluster}} or as just the name of the cluster. |`string`| n/a | yes |
| initial\_node\_count | The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource. WARNING: Resizing your node pool manually may change this value in your existing cluster, which will trigger destruction and recreation on the next Terraform run (to rectify the discrepancy). If you don't need this value, don't set it. |`number`|`null`| no |
| kubernetes\_version | The Kubernetes version for the nodes in this pool. Note that if this field and auto\_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it's recommended that you specify explicit versions as Terraform will see spurious diffs when fuzzy versions are used. See the google\_container\_engine\_versions data source's version\_prefix field to approximate fuzzy versions in a Terraform-compatible way. |`string`|`null`| no |
| location | The location (region or zone) of the cluster. |`string`|`null`| no |
@@ -20,7 +20,7 @@
| node\_count | The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling. |`number`|`1`| no |
| node\_locations | The list of zones in which the node pool's nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If unspecified, the cluster-level node\_locations will be used. Note: node\_locations will not revert to the cluster's default set of zones upon being unset. You must manually reconcile the list of zones with your cluster. |`list(string)`|`null`| no |
| placement\_policy | Specifies a custom placement policy for the nodes.<br> - type: The type of the policy. Supports a single value: COMPACT. Specifying the COMPACT placement policy type places the node pool's nodes in closer physical proximity to reduce network latency between them.<br> - policy\_name: If set, refers to the name of a custom resource policy supplied by the user. The resource policy must be in the same project and region as the node pool. If it is not found, an InvalidArgument error is returned.<br> - tpu\_topology: The TPU topology, such as "2x4" or "2x2x2". | <pre>object({<br> type = string<br> policy_name = optional(string)<br> tpu_topology = optional(string)<br> })</pre> |`null`| no |
- | project | The ID of the project in which to create the node pool. |`string`| n/a | yes |
+ | project\_id | The ID of the project in which to create the node pool. |`string`| n/a | yes |
| queued\_provisioning | Specifies node pool-level settings of queued provisioning.<br> - enabled (Required) - Makes nodes obtainable through the ProvisioningRequest API exclusively. | <pre>object({<br> enabled = bool<br> })</pre> |`null`| no |
| upgrade\_settings | Specify node upgrade settings to change how GKE upgrades nodes. The maximum number of nodes upgraded simultaneously is limited to 20.<br> - max\_surge: The number of additional nodes that can be added to the node pool during an upgrade. Increasing max\_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.<br> - max\_unavailable: (Optional) The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max\_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.<br> - strategy: (Default SURGE) The upgrade strategy to be used for upgrading the nodes.<br> - blue\_green\_settings: The settings to adjust blue-green upgrades.<br> - standard\_rollout\_policy: Specifies the standard policy settings for blue-green upgrades.<br> - batch\_percentage: Percentage of the blue pool nodes to drain in a batch.<br> - batch\_node\_count: Number of blue nodes to drain in a batch.<br> - batch\_soak\_duration: Soak time after each batch gets drained.<br> - local\_ssd\_encryption\_mode: The Local SSD encryption mode. Accepted values are:<br> - STANDARD\_ENCRYPTION: The given node will be encrypted using keys managed by Google infrastructure, and the keys will be deleted when the node is deleted.<br> - EPHEMERAL\_KEY\_ENCRYPTION: The given node will opt in to using an ephemeral key for encrypting Local SSDs. Data on the Local SSDs will not be recoverable after a node crash.<br> - node\_pool\_soak\_duration: Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. | <pre>object({<br> max_surge = optional(number)<br> max_unavailable = optional(number)<br> strategy = optional(string)<br> blue_green_settings = optional(object({<br> standard_rollout_policy = object({<br> batch_percentage = optional(number)<br> batch_node_count = optional(number)<br> batch_soak_duration = optional(string)<br> })<br> node_pool_soak_duration = optional(string)<br> }))<br> })</pre> | <pre>{<br> "max_surge": 1,<br> "max_unavailable": 0,<br> "strategy": "SURGE"<br>}</pre> | no |
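To show how these inputs combine in practice, here is a minimal, hypothetical call to the node pool module. The `source` path, project, cluster, and every literal value are assumptions for illustration; only the variable names come from the table above.

```hcl
# Hypothetical usage sketch for the node pool module documented above.
# The source path and all values below are assumptions, not taken from the repo.
module "node_pool" {
  source = "./modules/gke-node-pool"

  project_id = "my-project" # renamed from `project` in this change
  cluster    = "projects/my-project/locations/us-central1/clusters/my-cluster"
  location   = "us-central1"

  # Per-zone autoscaling limits; total_* limits (1.24.1+ clusters) would replace these.
  autoscaling = {
    min_node_count  = 1
    max_node_count  = 10
    location_policy = "BALANCED"
  }

  # Surge upgrades: at most one extra node added, none unavailable at a time.
  upgrade_settings = {
    max_surge       = 1
    max_unavailable = 0
    strategy        = "SURGE"
  }
}
```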
@@ -29,7 +29,7 @@
| Name | Description |
|------|-------------|
- | id | an identifier for the resource with format {{project}}/{{location}}/{{cluster}}/{{name}} |
+ | id | an identifier for the resource with format {{project\_id}}/{{location}}/{{cluster}}/{{name}} |
| instance\_group\_urls | The resource URLs of the managed instance groups associated with this node pool. |
| managed\_instance\_group\_urls | List of instance group URLs which have been assigned to this node pool. |
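Assuming the hypothetical `module "node_pool"` block sketched earlier, the outputs above could be surfaced like this:

```hcl
# Sketch of consuming the documented outputs; output names here are illustrative.
output "node_pool_id" {
  # Formatted as {{project_id}}/{{location}}/{{cluster}}/{{name}}
  value = module.node_pool.id
}

output "node_pool_instance_group_urls" {
  value = module.node_pool.instance_group_urls
}
```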
modules/gke-node-pool/metadata.yaml (+2, -2)
@@ -144,7 +144,7 @@ spec:
description: The location (region or zone) of the cluster.
varType: string
- name: autoscaling
- description: " Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.\n - min_node_count: Minimum number of nodes per zone in the NodePool. Must be >=0 and <= max_node_count. Cannot be used with total limits.\n - max_node_count: Maximum number of nodes per zone in the NodePool. Must be >= min_node_count. Cannot be used with total limits.\n - total_min_node_count: Total minimum number of nodes in the NodePool. Must be >=0 and <= total_max_node_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.\n - total_max_node_count: Total maximum number of nodes in the NodePool. Must be >= total_min_node_count. Cannot be used with per zone limits. Total size limits are supported only in 1.24.1+ clusters.\n - location_policy: Location policy specifies the algorithm used when scaling-up the node pool. Location policy is supported only in 1.24.1+ clusters.\n - \"BALANCED\" - Is a best effort policy that aims to balance the sizes of available zones.\n - \"ANY\" - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduce preemption risk for Spot VMs.\n"
+ description: Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage.
| private\_ipv6\_google\_access | The desired state of IPv6 access to Google Services. By default, no private IPv6 access to or from Google Services (all access will be via IPv4). |`string`|`null`| no |
- | project | The ID of the project in which the resource belongs. If it is not provided, the provider project is used. |`string`|`null`| no |
+ | project\_id | The ID of the project in which the resource belongs. If it is not provided, the provider project ID is used. |`string`|`null`| no |
| protect\_config | Enable GKE Protect workloads for this cluster. | <pre>object({<br> workload_config = object({<br> audit_mode = string<br> })<br> workload_vulnerability_mode = optional(string)<br> })</pre> |`null`| no |
| release\_channel | Configuration for the release channel feature, which provides more control over automatic upgrades of your GKE clusters. | <pre>object({<br> channel = optional(string)<br> })</pre> |`null`| no |
| remove\_default\_node\_pool | If true, deletes the default node pool upon cluster creation. If you're using google\_container\_node\_pool resources with no default node pool, this should be set to true. |`bool`|`true`| no |
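For the cluster-level inputs above, a comparable hedged sketch of a module call using the renamed `project_id` variable (the module name, source path, and values are assumptions for illustration):

```hcl
# Hypothetical cluster module call; only the variable names come from the table above.
module "gke_cluster" {
  source = "./modules/gke-cluster"

  project_id               = "my-project" # previously `project`
  remove_default_node_pool = true         # manage node pools separately from the cluster

  release_channel = {
    channel = "REGULAR"
  }
}
```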