Description
So this is a duplicate of #3558, but the original was closed.

The issue arises when trying to spin up new clusters with `eks_managed_node_groups`; without that block it works just fine.

From what I saw, the issue comes from these lines (in a chain of dependencies):

https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/modules/eks-managed-node-group/main.tf#L5
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf#L282
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/main.tf#L21

The partition data source inside the managed node group module performs a lookup that is also performed earlier in the main EKS module.
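For context, here is a minimal sketch of the pattern behind those lines, reconstructed from the error output below (so the exact names are only assumed to match upstream): both lookups are count-gated on `var.create`, and when that value cannot be determined until apply, Terraform cannot evaluate the `count` at plan time.

```hcl
# Sketch of the count-gated lookups in modules/eks-managed-node-group/main.tf,
# reconstructed from the error output below; names may differ slightly upstream.
variable "create" {
  type    = bool
  default = true
}

variable "partition" {
  type    = string
  default = ""
}

variable "account_id" {
  type    = string
  default = ""
}

# If var.create is derived from values unknown until apply, these count
# expressions cannot be evaluated at plan time -> "Invalid count argument".
data "aws_partition" "current" {
  count = var.create && var.partition == "" ? 1 : 0
}

data "aws_caller_identity" "current" {
  count = var.create && var.account_id == "" ? 1 : 0
}
```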
- [x] ✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:
1. Remove the local `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
2. Re-initialize the project root to pull down modules: `terraform init`
3. Re-attempt your terraform plan or apply and check if the issue still persists
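Those steps as commands, for convenience:

```sh
# ONLY remove .terraform/ if your state is stored remotely!
rm -rf .terraform/
terraform init
terraform plan   # or: terraform apply
```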
Versions
- Module version [Required]:
- Terraform version:
- Provider version(s):
Reproduction Code [Required]
module "eks" {
source = "terraform-aws-modules/eks/aws"
name = local.cluster_name
kubernetes_version = var.cluster_version
create_kms_key = true
enable_kms_key_rotation = true
create_iam_role = true
iam_role_name = "${local.cluster_name}"
iam_role_use_name_prefix = true
vpc_id = local.vpc_id
subnet_ids = data.aws_subnets.this.ids
endpoint_public_access = false
deletion_protection = var.deletion_protection != null ? var.deletion_protection : false
eks_managed_node_groups = {
one = {
instance_types = [var.instance_types]
min_size = var.min_nodes
create = true
max_size = var.max_nodes
desired_size = var.current_nodes
}
}
authentication_mode = "API_AND_CONFIG_MAP"
}
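The snippet references a few definitions it does not include; a hypothetical scaffold (not part of the original report, names and values are illustrative) to make it self-contained might look like:

```hcl
# Hypothetical supporting definitions assumed by the snippet above; the
# original report does not include them, so names/values are illustrative.
locals {
  cluster_name = "example"
  vpc_id       = "vpc-0123456789abcdef0"
}

data "aws_subnets" "this" {
  filter {
    name   = "vpc-id"
    values = [local.vpc_id]
  }
}
```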
Steps to reproduce the behavior:
- Are you using workspaces? No
- Have you cleared the local cache? Yes

Run `terraform init` and `terraform plan` with the configuration above.

Expected behavior
Terraform creates the cluster.
Actual behavior
Terraform is not able to compute the resources; the plan fails with "Invalid count argument" (see below).
Terminal Output Screenshot(s)
```text
╷
│ Error: Invalid count argument
│
│   on .terraform\modules\eks\modules\eks-managed-node-group\main.tf line 2, in data "aws_partition" "current":
│    2:   count = var.create && var.partition == "" ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources
│ that the count depends on.
╵
╷
│ Error: Invalid count argument
│
│   on .terraform\modules\eks\modules\eks-managed-node-group\main.tf line 5, in data "aws_caller_identity" "current":
│    5:   count = var.create && var.account_id == "" ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources
│ that the count depends on.
╵
```
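As the error message itself suggests, a temporary workaround is a targeted apply of the cluster first. The resource address below is an assumption about the module's internals, so verify it against `terraform state list` or the plan output before using it:

```sh
# Targeted-apply workaround suggested by the error message; the resource
# address is an assumption, verify it against your plan/state first.
terraform apply -target='module.eks.aws_eks_cluster.this[0]'
terraform apply
```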
Additional context
I think removing the partition and account_id data sources from the node groups module should solve this: the root module already calls them anyway, and node groups cannot be created without the cluster, so these end up being duplicate calls to the same data sources.
After reviewing all the conditions that check `var.create`/`local.create`, I don't think the data sources should be in the node groups module at all; the create path in the root module already resolves the partition and passes it on to the node groups module.
To be able to run the node group module on its own, without the main EKS module, the partition lookup should run unconditionally, regardless of the EKS module or any other gating.
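A minimal sketch of what is being proposed (hypothetical, not an actual upstream diff): run the lookups unconditionally in the node group module so their `count` never depends on `var.create`.

```hcl
# Hypothetical sketch of the proposed change in
# modules/eks-managed-node-group/main.tf (not an actual upstream diff):
# run the lookups unconditionally so count never depends on var.create.
data "aws_partition" "current" {}
data "aws_caller_identity" "current" {}

locals {
  partition  = data.aws_partition.current.partition
  account_id = data.aws_caller_identity.current.account_id
}
```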