Terraform Version, Provider Version and Kubernetes Version
Terraform version: v1.8.0
Kubernetes provider version: 2.36.0
Kubernetes version: v1.33.5-eks-3cfe0ce
Affected Resource(s)
- kubernetes_manifest
Terraform Configuration Files
resource "kubernetes_manifest" "warpstream_agent_monitor" {
count = var.enable_service_monitor ? 1 : 0
manifest = {
apiVersion = "monitoring.coreos.com/v1"
kind = "ServiceMonitor"
metadata = {
name = "warpstream-agent-monitor-${var.cluster_name}-${var.agent_prefix}"
labels = {
instance = "primary"
}
namespace = local.ws_namespace
}
spec = {
endpoints = [
{
port = "http"
relabelings = [
{
action = "replace"
targetLabel = "ws_cluster"
replacement = "${var.cluster_name}"
},
{
action = "replace"
targetLabel = "agent"
replacement = "${var.agent_prefix}"
}
]
}
]
namespaceSelector = {
matchNames = [local.ws_namespace]
}
selector = {
matchExpressions = [
{
key = "app.kubernetes.io/instance"
operator = "In"
values = ["${var.cluster_name}-${var.agent_prefix}"]
}
]
}
}
}
}Debug Output
Panic Output
│ Error: Plugin error
│
│ with module.warpstream.kubernetes_manifest.warpstream_agent_monitor[0],
│ on ../../../../modules/kafka-warpstream-agent/warpstream.tf line 79, in resource "kubernetes_manifest" "warpstream_agent_monitor":
│ 79: resource "kubernetes_manifest" "warpstream_agent_monitor" {
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).UpgradeResourceState: rpc
│ error: code = Unknown desc = failed to determine resource type ID: cannot get OpenAPI foundry:
│ failed get OpenAPI spec: context deadline exceeded
Steps to Reproduce
The odd thing is that the error sometimes occurs and sometimes does not. However, it happens consistently when I am connected to a certain network with lower bandwidth (I am not sure whether that is related).
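In case it helps triage, here is a minimal sketch of how the failing plan can be captured with debug logging while on the low-bandwidth network (this assumes the same kubeconfig/context Terraform uses; the log file name is just an example):
# Enable Terraform and provider debug logging, then run the plan that fails.
TF_LOG=DEBUG TF_LOG_PATH=./tf-debug.log terraform plan

# Look for the OpenAPI/discovery requests that precede the deadline error.
grep -i openapi ./tf-debug.log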
Expected Behavior
What should have happened?
A successful terraform apply.
Actual Behavior
What actually happened?
The apply fails with the error failed get OpenAPI spec: context deadline exceeded (full output above).
Important Factoids
One thing I observed is that for the following command:
time kubectl get --raw /openapi/v2 > /tmp/openapi.json
This is the result on a normal network:
0.31s user 0.26s system 4% cpu 12.020 total
Whereas in the failing case, the total can be more than 1 minute.
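As an additional data point, a rough way to compare the size and latency of the OpenAPI documents on the slow network (the endpoints and kubectl flags are standard; the output paths are just examples):
# Time the aggregated v2 document versus the per-group v3 index, with a longer
# client-side timeout so the fetch can complete for measurement.
time kubectl get --raw /openapi/v2 --request-timeout=5m > /tmp/openapi-v2.json
time kubectl get --raw /openapi/v3 --request-timeout=5m > /tmp/openapi-v3-index.json

# The v2 document is typically several megabytes, which can dominate transfer
# time on a low-bandwidth link.
ls -lh /tmp/openapi-v2.json /tmp/openapi-v3-index.json
If the v2 fetch is consistently slow on that network, it would line up with the provider hitting its context deadline while building the OpenAPI foundry.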
References
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment