
Conversation


@renovate renovate bot commented Apr 19, 2025

This PR contains the following updates:

Package             Type               Update  Change
confluent (source)  required_provider  minor   2.24.0 -> 2.25.0
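
For context, a minor provider bump like this one typically amounts to a one-line change to the version constraint in the `required_providers` block. A sketch only — the actual file name and constraint style used in this repository may differ:

```hcl
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      # Renovate's change: 2.24.0 -> 2.25.0
      version = "2.25.0"
    }
  }
}
```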

Release Notes

confluentinc/terraform-provider-confluent (confluent)

v2.25.0

Full Changelog

New features:

  • Added EA OAuth support for most Confluent Provider resources and data sources, with accompanying instructions.

Bug fixes:

  • Fixed a bug in the confluent_kafka_cluster_config resource where the editable ssl.enabled.protocols cluster setting could not be updated.
  • Updated the docs of confluent_tableflow_topic resource to reference additional examples.
  • Updated the docs of confluent_schema resource and data-source with additional notes.
  • Updated the docs of confluent_certificate_authority resource and data-source with correct encoded type for certificate_chain and crl_chain.
  • Added additional note for the Confluent Provider 2.0.0 Upgrade Guide.
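
The `confluent_kafka_cluster_config` fix above can be illustrated with a minimal sketch. The cluster and API key references here are hypothetical, and the protocol list is only an example:

```hcl
resource "confluent_kafka_cluster_config" "example" {
  kafka_cluster {
    id = confluent_kafka_cluster.example.id
  }
  rest_endpoint = confluent_kafka_cluster.example.rest_endpoint

  config = {
    # Editable cluster setting that v2.25.0 can now update in place
    "ssl.enabled.protocols" = "TLSv1.2,TLSv1.3"
  }

  credentials {
    key    = confluent_api_key.example.id
    secret = confluent_api_key.example.secret
  }
}
```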

Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@github-actions

Terraform Format and Style 🖌 failure

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
data.confluent_flink_region.us-east-2: Reading...
data.confluent_organization.main: Reading...
data.confluent_flink_region.main: Reading...
data.confluent_flink_region.us-east-2: Read complete after 0s [id=aws.us-east-2]
data.confluent_organization.main: Read complete after 0s [id=178cb46b-d78e-435d-8b6e-d8d023a08e6f]
data.confluent_flink_region.main: Read complete after 0s [id=aws.us-east-2]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.confluent_schema_registry_cluster.advanced will be read during apply
  # (config refers to values not yet known)
 <= data "confluent_schema_registry_cluster" "advanced" {
      + api_version                     = (known after apply)
      + catalog_endpoint                = (known after apply)
      + cloud                           = (known after apply)
      + display_name                    = (known after apply)
      + id                              = (known after apply)
      + kind                            = (known after apply)
      + package                         = (known after apply)
      + private_regional_rest_endpoints = (known after apply)
      + private_rest_endpoint           = (known after apply)
      + region                          = (known after apply)
      + resource_name                   = (known after apply)
      + rest_endpoint                   = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_api_key.app-manager-flink-api-key will be created
  + resource "confluent_api_key" "app-manager-flink-api-key" {
      + description            = "Flink API Key that is owned by 'app-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "app-manager-flink-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = "aws.us-east-2"
          + kind        = "Region"

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.app-manager-kafka-api-key will be created
  + resource "confluent_api_key" "app-manager-kafka-api-key" {
      + description            = "Kafka API Key that is owned by 'app-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "app-manager-kafka-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.env-manager-schema-registry-api-key will be created
  + resource "confluent_api_key" "env-manager-schema-registry-api-key" {
      + description            = "Schema Registry API Key that is owned by 'env-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "env-manager-schema-registry-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_environment.cc_env will be created
  + resource "confluent_environment" "cc_env" {
      + display_name  = "java_flink_workshop"
      + id            = (known after apply)
      + resource_name = (known after apply)

      + stream_governance {
          + package = "ESSENTIALS"
        }
    }

  # confluent_flink_compute_pool.compute_pool_1 will be created
  + resource "confluent_flink_compute_pool" "compute_pool_1" {
      + api_version   = (known after apply)
      + cloud         = "AWS"
      + display_name  = "-workshop_compute_pool_1"
      + id            = (known after apply)
      + kind          = (known after apply)
      + max_cfu       = 10
      + region        = "us-east-2"
      + resource_name = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "workshop"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-2"
      + rest_endpoint      = (known after apply)

      + basic {}

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_topic.flights_avro will be created
  + resource "confluent_kafka_topic" "flights_avro" {
      + config           = (known after apply)
      + id               = (known after apply)
      + partitions_count = 10
      + rest_endpoint    = (known after apply)
      + topic_name       = "flights_avro"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + kafka_cluster {
          + id = (known after apply)
        }
    }

  # confluent_role_binding.app-manager-assigner will be created
  + resource "confluent_role_binding" "app-manager-assigner" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "Assigner"
    }

  # confluent_role_binding.app-manager-flink-developer will be created
  + resource "confluent_role_binding" "app-manager-flink-developer" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "FlinkAdmin"
    }

  # confluent_role_binding.app-manager-kafka-cluster-admin will be created
  + resource "confluent_role_binding" "app-manager-kafka-cluster-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "CloudClusterAdmin"
    }

  # confluent_role_binding.env-manager-environment-admin will be created
  + resource "confluent_role_binding" "env-manager-environment-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }

  # confluent_role_binding.statements-runner-environment-admin will be created
  + resource "confluent_role_binding" "statements-runner-environment-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }

  # confluent_schema_registry_cluster_config.schema_registry_cluster_config will be created
  + resource "confluent_schema_registry_cluster_config" "schema_registry_cluster_config" {
      + compatibility_group = (known after apply)
      + compatibility_level = "BACKWARD"
      + id                  = (known after apply)
      + rest_endpoint       = (known after apply)

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + schema_registry_cluster {
          + id = (known after apply)
        }
    }

  # confluent_service_account.app-manager will be created
  + resource "confluent_service_account" "app-manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "workshop-app-manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.env-manager will be created
  + resource "confluent_service_account" "env-manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage 'Staging' environment"
      + display_name = "workshop-env-manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.statements-runner will be created
  + resource "confluent_service_account" "statements-runner" {
      + api_version  = (known after apply)
      + description  = "Service account for running Flink Statements in 'inventory' Kafka cluster"
      + display_name = "java_flink_workshop-statements-runner"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_subject_config.flights_value_avro will be created
  + resource "confluent_subject_config" "flights_value_avro" {
      + compatibility_group = (known after apply)
      + compatibility_level = "BACKWARD"
      + id                  = (known after apply)
      + rest_endpoint       = (known after apply)
      + subject_name        = "flights_avro-value"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + schema_registry_cluster {
          + id = (known after apply)
        }
    }

Plan: 17 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + acks                                          = "all"
  + bootstrap_servers                             = (known after apply)
  + client_dns_lookup                             = "use_all_dns_ips"
  + environment                                   = "cloud"
  + sasl_jaas_config                              = (known after apply)
  + sasl_mechanism                                = "PLAIN"
  + schema_registry_basic_auth_credentials_source = "USER_INFO"
  + schema_registry_basic_auth_user_info          = (known after apply)
  + schema_registry_url                           = (known after apply)
  + security_protocol                             = "SASL_SSL"
  + session_timeout_ms                            = 45000
  + topic_name                                    = "flights_avro"

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from 9d3dce9 to f4aa25e Compare April 19, 2025 06:51
@github-actions

Terraform Format and Style 🖌 failure

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
[Plan output identical to the previous run: 17 to add, 0 to change, 0 to destroy. Saved the plan to: tfplan]

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from f4aa25e to 9fb89c8 Compare April 19, 2025 10:46
@github-actions

Terraform Format and Style 🖌 failure

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
[Plan output identical to the previous run.]
        }
    }

�[1m  # confluent_api_key.env-manager-schema-registry-api-key�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_api_key" "env-manager-schema-registry-api-key" {
      �[32m+�[0m�[0m description            = "Schema Registry API Key that is owned by 'env-manager' service account"
      �[32m+�[0m�[0m disable_wait_for_ready = false
      �[32m+�[0m�[0m display_name           = "env-manager-schema-registry-api-key"
      �[32m+�[0m�[0m id                     = (known after apply)
      �[32m+�[0m�[0m secret                 = (sensitive value)

      �[32m+�[0m�[0m managed_resource {
          �[32m+�[0m�[0m api_version = (known after apply)
          �[32m+�[0m�[0m id          = (known after apply)
          �[32m+�[0m�[0m kind        = (known after apply)

          �[32m+�[0m�[0m environment {
              �[32m+�[0m�[0m id = (known after apply)
            }
        }

      �[32m+�[0m�[0m owner {
          �[32m+�[0m�[0m api_version = (known after apply)
          �[32m+�[0m�[0m id          = (known after apply)
          �[32m+�[0m�[0m kind        = (known after apply)
        }
    }

�[1m  # confluent_environment.cc_env�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_environment" "cc_env" {
      �[32m+�[0m�[0m display_name  = "java_flink_workshop"
      �[32m+�[0m�[0m id            = (known after apply)
      �[32m+�[0m�[0m resource_name = (known after apply)

      �[32m+�[0m�[0m stream_governance {
          �[32m+�[0m�[0m package = "ESSENTIALS"
        }
    }

�[1m  # confluent_flink_compute_pool.compute_pool_1�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_flink_compute_pool" "compute_pool_1" {
      �[32m+�[0m�[0m api_version   = (known after apply)
      �[32m+�[0m�[0m cloud         = "AWS"
      �[32m+�[0m�[0m display_name  = "-workshop_compute_pool_1"
      �[32m+�[0m�[0m id            = (known after apply)
      �[32m+�[0m�[0m kind          = (known after apply)
      �[32m+�[0m�[0m max_cfu       = 10
      �[32m+�[0m�[0m region        = "us-east-2"
      �[32m+�[0m�[0m resource_name = (known after apply)

      �[32m+�[0m�[0m environment {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1m  # confluent_kafka_cluster.kafka_cluster�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_kafka_cluster" "kafka_cluster" {
      �[32m+�[0m�[0m api_version        = (known after apply)
      �[32m+�[0m�[0m availability       = "SINGLE_ZONE"
      �[32m+�[0m�[0m bootstrap_endpoint = (known after apply)
      �[32m+�[0m�[0m cloud              = "AWS"
      �[32m+�[0m�[0m display_name       = "workshop"
      �[32m+�[0m�[0m id                 = (known after apply)
      �[32m+�[0m�[0m kind               = (known after apply)
      �[32m+�[0m�[0m rbac_crn           = (known after apply)
      �[32m+�[0m�[0m region             = "us-east-2"
      �[32m+�[0m�[0m rest_endpoint      = (known after apply)

      �[32m+�[0m�[0m basic {}

      �[32m+�[0m�[0m environment {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1m  # confluent_kafka_topic.flights_avro�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_kafka_topic" "flights_avro" {
      �[32m+�[0m�[0m config           = (known after apply)
      �[32m+�[0m�[0m id               = (known after apply)
      �[32m+�[0m�[0m partitions_count = 10
      �[32m+�[0m�[0m rest_endpoint    = (known after apply)
      �[32m+�[0m�[0m topic_name       = "flights_avro"

      �[32m+�[0m�[0m credentials {
          �[32m+�[0m�[0m key    = (sensitive value)
          �[32m+�[0m�[0m secret = (sensitive value)
        }

      �[32m+�[0m�[0m kafka_cluster {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1m  # confluent_role_binding.app-manager-assigner�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "app-manager-assigner" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "Assigner"
    }

�[1m  # confluent_role_binding.app-manager-flink-developer�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "app-manager-flink-developer" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "FlinkAdmin"
    }

�[1m  # confluent_role_binding.app-manager-kafka-cluster-admin�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "app-manager-kafka-cluster-admin" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "CloudClusterAdmin"
    }

�[1m  # confluent_role_binding.env-manager-environment-admin�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "env-manager-environment-admin" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "EnvironmentAdmin"
    }

�[1m  # confluent_role_binding.statements-runner-environment-admin�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "statements-runner-environment-admin" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "EnvironmentAdmin"
    }

�[1m  # confluent_schema_registry_cluster_config.schema_registry_cluster_config�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_schema_registry_cluster_config" "schema_registry_cluster_config" {
      �[32m+�[0m�[0m compatibility_group = (known after apply)
      �[32m+�[0m�[0m compatibility_level = "BACKWARD"
      �[32m+�[0m�[0m id                  = (known after apply)
      �[32m+�[0m�[0m rest_endpoint       = (known after apply)

      �[32m+�[0m�[0m credentials {
          �[32m+�[0m�[0m key    = (sensitive value)
          �[32m+�[0m�[0m secret = (sensitive value)
        }

      �[32m+�[0m�[0m schema_registry_cluster {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1m  # confluent_service_account.app-manager�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_service_account" "app-manager" {
      �[32m+�[0m�[0m api_version  = (known after apply)
      �[32m+�[0m�[0m description  = "Service account to manage Kafka cluster"
      �[32m+�[0m�[0m display_name = "workshop-app-manager"
      �[32m+�[0m�[0m id           = (known after apply)
      �[32m+�[0m�[0m kind         = (known after apply)
    }

�[1m  # confluent_service_account.env-manager�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_service_account" "env-manager" {
      �[32m+�[0m�[0m api_version  = (known after apply)
      �[32m+�[0m�[0m description  = "Service account to manage 'Staging' environment"
      �[32m+�[0m�[0m display_name = "workshop-env-manager"
      �[32m+�[0m�[0m id           = (known after apply)
      �[32m+�[0m�[0m kind         = (known after apply)
    }

�[1m  # confluent_service_account.statements-runner�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_service_account" "statements-runner" {
      �[32m+�[0m�[0m api_version  = (known after apply)
      �[32m+�[0m�[0m description  = "Service account for running Flink Statements in 'inventory' Kafka cluster"
      �[32m+�[0m�[0m display_name = "java_flink_workshop-statements-runner"
      �[32m+�[0m�[0m id           = (known after apply)
      �[32m+�[0m�[0m kind         = (known after apply)
    }

�[1m  # confluent_subject_config.flights_value_avro�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_subject_config" "flights_value_avro" {
      �[32m+�[0m�[0m compatibility_group = (known after apply)
      �[32m+�[0m�[0m compatibility_level = "BACKWARD"
      �[32m+�[0m�[0m id                  = (known after apply)
      �[32m+�[0m�[0m rest_endpoint       = (known after apply)
      �[32m+�[0m�[0m subject_name        = "flights_avro-value"

      �[32m+�[0m�[0m credentials {
          �[32m+�[0m�[0m key    = (sensitive value)
          �[32m+�[0m�[0m secret = (sensitive value)
        }

      �[32m+�[0m�[0m schema_registry_cluster {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1mPlan:�[0m 17 to add, 0 to change, 0 to destroy.
�[0m
Changes to Outputs:
  �[32m+�[0m�[0m acks                                          = "all"
  �[32m+�[0m�[0m bootstrap_servers                             = (known after apply)
  �[32m+�[0m�[0m client_dns_lookup                             = "use_all_dns_ips"
  �[32m+�[0m�[0m environment                                   = "cloud"
  �[32m+�[0m�[0m sasl_jaas_config                              = (known after apply)
  �[32m+�[0m�[0m sasl_mechanism                                = "PLAIN"
  �[32m+�[0m�[0m schema_registry_basic_auth_credentials_source = "USER_INFO"
  �[32m+�[0m�[0m schema_registry_basic_auth_user_info          = (known after apply)
  �[32m+�[0m�[0m schema_registry_url                           = (known after apply)
  �[32m+�[0m�[0m security_protocol                             = "SASL_SSL"
  �[32m+�[0m�[0m session_timeout_ms                            = 45000
  �[32m+�[0m�[0m topic_name                                    = "flights_avro"
�[90m
─────────────────────────────────────────────────────────────────────────────�[0m

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from 9fb89c8 to 290ddd5 Compare April 26, 2025 05:55
@github-actions

Terraform Format and Style 🖌failure

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan (identical to the plan above: 17 to add, 0 to change, 0 to destroy)

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from 290ddd5 to 0c0aaa4 Compare April 26, 2025 10:45
@github-actions

Terraform Format and Style 🖌failure

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan (identical to the plan above: 17 to add, 0 to change, 0 to destroy)
        }

      �[32m+�[0m�[0m kafka_cluster {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1m  # confluent_role_binding.app-manager-assigner�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "app-manager-assigner" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "Assigner"
    }

�[1m  # confluent_role_binding.app-manager-flink-developer�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "app-manager-flink-developer" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "FlinkAdmin"
    }

�[1m  # confluent_role_binding.app-manager-kafka-cluster-admin�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "app-manager-kafka-cluster-admin" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "CloudClusterAdmin"
    }

�[1m  # confluent_role_binding.env-manager-environment-admin�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "env-manager-environment-admin" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "EnvironmentAdmin"
    }

�[1m  # confluent_role_binding.statements-runner-environment-admin�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_role_binding" "statements-runner-environment-admin" {
      �[32m+�[0m�[0m crn_pattern = (known after apply)
      �[32m+�[0m�[0m id          = (known after apply)
      �[32m+�[0m�[0m principal   = (known after apply)
      �[32m+�[0m�[0m role_name   = "EnvironmentAdmin"
    }

�[1m  # confluent_schema_registry_cluster_config.schema_registry_cluster_config�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_schema_registry_cluster_config" "schema_registry_cluster_config" {
      �[32m+�[0m�[0m compatibility_group = (known after apply)
      �[32m+�[0m�[0m compatibility_level = "BACKWARD"
      �[32m+�[0m�[0m id                  = (known after apply)
      �[32m+�[0m�[0m rest_endpoint       = (known after apply)

      �[32m+�[0m�[0m credentials {
          �[32m+�[0m�[0m key    = (sensitive value)
          �[32m+�[0m�[0m secret = (sensitive value)
        }

      �[32m+�[0m�[0m schema_registry_cluster {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1m  # confluent_service_account.app-manager�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_service_account" "app-manager" {
      �[32m+�[0m�[0m api_version  = (known after apply)
      �[32m+�[0m�[0m description  = "Service account to manage Kafka cluster"
      �[32m+�[0m�[0m display_name = "workshop-app-manager"
      �[32m+�[0m�[0m id           = (known after apply)
      �[32m+�[0m�[0m kind         = (known after apply)
    }

�[1m  # confluent_service_account.env-manager�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_service_account" "env-manager" {
      �[32m+�[0m�[0m api_version  = (known after apply)
      �[32m+�[0m�[0m description  = "Service account to manage 'Staging' environment"
      �[32m+�[0m�[0m display_name = "workshop-env-manager"
      �[32m+�[0m�[0m id           = (known after apply)
      �[32m+�[0m�[0m kind         = (known after apply)
    }

�[1m  # confluent_service_account.statements-runner�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_service_account" "statements-runner" {
      �[32m+�[0m�[0m api_version  = (known after apply)
      �[32m+�[0m�[0m description  = "Service account for running Flink Statements in 'inventory' Kafka cluster"
      �[32m+�[0m�[0m display_name = "java_flink_workshop-statements-runner"
      �[32m+�[0m�[0m id           = (known after apply)
      �[32m+�[0m�[0m kind         = (known after apply)
    }

�[1m  # confluent_subject_config.flights_value_avro�[0m will be created
�[0m  �[32m+�[0m�[0m resource "confluent_subject_config" "flights_value_avro" {
      �[32m+�[0m�[0m compatibility_group = (known after apply)
      �[32m+�[0m�[0m compatibility_level = "BACKWARD"
      �[32m+�[0m�[0m id                  = (known after apply)
      �[32m+�[0m�[0m rest_endpoint       = (known after apply)
      �[32m+�[0m�[0m subject_name        = "flights_avro-value"

      �[32m+�[0m�[0m credentials {
          �[32m+�[0m�[0m key    = (sensitive value)
          �[32m+�[0m�[0m secret = (sensitive value)
        }

      �[32m+�[0m�[0m schema_registry_cluster {
          �[32m+�[0m�[0m id = (known after apply)
        }
    }

�[1mPlan:�[0m 17 to add, 0 to change, 0 to destroy.
�[0m
Changes to Outputs:
  �[32m+�[0m�[0m acks                                          = "all"
  �[32m+�[0m�[0m bootstrap_servers                             = (known after apply)
  �[32m+�[0m�[0m client_dns_lookup                             = "use_all_dns_ips"
  �[32m+�[0m�[0m environment                                   = "cloud"
  �[32m+�[0m�[0m sasl_jaas_config                              = (known after apply)
  �[32m+�[0m�[0m sasl_mechanism                                = "PLAIN"
  �[32m+�[0m�[0m schema_registry_basic_auth_credentials_source = "USER_INFO"
  �[32m+�[0m�[0m schema_registry_basic_auth_user_info          = (known after apply)
  �[32m+�[0m�[0m schema_registry_url                           = (known after apply)
  �[32m+�[0m�[0m security_protocol                             = "SASL_SSL"
  �[32m+�[0m�[0m session_timeout_ms                            = 45000
  �[32m+�[0m�[0m topic_name                                    = "flights_avro"
�[90m
─────────────────────────────────────────────────────────────────────────────�[0m

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Pushed by: @renovate[bot], Action: pull_request
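For context, the change this PR actually makes is the provider version pin. A minimal `required_providers` block matching the update would look roughly like the following sketch; only the 2.24.0 -> 2.25.0 bump comes from the PR description, while the registry source address and file layout are assumptions:

```hcl
# Sketch of the provider pin this Renovate PR updates.
# Source address assumed to be the standard registry path for the
# Confluent provider; the repository's actual file is not shown here.
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "2.25.0" # bumped from 2.24.0 by this PR
    }
  }
}
```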

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from 0c0aaa4 to 1383204 Compare May 5, 2025 22:11
@github-actions

github-actions bot commented May 5, 2025

Terraform Format and Style 🖌failure

Terraform Initialization ⚙️success

Terraform Validation 🤖success

Terraform Plan 📖success

Show Plan
data.confluent_organization.main: Reading...
data.confluent_flink_region.us-east-2: Reading...
data.confluent_flink_region.main: Reading...
data.confluent_flink_region.us-east-2: Read complete after 0s [id=aws.us-east-2]
data.confluent_organization.main: Read complete after 0s [id=178cb46b-d78e-435d-8b6e-d8d023a08e6f]
data.confluent_flink_region.main: Read complete after 0s [id=aws.us-east-2]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.confluent_schema_registry_cluster.advanced will be read during apply
  # (config refers to values not yet known)
 <= data "confluent_schema_registry_cluster" "advanced" {
      + api_version                     = (known after apply)
      + catalog_endpoint                = (known after apply)
      + cloud                           = (known after apply)
      + display_name                    = (known after apply)
      + id                              = (known after apply)
      + kind                            = (known after apply)
      + package                         = (known after apply)
      + private_regional_rest_endpoints = (known after apply)
      + private_rest_endpoint           = (known after apply)
      + region                          = (known after apply)
      + resource_name                   = (known after apply)
      + rest_endpoint                   = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_api_key.app-manager-flink-api-key will be created
  + resource "confluent_api_key" "app-manager-flink-api-key" {
      + description            = "Flink API Key that is owned by 'app-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "app-manager-flink-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = "aws.us-east-2"
          + kind        = "Region"

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.app-manager-kafka-api-key will be created
  + resource "confluent_api_key" "app-manager-kafka-api-key" {
      + description            = "Kafka API Key that is owned by 'app-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "app-manager-kafka-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.env-manager-schema-registry-api-key will be created
  + resource "confluent_api_key" "env-manager-schema-registry-api-key" {
      + description            = "Schema Registry API Key that is owned by 'env-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "env-manager-schema-registry-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_environment.cc_env will be created
  + resource "confluent_environment" "cc_env" {
      + display_name  = "java_flink_workshop"
      + id            = (known after apply)
      + resource_name = (known after apply)

      + stream_governance {
          + package = "ESSENTIALS"
        }
    }

  # confluent_flink_compute_pool.compute_pool_1 will be created
  + resource "confluent_flink_compute_pool" "compute_pool_1" {
      + api_version   = (known after apply)
      + cloud         = "AWS"
      + display_name  = "-workshop_compute_pool_1"
      + id            = (known after apply)
      + kind          = (known after apply)
      + max_cfu       = 10
      + region        = "us-east-2"
      + resource_name = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "workshop"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-2"
      + rest_endpoint      = (known after apply)

      + basic {}

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_topic.flights_avro will be created
  + resource "confluent_kafka_topic" "flights_avro" {
      + config           = (known after apply)
      + id               = (known after apply)
      + partitions_count = 10
      + rest_endpoint    = (known after apply)
      + topic_name       = "flights"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + kafka_cluster {
          + id = (known after apply)
        }
    }

  # confluent_role_binding.app-manager-assigner will be created
  + resource "confluent_role_binding" "app-manager-assigner" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "Assigner"
    }

  # confluent_role_binding.app-manager-flink-developer will be created
  + resource "confluent_role_binding" "app-manager-flink-developer" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "FlinkAdmin"
    }

  # confluent_role_binding.app-manager-kafka-cluster-admin will be created
  + resource "confluent_role_binding" "app-manager-kafka-cluster-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "CloudClusterAdmin"
    }

  # confluent_role_binding.env-manager-environment-admin will be created
  + resource "confluent_role_binding" "env-manager-environment-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }

  # confluent_role_binding.statements-runner-environment-admin will be created
  + resource "confluent_role_binding" "statements-runner-environment-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }

  # confluent_schema_registry_cluster_config.schema_registry_cluster_config will be created
  + resource "confluent_schema_registry_cluster_config" "schema_registry_cluster_config" {
      + compatibility_group = (known after apply)
      + compatibility_level = "BACKWARD"
      + id                  = (known after apply)
      + rest_endpoint       = (known after apply)

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + schema_registry_cluster {
          + id = (known after apply)
        }
    }

  # confluent_service_account.app-manager will be created
  + resource "confluent_service_account" "app-manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "workshop-app-manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.env-manager will be created
  + resource "confluent_service_account" "env-manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage 'Staging' environment"
      + display_name = "workshop-env-manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.statements-runner will be created
  + resource "confluent_service_account" "statements-runner" {
      + api_version  = (known after apply)
      + description  = "Service account for running Flink Statements in 'inventory' Kafka cluster"
      + display_name = "java_flink_workshop-statements-runner"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_subject_config.flights_value_avro will be created
  + resource "confluent_subject_config" "flights_value_avro" {
      + compatibility_group = (known after apply)
      + compatibility_level = "BACKWARD"
      + id                  = (known after apply)
      + rest_endpoint       = (known after apply)
      + subject_name        = "flights-value"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + schema_registry_cluster {
          + id = (known after apply)
        }
    }

Plan: 17 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + acks                                          = "all"
  + bootstrap_servers                             = (known after apply)
  + client_dns_lookup                             = "use_all_dns_ips"
  + environment                                   = "cloud"
  + flink_api_key                                 = (known after apply)
  + flink_api_secret                              = (known after apply)
  + sasl_jaas_config                              = (known after apply)
  + sasl_mechanism                                = "PLAIN"
  + schema_registry_basic_auth_credentials_source = "USER_INFO"
  + schema_registry_basic_auth_user_info          = (known after apply)
  + schema_registry_url                           = (known after apply)
  + security_protocol                             = "SASL_SSL"
  + session_timeout_ms                            = 45000
  + topic_name                                    = "flights"

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Pushed by: @renovate[bot], Action: pull_request
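The planned `flights` topic above would come from a resource block roughly like the following sketch, reconstructed from the plan output; the cross-resource reference names are assumptions, since the repository's actual configuration is not shown in this PR:

```hcl
# Sketch of a confluent_kafka_topic resource consistent with the plan above.
# The references to confluent_kafka_cluster.kafka_cluster and
# confluent_api_key.app-manager-kafka-api-key are assumed names.
resource "confluent_kafka_topic" "flights_avro" {
  topic_name       = "flights"
  partitions_count = 10
  rest_endpoint    = confluent_kafka_cluster.kafka_cluster.rest_endpoint

  kafka_cluster {
    id = confluent_kafka_cluster.kafka_cluster.id
  }

  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
}
```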

@gAmUssA gAmUssA merged commit e381b09 into main May 5, 2025
1 of 5 checks passed
@renovate renovate bot deleted the renovate/confluent-2.x branch May 5, 2025 22:20