Conversation


@renovate renovate bot commented Mar 29, 2025

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| confluent (source) | required_provider | minor | `2.22.0` -> `2.23.0` |
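For context, a bump like the one above lands in the module's `required_providers` block, which looks roughly like this (a sketch; the actual file and version-constraint style in this repository may differ):

```hcl
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "2.23.0" # updated from 2.22.0 by this PR
    }
  }
}
```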

Release Notes

confluentinc/terraform-provider-confluent (confluent)

v2.23.0

Full Changelog

New features:

  • Added new private_regional_rest_endpoints attribute for confluent_schema_registry_cluster data-source and confluent_schema_registry_clusters data-source.
  • Added new display_name argument for confluent_network_link_service data-source.
  • Released the reserved_cidr attribute and zone_info blocks in the General Availability lifecycle stage. They are available only for AWS networks with the PEERING and TRANSITGATEWAY connection types.
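The additions in the release notes above could be exercised roughly as follows. This is a sketch based only on the changelog entries, not on this repository's configuration; the environment ID, display names, and CIDR values are placeholders:

```hcl
# Placeholder environment ID used throughout: "env-abc123".
data "confluent_schema_registry_cluster" "example" {
  environment {
    id = "env-abc123"
  }
}

# New in 2.23.0 per the changelog: private_regional_rest_endpoints attribute.
output "sr_private_regional_rest_endpoints" {
  value = data.confluent_schema_registry_cluster.example.private_regional_rest_endpoints
}

# New in 2.23.0 per the changelog: look up a network link service by display_name.
data "confluent_network_link_service" "example" {
  display_name = "my-network-link-service"
  environment {
    id = "env-abc123"
  }
}

# reserved_cidr and zone_info are GA in 2.23.0 for AWS networks with
# PEERING or TRANSITGATEWAY connection types (values below are illustrative).
resource "confluent_network" "example" {
  display_name     = "peering-network"
  cloud            = "AWS"
  region           = "us-east-2"
  connection_types = ["PEERING"]
  reserved_cidr    = "172.20.255.0/24"

  zone_info {
    zone_id = "use2-az1"
    cidr    = "10.10.0.0/27"
  }

  environment {
    id = "env-abc123"
  }
}
```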

Bug fixes:

Examples:


Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@github-actions

Terraform Format and Style 🖌 failure

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
data.confluent_organization.main: Reading...
data.confluent_flink_region.us-east-2: Reading...
data.confluent_flink_region.main: Reading...
data.confluent_organization.main: Read complete after 0s [id=178cb46b-d78e-435d-8b6e-d8d023a08e6f]
data.confluent_flink_region.main: Read complete after 0s [id=aws.us-east-2]
data.confluent_flink_region.us-east-2: Read complete after 0s [id=aws.us-east-2]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.confluent_schema_registry_cluster.advanced will be read during apply
  # (config refers to values not yet known)
 <= data "confluent_schema_registry_cluster" "advanced" {
      + api_version                     = (known after apply)
      + catalog_endpoint                = (known after apply)
      + cloud                           = (known after apply)
      + display_name                    = (known after apply)
      + id                              = (known after apply)
      + kind                            = (known after apply)
      + package                         = (known after apply)
      + private_regional_rest_endpoints = (known after apply)
      + private_rest_endpoint           = (known after apply)
      + region                          = (known after apply)
      + resource_name                   = (known after apply)
      + rest_endpoint                   = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_api_key.app-manager-flink-api-key will be created
  + resource "confluent_api_key" "app-manager-flink-api-key" {
      + description            = "Flink API Key that is owned by 'app-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "app-manager-flink-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = "aws.us-east-2"
          + kind        = "Region"

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.app-manager-kafka-api-key will be created
  + resource "confluent_api_key" "app-manager-kafka-api-key" {
      + description            = "Kafka API Key that is owned by 'app-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "app-manager-kafka-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.env-manager-schema-registry-api-key will be created
  + resource "confluent_api_key" "env-manager-schema-registry-api-key" {
      + description            = "Schema Registry API Key that is owned by 'env-manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "env-manager-schema-registry-api-key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_environment.cc_env will be created
  + resource "confluent_environment" "cc_env" {
      + display_name  = "java_flink_workshop"
      + id            = (known after apply)
      + resource_name = (known after apply)

      + stream_governance {
          + package = "ESSENTIALS"
        }
    }

  # confluent_flink_compute_pool.compute_pool_1 will be created
  + resource "confluent_flink_compute_pool" "compute_pool_1" {
      + api_version   = (known after apply)
      + cloud         = "AWS"
      + display_name  = "-workshop_compute_pool_1"
      + id            = (known after apply)
      + kind          = (known after apply)
      + max_cfu       = 10
      + region        = "us-east-2"
      + resource_name = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "workshop"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-2"
      + rest_endpoint      = (known after apply)

      + basic {}

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_topic.flights_avro will be created
  + resource "confluent_kafka_topic" "flights_avro" {
      + config           = (known after apply)
      + id               = (known after apply)
      + partitions_count = 10
      + rest_endpoint    = (known after apply)
      + topic_name       = "flights_avro"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + kafka_cluster {
          + id = (known after apply)
        }
    }

  # confluent_role_binding.app-manager-assigner will be created
  + resource "confluent_role_binding" "app-manager-assigner" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "Assigner"
    }

  # confluent_role_binding.app-manager-flink-developer will be created
  + resource "confluent_role_binding" "app-manager-flink-developer" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "FlinkAdmin"
    }

  # confluent_role_binding.app-manager-kafka-cluster-admin will be created
  + resource "confluent_role_binding" "app-manager-kafka-cluster-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "CloudClusterAdmin"
    }

  # confluent_role_binding.env-manager-environment-admin will be created
  + resource "confluent_role_binding" "env-manager-environment-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }

  # confluent_role_binding.statements-runner-environment-admin will be created
  + resource "confluent_role_binding" "statements-runner-environment-admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "EnvironmentAdmin"
    }

  # confluent_schema_registry_cluster_config.schema_registry_cluster_config will be created
  + resource "confluent_schema_registry_cluster_config" "schema_registry_cluster_config" {
      + compatibility_group = (known after apply)
      + compatibility_level = "BACKWARD"
      + id                  = (known after apply)
      + rest_endpoint       = (known after apply)

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + schema_registry_cluster {
          + id = (known after apply)
        }
    }

  # confluent_service_account.app-manager will be created
  + resource "confluent_service_account" "app-manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "workshop-app-manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.env-manager will be created
  + resource "confluent_service_account" "env-manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage 'Staging' environment"
      + display_name = "workshop-env-manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.statements-runner will be created
  + resource "confluent_service_account" "statements-runner" {
      + api_version  = (known after apply)
      + description  = "Service account for running Flink Statements in 'inventory' Kafka cluster"
      + display_name = "java_flink_workshop-statements-runner"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_subject_config.flights_value_avro will be created
  + resource "confluent_subject_config" "flights_value_avro" {
      + compatibility_group = (known after apply)
      + compatibility_level = "BACKWARD"
      + id                  = (known after apply)
      + rest_endpoint       = (known after apply)
      + subject_name        = "flights_avro-value"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }

      + schema_registry_cluster {
          + id = (known after apply)
        }
    }

Plan: 17 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + acks                                          = "all"
  + bootstrap_servers                             = (known after apply)
  + client_dns_lookup                             = "use_all_dns_ips"
  + environment                                   = "cloud"
  + sasl_jaas_config                              = (known after apply)
  + sasl_mechanism                                = "PLAIN"
  + schema_registry_basic_auth_credentials_source = "USER_INFO"
  + schema_registry_basic_auth_user_info          = (known after apply)
  + schema_registry_url                           = (known after apply)
  + security_protocol                             = "SASL_SSL"
  + session_timeout_ms                            = 45000
  + topic_name                                    = "flights_avro"

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot merged commit bc21b6d into main Mar 29, 2025
5 checks passed
@renovate renovate bot deleted the renovate/confluent-2.x branch March 29, 2025 06:37