Conversation

@renovate renovate bot commented Feb 27, 2025

This PR contains the following updates:

Package            | Type              | Update | Change
-------------------|-------------------|--------|-----------------
confluent (source) | required_provider | minor  | 2.12.0 -> 2.18.0
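
In practice this update amounts to bumping the version constraint in the Terraform required_providers block; a minimal sketch of the change (the file name and exact pinning style used in this repo are assumptions, not taken from the diff):

    terraform {
      required_providers {
        confluent = {
          source  = "confluentinc/confluent"
          version = "2.18.0" # bumped from 2.12.0 by this PR
        }
      }
    }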

Release Notes

confluentinc/terraform-provider-confluent (confluent)

v2.18.0

Compare Source

Full Changelog

Note:

  • Make sure to remove the "confluent.topic.type" topic setting from the config block attribute of your confluent_kafka_topic resource instances in your TF configuration if you observe a related TF drift during the terraform plan command (#427); see the sketch below.
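
    A minimal sketch of what removing the setting looks like (the topic name and remaining settings here are hypothetical, and the required cluster, endpoint, and credentials arguments are omitted for brevity):

      resource "confluent_kafka_topic" "orders" {
        topic_name = "orders"
        config = {
          # "confluent.topic.type" = "..."  # drop this broker-managed setting to clear the drift
          "cleanup.policy" = "delete"
          "retention.ms"   = "604800000"
        }
      }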

Bug fixes:

  • Fixed "Missing attribute "confluent.topic.type" of Kafka topic config in Terraform provider" issue (#​427).
  • Fixed "Resource recreation when using ALL_GOOGLE_APIS instead of all-google-apis for private service connect endpoint target" issue (#​544).
  • Added support for descriptive errors (displaying the raw response body when an error can't be parsed) for confluent_kafka_topic resource instances instead of showing "undefined response type".
  • Resolved TF drift for custom connectors.
  • Fixed "invalid reflect.Value" when displaying errors.
  • Updated TF docs.

v2.17.0

Compare Source

Full Changelog

New features:

Examples:

v2.16.0

Compare Source

Full Changelog

Bug fixes:

  • Fixed "error creating Schema: 403 Forbidden: Upgrade to Stream Governance Advanced package to use schema rules" issue (#​543).

v2.15.0

Compare Source

Full Changelog

New features:

Bug fixes:

  • Fixed "Terraform provider does not work well when deploying a Flink Model/Statement that uses sql.secrets.*" issue (#​397).
  • Fixed "Unable to import confluent_tag using Option #​2" issue (#​512).
  • Fixed "Unable to remove ruleset in confluent_schema" issue.

v2.14.0

Compare Source

Full Changelog

New features:

  • Updated the confluent_flink_artifact resource and data source to deprecate the class attribute and add the documentation_link attribute.
    The class attribute will be removed in the next major version of the provider (3.0.0); refer to the Upgrade Guide for details. A hedged migration sketch follows below.
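
    A migration sketch under stated assumptions (the attribute values and the artifact's other arguments are illustrative, not taken from this changelog):

      resource "confluent_flink_artifact" "udf" {
        display_name       = "my-udf"  # hypothetical
        cloud              = "AWS"
        region             = "us-east-2"
        # class            = "io.example.MyScalarFunction"  # deprecated; to be removed in 3.0.0
        documentation_link = "https://example.com/docs/my-udf"  # added in 2.14.0
        # other required arguments (e.g. the artifact file) omitted for brevity
        environment {
          id = confluent_environment.cc_env.id
        }
      }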

Bug fixes:

  • Fixed "Unable to create API key with managed_resource block" issue (#​538).

v2.13.0

Compare Source

Full Changelog

New features:

  • Updated the confluent_api_key resource to support Tableflow API Keys.
  • Added support for resolving private DNS names from a DNS resolver within your own Google Cloud VPC via DNS forwarding. This enables fully managed connectors to access endpoints that use private DNS zones. For details, see DNS forwarding for Google Cloud Peering.
  • Added support for outbound Google Cloud Private Service Connect connections using Egress Private Service Connect Endpoints, which let fully managed Confluent connectors reach services from GCP Private Link Service providers such as Google, MongoDB, Snowflake, and others.
    With this capability, Confluent Cloud now supports private outbound connections for Dedicated clusters across all three cloud providers: AWS, Azure, and Google Cloud. For details, see Google Cloud Egress Private Service Connect Endpoints for Dedicated Clusters.

Bug fixes:

  • Resolved an issue with the confluent_flink_artifact resource during the presigned-URL creation phase.
  • Fixed the "404 error in re-deploying schemas" issue (#296).
  • Updated docs (#506).
  • Resolved 1 Dependabot alert.

Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/confluent-2.x branch 2 times, most recently from 0ff8fa3 to 9effabc on February 27, 2025 04:54
@github-actions

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 failure

Show Plan

data.confluent_flink_region.main_flink_region: Reading...

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform planned the following actions, but then encountered a problem:

  # data.confluent_schema_registry_cluster.advanced will be read during apply
  # (config refers to values not yet known)
 <= data "confluent_schema_registry_cluster" "advanced" {
      + api_version           = (known after apply)
      + catalog_endpoint      = (known after apply)
      + cloud                 = (known after apply)
      + display_name          = (known after apply)
      + id                    = (known after apply)
      + kind                  = (known after apply)
      + package               = (known after apply)
      + private_rest_endpoint = (known after apply)
      + region                = (known after apply)
      + resource_name         = (known after apply)
      + rest_endpoint         = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_api_key.kafka_developer_kafka_api_key will be created
  + resource "confluent_api_key" "kafka_developer_kafka_api_key" {
      + description            = "Kafka API Key that is owned by 'kafka_developer' service account"
      + disable_wait_for_ready = false
      + display_name           = "kafka_developer_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.kafka_manager_kafka_api_key will be created
  + resource "confluent_api_key" "kafka_manager_kafka_api_key" {
      + description            = "Kafka API Key that is owned by 'kafka_manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "kafka_manager_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.sr_manager_kafka_api_key will be created
  + resource "confluent_api_key" "sr_manager_kafka_api_key" {
      + description            = "SR API Key that is owned by 'sr_manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "sr_manager_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_environment.cc_env will be created
  + resource "confluent_environment" "cc_env" {
      + display_name  = "java_flink_workshop"
      + id            = (known after apply)
      + resource_name = (known after apply)

      + stream_governance {
          + package = "ADVANCED"
        }
    }

  # confluent_flink_compute_pool.main_flink_pool will be created
  + resource "confluent_flink_compute_pool" "main_flink_pool" {
      + api_version   = (known after apply)
      + cloud         = "AWS"
      + display_name  = "main_flink_pool"
      + id            = (known after apply)
      + kind          = (known after apply)
      + max_cfu       = 5
      + region        = "us-east-2"
      + resource_name = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "workshop"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-2"
      + rest_endpoint      = (known after apply)

      + environment {
          + id = (known after apply)
        }

      + standard {}
    }

  # confluent_role_binding.fd_flink_developer will be created
  + resource "confluent_role_binding" "fd_flink_developer" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "FlinkDeveloper"
    }

  # confluent_role_binding.fd_kafka_read will be created
  + resource "confluent_role_binding" "fd_kafka_read" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.fd_kafka_write will be created
  + resource "confluent_role_binding" "fd_kafka_write" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.fd_schema_registry_read will be created
  + resource "confluent_role_binding" "fd_schema_registry_read" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.fd_schema_registry_write will be created
  + resource "confluent_role_binding" "fd_schema_registry_write" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.kafka_developer_read_all_topics will be created
  + resource "confluent_role_binding" "kafka_developer_read_all_topics" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.kafka_developer_write_all_topics will be created
  + resource "confluent_role_binding" "kafka_developer_write_all_topics" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.kafka_manager_kafka_cluster_admin will be created
  + resource "confluent_role_binding" "kafka_manager_kafka_cluster_admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "CloudClusterAdmin"
    }

  # confluent_role_binding.sr_manager_data_steward will be created
  + resource "confluent_role_binding" "sr_manager_data_steward" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DataSteward"
    }

  # confluent_service_account.flink_developer will be created
  + resource "confluent_service_account" "flink_developer" {
      + api_version  = (known after apply)
      + description  = "Service account for flink developer"
      + display_name = "java_flink_workshop-flink_developer"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.kafka_developer will be created
  + resource "confluent_service_account" "kafka_developer" {
      + api_version  = (known after apply)
      + description  = "Service account for developer using Kafka cluster"
      + display_name = "java_flink_workshop-kafka_developer"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.kafka_manager will be created
  + resource "confluent_service_account" "kafka_manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "java_flink_workshop-kafka_manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.sr_manager will be created
  + resource "confluent_service_account" "sr_manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "java_flink_workshop-sr_manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cloud                   = "AWS"
  + compute-pool-id         = (known after apply)
  + environment-id          = (known after apply)
  + kafka_bootstrap_servers = (known after apply)
  + kafka_sasl_jaas_config  = (known after apply)
  + organization-id         = "178cb46b-d78e-435d-8b6e-d8d023a08e6f"
  + region                  = "us-east-2"
  + registry_key            = (known after apply)
  + registry_secret         = (known after apply)
  + registry_url            = (known after apply)

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from 9effabc to 99a6598 on February 27, 2025 04:55
@github-actions

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 failure

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot force-pushed the renovate/confluent-2.x branch from 99a6598 to 6921cb9 on February 27, 2025 05:01
@github-actions

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan

data.confluent_flink_region.main_flink_region: Reading...
data.confluent_flink_region.main_flink_region: Read complete after 0s [id=aws.us-east-2]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.confluent_schema_registry_cluster.advanced will be read during apply
  # (config refers to values not yet known)
 <= data "confluent_schema_registry_cluster" "advanced" {
      + api_version           = (known after apply)
      + catalog_endpoint      = (known after apply)
      + cloud                 = (known after apply)
      + display_name          = (known after apply)
      + id                    = (known after apply)
      + kind                  = (known after apply)
      + package               = (known after apply)
      + private_rest_endpoint = (known after apply)
      + region                = (known after apply)
      + resource_name         = (known after apply)
      + rest_endpoint         = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_api_key.flink_developer_api_key will be created
  + resource "confluent_api_key" "flink_developer_api_key" {
      + description            = "Flink Developer API Key that is owned by 'flink_developer' service account"
      + disable_wait_for_ready = false
      + display_name           = "flink_developer_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = "fcpm/v2"
          + id          = "aws.us-east-2"
          + kind        = "Region"

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.kafka_developer_kafka_api_key will be created
  + resource "confluent_api_key" "kafka_developer_kafka_api_key" {
      + description            = "Kafka API Key that is owned by 'kafka_developer' service account"
      + disable_wait_for_ready = false
      + display_name           = "kafka_developer_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.kafka_manager_kafka_api_key will be created
  + resource "confluent_api_key" "kafka_manager_kafka_api_key" {
      + description            = "Kafka API Key that is owned by 'kafka_manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "kafka_manager_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.sr_manager_kafka_api_key will be created
  + resource "confluent_api_key" "sr_manager_kafka_api_key" {
      + description            = "SR API Key that is owned by 'sr_manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "sr_manager_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_environment.cc_env will be created
  + resource "confluent_environment" "cc_env" {
      + display_name  = "java_flink_workshop"
      + id            = (known after apply)
      + resource_name = (known after apply)

      + stream_governance {
          + package = "ADVANCED"
        }
    }

  # confluent_flink_compute_pool.main_flink_pool will be created
  + resource "confluent_flink_compute_pool" "main_flink_pool" {
      + api_version   = (known after apply)
      + cloud         = "AWS"
      + display_name  = "main_flink_pool"
      + id            = (known after apply)
      + kind          = (known after apply)
      + max_cfu       = 5
      + region        = "us-east-2"
      + resource_name = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "workshop"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-2"
      + rest_endpoint      = (known after apply)

      + environment {
          + id = (known after apply)
        }

      + standard {}
    }

  # confluent_role_binding.fd_flink_developer will be created
  + resource "confluent_role_binding" "fd_flink_developer" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "FlinkDeveloper"
    }

  # confluent_role_binding.fd_kafka_read will be created
  + resource "confluent_role_binding" "fd_kafka_read" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.fd_kafka_write will be created
  + resource "confluent_role_binding" "fd_kafka_write" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.fd_schema_registry_read will be created
  + resource "confluent_role_binding" "fd_schema_registry_read" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.fd_schema_registry_write will be created
  + resource "confluent_role_binding" "fd_schema_registry_write" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.kafka_developer_read_all_topics will be created
  + resource "confluent_role_binding" "kafka_developer_read_all_topics" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.kafka_developer_write_all_topics will be created
  + resource "confluent_role_binding" "kafka_developer_write_all_topics" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.kafka_manager_kafka_cluster_admin will be created
  + resource "confluent_role_binding" "kafka_manager_kafka_cluster_admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "CloudClusterAdmin"
    }

  # confluent_role_binding.sr_manager_data_steward will be created
  + resource "confluent_role_binding" "sr_manager_data_steward" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DataSteward"
    }

  # confluent_service_account.flink_developer will be created
  + resource "confluent_service_account" "flink_developer" {
      + api_version  = (known after apply)
      + description  = "Service account for flink developer"
      + display_name = "java_flink_workshop-flink_developer"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.kafka_developer will be created
  + resource "confluent_service_account" "kafka_developer" {
      + api_version  = (known after apply)
      + description  = "Service account for developer using Kafka cluster"
      + display_name = "java_flink_workshop-kafka_developer"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.kafka_manager will be created
  + resource "confluent_service_account" "kafka_manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "java_flink_workshop-kafka_manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.sr_manager will be created
  + resource "confluent_service_account" "sr_manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "java_flink_workshop-sr_manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

Plan: 20 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cloud                   = "AWS"
  + compute-pool-id         = (known after apply)
  + environment-id          = (known after apply)
  + flink-api-key           = (known after apply)
  + flink-api-secret        = (known after apply)
  + kafka_bootstrap_servers = (known after apply)
  + kafka_sasl_jaas_config  = (known after apply)
  + organization-id         = "178cb46b-d78e-435d-8b6e-d8d023a08e6f"
  + region                  = "us-east-2"
  + registry_key            = (known after apply)
  + registry_secret         = (known after apply)
  + registry_url            = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Pushed by: @renovate[bot], Action: pull_request

@gAmUssA merged commit 449b25b into main on Feb 27, 2025 (5 checks passed)
@gAmUssA deleted the renovate/confluent-2.x branch on February 27, 2025 05:05