Conversation

@renovate renovate bot commented Mar 1, 2025

This PR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| confluent (source) | required_provider | minor | 2.18.0 -> 2.19.0 |
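
In practice the update is just a version-constraint bump in the root module's required_providers block. A minimal sketch (this module may pin the version differently, e.g. with a "~>" constraint):

```hcl
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "2.19.0" # previously 2.18.0
    }
  }
}
```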

Release Notes

confluentinc/terraform-provider-confluent (confluent)

v2.19.0

Compare Source

Full Changelog

New features:

  • Updated the docs and the error message for the Resource Importer tool.
  • Added an additional cluster_link_id attribute to the confluent_cluster_link resource.
  • Added a confluent_cluster_link data source (see the sketch after this list).
  • Added catalog_http_endpoint for Stream Catalog API resources.
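
A hypothetical sketch of the new data source, reusing cluster and API-key names from this repository's configuration. The argument names here are assumptions modeled on the existing confluent_cluster_link resource; consult the provider docs for the authoritative schema:

```hcl
# Assumed shape: link_name, rest_endpoint, kafka_cluster, and credentials
# mirror the confluent_cluster_link resource and may differ in practice.
data "confluent_cluster_link" "example" {
  link_name     = "my-cluster-link" # hypothetical link name
  rest_endpoint = confluent_kafka_cluster.kafka_cluster.rest_endpoint

  kafka_cluster {
    id = confluent_kafka_cluster.kafka_cluster.id
  }

  credentials {
    key    = confluent_api_key.kafka_manager_kafka_api_key.id
    secret = confluent_api_key.kafka_manager_kafka_api_key.secret
  }
}

# The new cluster_link_id attribute would then be exposed as:
output "cluster_link_id" {
  value = data.confluent_cluster_link.example.cluster_link_id
}
```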

Bug fixes:

  • Fixed "rest_endpoint is nil or empty for Schema Registry Cluster" error in confluent_api_key resource.

Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@github-actions
Copy link

github-actions bot commented Mar 1, 2025

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Terraform Plan 📖 success

Show Plan
data.confluent_flink_region.main_flink_region: Reading...
data.confluent_flink_region.main_flink_region: Read complete after 0s [id=aws.us-east-2]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.confluent_schema_registry_cluster.advanced will be read during apply
  # (config refers to values not yet known)
 <= data "confluent_schema_registry_cluster" "advanced" {
      + api_version           = (known after apply)
      + catalog_endpoint      = (known after apply)
      + cloud                 = (known after apply)
      + display_name          = (known after apply)
      + id                    = (known after apply)
      + kind                  = (known after apply)
      + package               = (known after apply)
      + private_rest_endpoint = (known after apply)
      + region                = (known after apply)
      + resource_name         = (known after apply)
      + rest_endpoint         = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_api_key.flink_developer_api_key will be created
  + resource "confluent_api_key" "flink_developer_api_key" {
      + description            = "Flink Developer API Key that is owned by 'flink_developer' service account"
      + disable_wait_for_ready = false
      + display_name           = "flink_developer_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = "fcpm/v2"
          + id          = "aws.us-east-2"
          + kind        = "Region"

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.kafka_developer_kafka_api_key will be created
  + resource "confluent_api_key" "kafka_developer_kafka_api_key" {
      + description            = "Kafka API Key that is owned by 'kafka_developer' service account"
      + disable_wait_for_ready = false
      + display_name           = "kafka_developer_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.kafka_manager_kafka_api_key will be created
  + resource "confluent_api_key" "kafka_manager_kafka_api_key" {
      + description            = "Kafka API Key that is owned by 'kafka_manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "kafka_manager_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_api_key.sr_manager_kafka_api_key will be created
  + resource "confluent_api_key" "sr_manager_kafka_api_key" {
      + description            = "SR API Key that is owned by 'sr_manager' service account"
      + disable_wait_for_ready = false
      + display_name           = "sr_manager_kafka_api_key"
      + id                     = (known after apply)
      + secret                 = (sensitive value)

      + managed_resource {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)

          + environment {
              + id = (known after apply)
            }
        }

      + owner {
          + api_version = (known after apply)
          + id          = (known after apply)
          + kind        = (known after apply)
        }
    }

  # confluent_environment.cc_env will be created
  + resource "confluent_environment" "cc_env" {
      + display_name  = "java_flink_workshop"
      + id            = (known after apply)
      + resource_name = (known after apply)

      + stream_governance {
          + package = "ADVANCED"
        }
    }

  # confluent_flink_compute_pool.main_flink_pool will be created
  + resource "confluent_flink_compute_pool" "main_flink_pool" {
      + api_version   = (known after apply)
      + cloud         = "AWS"
      + display_name  = "main_flink_pool"
      + id            = (known after apply)
      + kind          = (known after apply)
      + max_cfu       = 5
      + region        = "us-east-2"
      + resource_name = (known after apply)

      + environment {
          + id = (known after apply)
        }
    }

  # confluent_kafka_cluster.kafka_cluster will be created
  + resource "confluent_kafka_cluster" "kafka_cluster" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "AWS"
      + display_name       = "workshop"
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-east-2"
      + rest_endpoint      = (known after apply)

      + environment {
          + id = (known after apply)
        }

      + standard {}
    }

  # confluent_role_binding.fd_flink_developer will be created
  + resource "confluent_role_binding" "fd_flink_developer" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "FlinkDeveloper"
    }

  # confluent_role_binding.fd_kafka_read will be created
  + resource "confluent_role_binding" "fd_kafka_read" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.fd_kafka_write will be created
  + resource "confluent_role_binding" "fd_kafka_write" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.fd_schema_registry_read will be created
  + resource "confluent_role_binding" "fd_schema_registry_read" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.fd_schema_registry_write will be created
  + resource "confluent_role_binding" "fd_schema_registry_write" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.kafka_developer_read_all_topics will be created
  + resource "confluent_role_binding" "kafka_developer_read_all_topics" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperRead"
    }

  # confluent_role_binding.kafka_developer_write_all_topics will be created
  + resource "confluent_role_binding" "kafka_developer_write_all_topics" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DeveloperWrite"
    }

  # confluent_role_binding.kafka_manager_kafka_cluster_admin will be created
  + resource "confluent_role_binding" "kafka_manager_kafka_cluster_admin" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "CloudClusterAdmin"
    }

  # confluent_role_binding.sr_manager_data_steward will be created
  + resource "confluent_role_binding" "sr_manager_data_steward" {
      + crn_pattern = (known after apply)
      + id          = (known after apply)
      + principal   = (known after apply)
      + role_name   = "DataSteward"
    }

  # confluent_service_account.flink_developer will be created
  + resource "confluent_service_account" "flink_developer" {
      + api_version  = (known after apply)
      + description  = "Service account for flink developer"
      + display_name = "java_flink_workshop-flink_developer"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.kafka_developer will be created
  + resource "confluent_service_account" "kafka_developer" {
      + api_version  = (known after apply)
      + description  = "Service account for developer using Kafka cluster"
      + display_name = "java_flink_workshop-kafka_developer"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.kafka_manager will be created
  + resource "confluent_service_account" "kafka_manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "java_flink_workshop-kafka_manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

  # confluent_service_account.sr_manager will be created
  + resource "confluent_service_account" "sr_manager" {
      + api_version  = (known after apply)
      + description  = "Service account to manage Kafka cluster"
      + display_name = "java_flink_workshop-sr_manager"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

Plan: 20 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cloud                   = "AWS"
  + compute-pool-id         = (known after apply)
  + environment-id          = (known after apply)
  + flink-api-key           = (known after apply)
  + flink-api-secret        = (known after apply)
  + kafka_bootstrap_servers = (known after apply)
  + kafka_sasl_jaas_config  = (known after apply)
  + organization-id         = "178cb46b-d78e-435d-8b6e-d8d023a08e6f"
  + region                  = "us-east-2"
  + registry_key            = (known after apply)
  + registry_secret         = (known after apply)
  + registry_url            = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"
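
For reference, the compute-pool entry in the plan above corresponds to configuration along these lines; this is a sketch reconstructed from the planned values, not necessarily the module's exact source:

```hcl
resource "confluent_flink_compute_pool" "main_flink_pool" {
  display_name = "main_flink_pool"
  cloud        = "AWS"
  region       = "us-east-2"
  max_cfu      = 5 # maximum CFUs the pool may scale to, per the plan

  environment {
    id = confluent_environment.cc_env.id
  }
}
```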

Pushed by: @renovate[bot], Action: pull_request

@renovate renovate bot merged commit f30adad into main Mar 1, 2025
5 checks passed
@renovate renovate bot deleted the renovate/confluent-2.x branch March 1, 2025 05:24