Commit 7e10e9f

fourpointfour, siddharth2411, Shubham, asharma-yb, and ddhodge authored
[WIP][docs] Documentation for logical replication with PG connector (yugabyte#23065)
* initial commit for logical replication docs
* title changes
* changes to view table
* fixed line break
* fixed line break
* added content for delete and update
* added more content
* replaced hyperlink todos with reminders
* added snapshot metrics
* added more content
* added more config properties to docs
* added more config properties to docs
* added more config properties to docs
* replaced postgresql instances with yugabytedb
* added properties
* added complete properties
* changed postgresql to yugabytedb
* added example for all record types
* fixed highlighting of table header
* added type representations
* added type representations
* full content in now;
* full content in now;
* changed postgres references appropriately
* added a missing keyword
* changed name
* self review comments
* self review comments
* added section for logical replication
* added section for logical replication
* modified content for monitor page
* added content for monitoring
* rebased to master;
* CDC logical replication overview (#3)

  Co-authored-by: Vaibhav Kushwaha <34186745+vaibhav-yb@users.noreply.github.com>

* advanced-topic (#5)

  Co-authored-by: Vaibhav Kushwaha <34186745+vaibhav-yb@users.noreply.github.com>

* removed references to incremental and ad-hoc snapshots
* replaced index page with an empty one
* addressed review comments
* added getting started section
* added section for get started
* self review comments
* self review comments
* group review comments
* added hstore and domain type docs
* Advance configurations for CDC using logical replication (#2)
* Fix overview section (#7)
* Monitor section (#4)

  Co-authored-by: Vaibhav Kushwaha <34186745+vaibhav-yb@users.noreply.github.com>

* Initial Snapshot content (#6)
* Add getting started (#1)
* Fix for broken note (#9)
* Fix the issue yaml parsing

  Summary: Fixes the issue yaml parsing. We changed the formatting for the yaml list. This diff fixes the usage for the same.
  Test Plan: Prepared alma9 node using ynp. Verified universe creation.
  Reviewers: vbansal, asharma
  Reviewed By: asharma
  Subscribers: yugaware
  Differential Revision: https://phorge.dev.yugabyte.com/D36711

* [PLAT-14534] Add regex match for GCP Instance template

  Summary: Added regex match for gcp instance template. Regex taken from gcp documentation [[https://cloud.google.com/compute/docs/reference/rest/v1/instanceTemplates | here]].
  Test Plan: Tested manually that validation fails with invalid characters.
  Reviewers: #yba-api-review!, svarshney
  Reviewed By: svarshney
  Subscribers: yugaware
  Differential Revision: https://phorge.dev.yugabyte.com/D36543

* update diagram (yugabyte#23245)
* [PLAT-14708] Fix JSON field name in TaskInfo query

  Summary: This was missed when task params were moved out from the details field.
  Test Plan: Trivial - existing tests should succeed.
  Reviewers: vbansal, cwang
  Reviewed By: vbansal
  Subscribers: yugaware
  Differential Revision: https://phorge.dev.yugabyte.com/D36705

* [yugabyte#23173] DocDB: Allow large bytes to be passed to RateLimiter

  Summary: RateLimiter has a debug assert that you cannot `Request` more than `GetSingleBurstBytes`. In release mode we do not perform this check and any call gets stuck forever. This change allows large bytes to be requested on RateLimiter. It does so by breaking requests larger than `GetSingleBurstBytes` into multiple smaller requests. This change is a temporary fix to allow xCluster to operate without any issues. RocksDB RateLimiter has multiple enhancements over the years that would help avoid this and more starvation issues. Ex: facebook/rocksdb@cb2476a. We should consider pulling in those changes.
  Fixes yugabyte#23173
  Jira: DB-12112
  Test Plan: RateLimiterTest.LargeRequests
  Reviewers: slingam
  Reviewed By: slingam
  Subscribers: ybase
  Differential Revision: https://phorge.dev.yugabyte.com/D36703

* [yugabyte#23179] CDCSDK: Support data types with dynamically allotted OIDs in CDC

  Summary: This diff adds support for data types with dynamically allotted OIDs in CDC (for example: hstore, enum array, and so on). Such types contain an invalid pg_type_oid for the corresponding columns in the docdb schema. In the current implementation, in `ybc_pggate`, while decoding the cdc records we look at the `type_map_` to obtain the YBCPgTypeEntity, which is then used for decoding. However, the `type_map_` does not contain any entries for the data types with dynamically allotted OIDs. As a result, this causes a segmentation fault. To prevent such crashes, CDC prevents addition of tables with such columns to the stream. This diff removes the filtering logic and adds the tables to the stream even if they have such a type column. A function pointer will now be passed to `YBCPgGetCDCConsistentChanges`, which takes the attribute number and the table_oid and returns the appropriate type entity by querying the `pg_type` catalog table. While decoding, if a column with an invalid pg_type_oid is encountered, the passed function is invoked and the type entity is obtained for decoding.

  **Upgrade/Rollback safety:** This diff adds a field `optional int32 attr_num` to DatumMessagePB. These changes are protected by the autoflag `ysql_yb_enable_replication_slot_consumption` which already exists but has not yet been released.

  Jira: DB-12118
  Test Plan: Jenkins: urgent. All the existing cdc tests. ./yb_build.sh --java-test 'org.yb.pgsql.TestPgReplicationSlot#replicationConnectionConsumptionAllDataTypesWithYbOutput'
  Reviewers: skumar, stiwary, asrinivasan, dmitry
  Reviewed By: stiwary, dmitry
  Subscribers: steve.varnau, skarri, yql, ybase, ycdcxcluster
  Tags: #jenkins-ready
  Differential Revision: https://phorge.dev.yugabyte.com/D36689

* [PLAT-14710] Do not return apiToken in response to getSessionInfo

  Summary:

  **Context**
  The GET /session_info YBA API returns:
  { "authToken": "…", "apiToken": "….", "apiTokenVersion": "….", "customerUUID": "uuid1", "userUUID": "useruuid1" }
  The apiToken and apiTokenVersion are supposed to be the last generated token that is valid. We had the following sequence of changes to this API.
  https://yugabyte.atlassian.net/browse/PLAT-8028 - Do not store YBA token in YBA. After the above fix, YBA does not store the apiToken anymore, so it cannot return it as part of /session_info. The change for this ticket returned the hashed apiToken instead.
  https://yugabyte.atlassian.net/browse/PLAT-14672 - getSessionInfo should generate and return api key in response. Since the hashed apiToken value is not useful to any client, and it broke YBM create cluster (https://yugabyte.atlassian.net/browse/CLOUDGA-22117), the first change for this ticket returned a new apiToken instead.
  Note that GET /session_info is meant to get customer and user information for the currently authenticated session. This is useful for automation starting off an authenticated session from an existing/cached API token. It is not necessary for the /session_info API to return the authToken and apiToken. The client already has one of authToken or apiToken with which it invoked the /session_info API. In fact, generating a new apiToken whenever /session_info is called will invalidate the previous apiToken, which would not be expected by the client. There is a different API, /api_token, to regenerate the apiToken explicitly.

  **Fix in this change**
  So the right behaviour is for /session_info to stop sending the apiToken in the response. In fact, the current behaviour of generating a new apiToken every time will break a client (for example, the node-agent usage of /session_info here: https://github.com/yugabyte/yugabyte-db/blob/4ca56cfe27d1cae64e0e61a1bde22406e003ec04/managed/node-agent/app/server/handler.go#L19).

  **Client impact of not returning apiToken in response of /session_info**
  This should not impact any normal client that was using /session_info only to get the user uuid and customer uuid. However, there might be a few clients (like YBM for example) that invoked /session_info to get the last generated apiToken from YBA. Unfortunately, this was a misuse of this API. YBA generates the apiToken in response to a few entry point APIs like /register, /api_login and /api_token. The apiToken is long lived. YBA could choose to expire these apiTokens after a fixed amount of (long) time, but for now there is no expiration. The clients are expected to store the apiToken at their end and use the token to reestablish a session with YBA whenever needed. After establishing a new session, clients would call GET /session_info to get the user uuid and customer uuid. This is getting fixed in YBM with https://yugabyte.atlassian.net/browse/CLOUDGA-22117. So this PLAT change should be taken up by YBM only after CLOUDGA-22117 is fixed.

  Test Plan:
  * Manually verified that session_info does not return authToken
  * Shubham verified that node-agent works with this fix. Thanks Shubham!

  Reviewers: svarshney, dkumar, tbedi, #yba-api-review!
  Reviewed By: svarshney
  Subscribers: yugaware
  Differential Revision: https://phorge.dev.yugabyte.com/D36712

* [docs] updates to CVE table status column (yugabyte#23225)
  * updates to status column
  * review comment
  * format
  ---------
  Co-authored-by: Dwight Hodge <ghodge@yugabyte.com>

* [docs] Fix load balance keyword in drivers page (yugabyte#23253)
  [docs] Fix `load_balance` -> `load-balance` in jdbc driver
  [docs] Fix `load_balance` -> `loadBalance` in nodejs driver

* fixed compilation
* fix link, format
* format, links
* links, format
* format
* format
* minor edit
* best practice (#8)
* moved sections
* moved pages
* added key concepts page
* added link to getting started
* Dynamic table doc changes (#11)
* icons
* added box for lead link
* revert ybclient change
* revert accidental change
* revert accidental change
* revert accidental change
* fix link block for getting started page
* format
* minor edit
* links, format
* format
* links
* format
* remove reminder references
* Modified output plugin docs (yugabyte#12)
* Naming edits
* format
* review comments
* diagram
* review comment
* fix links
* format
* format
* link
* review comments
* copy to stable
* link

---------

Co-authored-by: siddharth2411 <43139012+siddharth2411@users.noreply.github.com>
Co-authored-by: Shubham <svarshney@yugabyte.com>
Co-authored-by: asharma-yb <asharma@yugabyte.com>
Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com>
Co-authored-by: Naorem Khogendro Singh <nsingh@yugabyte.com>
Co-authored-by: Hari Krishna Sunder <hari90@users.noreply.github.com>
Co-authored-by: Sumukh-Phalgaonkar <sumukhphalgaonkar@gmail.com>
Co-authored-by: Subramanian Neelakantan <sneelakantan@yugabyte.com>
Co-authored-by: Aishwarya Chakravarthy <ashchakravarthy@gmail.com>
Co-authored-by: Dwight Hodge <ghodge@yugabyte.com>
Co-authored-by: ddorian <dorian.hoxha@gmail.com>
Co-authored-by: Sumukh-Phalgaonkar <61342752+Sumukh-Phalgaonkar@users.noreply.github.com>
1 parent e166cee commit 7e10e9f

46 files changed: +6186 -410 lines changed


.github/vale-styles/Yugabyte/spelling-exceptions.txt

Lines changed: 1 addition & 0 deletions

@@ -445,6 +445,7 @@ Patroni
 performant
 PgBouncer
 pgLoader
+pg_recvlogical
 Phabricator
 phaser
 phasers

docs/content/preview/architecture/docdb-replication/cdc-logical-replication.md

Lines changed: 2 additions & 2 deletions

@@ -17,7 +17,7 @@ type: docs
 
 Change data capture (CDC) in YugabyteDB provides technology to ensure that any changes in data due to operations such as inserts, updates, and deletions are identified, captured, and made available for consumption by applications and other tools.
 
-CDC in YugabyteDB is based on the PostgreSQL Logical Replication model. The fundamental concept here is that of the Replication Slot. A Replication Slot represents a stream of changes that can be replayed to the client in the order they were made on the origin server in a manner that preserves transactional consistency. This is the basis for the support for Transactional CDC in YugabyteDB. Where the strict requirements of Transactional CDC are not present, multiple replication slots can be used to stream changes from unrelated tables in parallel.
+CDC in YugabyteDB is based on the PostgreSQL Logical Replication model. The fundamental concept is that of the Replication Slot. A Replication Slot represents a stream of changes that can be replayed to the client in the order they were made on the origin server in a manner that preserves transactional consistency. This is the basis for the support for Transactional CDC in YugabyteDB. Where the strict requirements of Transactional CDC are not present, multiple replication slots can be used to stream changes from unrelated tables in parallel.
 
 ## Architecture
 
@@ -35,7 +35,7 @@ The following are the main components of the Yugabyte CDC solution:
 
 Logical replication starts by copying a snapshot of the data on the publisher database. After that is done, changes on the publisher are streamed to the server as they occur in near real time.
 
-To setup Logical Replication, an application will first have to create a replication slot. When a replication slot is created, a boundary is established between the snapshot data and the streaming changes. This boundary or `consistent_point` is a consistent state of the source database. It corresponds to a commit time (HybridTime value). Data from transactions with commit time <= commit time corresponding to the `consistent_point` are consumed as part of the initial snapshot. Changes from transactions with commit time greater than the commit time of the `consistent_point` are consumed in the streaming phase in transaction commit time order.
+To set up Logical Replication, an application will first have to create a replication slot. When a replication slot is created, a boundary is established between the snapshot data and the streaming changes. This boundary or `consistent_point` is a consistent state of the source database. It corresponds to a commit time (HybridTime value). Data from transactions with commit time <= commit time corresponding to the `consistent_point` are consumed as part of the initial snapshot. Changes from transactions with commit time greater than the commit time of the `consistent_point` are consumed in the streaming phase in transaction commit time order.
 
 #### Initial Snapshot
 
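For illustration only (not part of this commit), the `consistent_point` boundary described in the changed paragraph can be demonstrated from SQL. This is a minimal sketch that assumes the standard PostgreSQL `pg_create_logical_replication_slot()` function and the `pgoutput` output plugin are available through YSQL; the table and slot names are hypothetical.

```sql
-- Rows committed before the slot exists fall on the snapshot side of the
-- consistent_point; rows committed afterwards are streamed in commit-time order.
CREATE TABLE orders (id INT PRIMARY KEY, total NUMERIC);
INSERT INTO orders VALUES (1, 10.00);   -- served by the initial snapshot

-- Creating the slot establishes the consistent_point.
SELECT * FROM pg_create_logical_replication_slot('orders_slot', 'pgoutput');

INSERT INTO orders VALUES (2, 20.00);   -- delivered during the streaming phase
```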

docs/content/preview/architecture/docdb-replication/change-data-capture.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ Each tablet has its own WAL file. WAL is NOT in-memory, but it is disk persisted.
 
 YugabyteDB normally purges WAL segments after some period of time. This means that the connector does not have the complete history of all changes that have been made to the database. Therefore, when the connector first connects to a particular YugabyteDB database, it starts by performing a consistent snapshot of each of the database schemas.
 
-The Debezium YugabyteDB connector captures row-level changes in the schemas of a YugabyteDB database. The first time it connects to a YugabyteDB cluster, the connector takes a consistent snapshot of all schemas. After that snapshot is complete, the connector continuously captures row-level changes that insert, update, and delete database content, and that were committed to a YugabyteDB database.
+The YugabyteDB Debezium connector captures row-level changes in the schemas of a YugabyteDB database. The first time it connects to a YugabyteDB cluster, the connector takes a consistent snapshot of all schemas. After that snapshot is complete, the connector continuously captures row-level changes that insert, update, and delete database content, and that were committed to a YugabyteDB database.
 
 ![How does CDC work](/images/explore/cdc-overview-work.png)
 

docs/content/preview/explore/change-data-capture/_index.md

Lines changed: 5 additions & 9 deletions

@@ -4,7 +4,7 @@ headerTitle: Change data capture (CDC)
 linkTitle: Change data capture
 description: CDC or Change data capture is a process to capture changes made to data in the database.
 headcontent: Capture changes made to data in the database
-image: /images/section_icons/index/develop.png
+image: fa-light fa-rotate
 cascade:
   earlyAccess: /preview/releases/versioning/#feature-maturity
 menu:
@@ -26,23 +26,19 @@ In databases, change data capture (CDC) is a set of software design patterns use
 
 YugabyteDB supports the following methods for reading change events.
 
-## PostgreSQL Logical Replication Protocol (Recommended)
+## PostgreSQL Replication Protocol
 
-This method uses the PostgreSQL replication protocol, ensuring compatibility with PostgreSQL CDC systems. Logical replication operates through a publish-subscribe model. It replicates data objects and their changes based on the replication identity.
+This method uses the [PostgreSQL replication protocol](using-logical-replication/key-concepts/#replication-protocols), ensuring compatibility with PostgreSQL CDC systems. Logical replication operates through a publish-subscribe model. It replicates data objects and their changes based on the replication identity.
 
 It works as follows:
 
 1. Create Publications in the YugabyteDB cluster similar to PostgreSQL.
 1. Deploy the YugabyteDB Connector in your preferred Kafka Connect environment.
 1. The connector uses replication slots to capture change events and publishes them directly to a Kafka topic.
 
-This is the recommended approach for most CDC applications due to its compatibility with PostgreSQL.
-
-<!--
 {{<lead link="./using-logical-replication/">}}
-To learn about PostgreSQL Logical Replication, see [Using PostgreSQL Logical Replication](./debezium-connector-yugabytedb/).
+To learn about CDC in YugabyteDB using the PostgreSQL Replication Protocol, see [CDC using PostgreSQL Replication Protocol](./using-logical-replication).
 {{</lead>}}
--->
 
 ## YugabyteDB gRPC Replication Protocol
 
@@ -55,5 +51,5 @@ It works as follows:
 1. The connector captures change events using YugabyteDB's native gRPC replication and directly publishes them to a Kafka topic.
 
 {{<lead link="./using-yugabytedb-grpc-replication/">}}
-To learn about gRPC Replication, see [Using YugabyteDB gRPC Replication](./using-yugabytedb-grpc-replication/).
+To learn about CDC in YugabyteDB using the gRPC Replication Protocol, see [CDC using gRPC Replication Protocol](./using-yugabytedb-grpc-replication/).
 {{</lead>}}
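As an illustration of step 1 in the diff above (not in the commit itself), a publication is created with standard PostgreSQL DDL; the publication and table names here are hypothetical.

```sql
-- Publish changes from selected tables only.
CREATE PUBLICATION orders_pub FOR TABLE orders, order_items;

-- Or publish changes from every table in the database.
CREATE PUBLICATION all_tables_pub FOR ALL TABLES;
```
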
Lines changed: 107 additions & 8 deletions

@@ -1,10 +1,9 @@
-<!---
-title: Using logical replication
-headerTitle: Using logical replication
-linkTitle: Using logical replication
-description: CDC or Change data capture is a process to capture changes made to data in the database.
+---
+title: CDC using PostgreSQL replication protocol
+headerTitle: CDC using PostgreSQL replication protocol
+linkTitle: PostgreSQL protocol
+description: CDC using YugabyteDB PostgreSQL replication protocol.
 headcontent: Capture changes made to data in the database
-image: /images/section_icons/index/develop.png
 cascade:
   earlyAccess: /preview/releases/versioning/#feature-maturity
 menu:
@@ -13,5 +12,105 @@ menu:
     parent: explore-change-data-capture
     weight: 240
 type: indexpage
-private: true
--->
+showRightNav: true
+---
+
+## Overview
+
+YugabyteDB CDC captures changes made to data in the database and streams those changes to external processes, applications, or other databases. CDC allows you to track and propagate changes in a YugabyteDB database to downstream consumers based on its Write-Ahead Log (WAL). YugabyteDB CDC captures row-level changes resulting from INSERT, UPDATE, and DELETE operations in the configured database and publishes them to be consumed by downstream applications.
+
+### Highlights
+
+#### Resilience
+
+YugabyteDB CDC with PostgreSQL Logical Replication provides resilience as follows:
+
+1. Following a failure of the application, server, or network, the replication can continue from any of the available server nodes.
+
+2. Replication continues from the transaction immediately after the transaction that was last acknowledged by the application. No transactions are missed by the application.
+
+#### Security
+
+Because YugabyteDB is using the PostgreSQL Logical Replication model, the following applies:
+
+- The CDC user persona will be a PostgreSQL replication client.
+
+- A standard replication connection is used for consumption, and all the server-side configurations for authentication, authorizations, SSL modes, and connection load balancing can be leveraged automatically.
+
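To make the replication-client point above concrete, here is a minimal sketch (not part of the commit) of a dedicated CDC user, assuming standard PostgreSQL role attributes and grants apply in YSQL; the role name and schema are illustrative.

```sql
-- A role used only for CDC consumption; LOGIN and REPLICATION allow it to
-- open a replication connection as a PostgreSQL replication client.
CREATE ROLE cdc_user WITH LOGIN REPLICATION PASSWORD 'change-me';

-- Read access so the initial snapshot of published tables can be served.
GRANT USAGE ON SCHEMA public TO cdc_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO cdc_user;
```
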
+#### Guarantees
+
+CDC in YugabyteDB provides the following guarantees.
+
+| GUARANTEE | DESCRIPTION |
+| :----- | :----- |
+| Per-slot ordered delivery guarantee | Changes from transactions from all the tables that are part of the replication slot's publication are received in the order they were committed. This also implies ordered delivery across all the tablets that are part of the publication's table list. |
+| At least once delivery | Changes from transactions are streamed at least once. Changes from transactions may be streamed again in case of restart after failure. For example, this can happen in the case of a Kafka Connect node failure. If the Kafka Connect node pushes the records to Kafka and crashes before committing the offset, it will again get the same set of records upon restart. |
+| No gaps in change stream | Receiving changes that are part of a transaction with commit time *t* implies that you have already received changes from all transactions with commit time lower than *t*. Thus, receiving any change for a row with commit timestamp *t* implies that you have received all older changes for that row. |
+
+## Key concepts
+
+The YugabyteDB logical replication feature makes use of PostgreSQL concepts like replication slot, publication, replica identity, and so on. Understanding these key concepts is crucial for setting up and managing a logical replication environment effectively.
+
+{{<lead link="./key-concepts">}}
+To know more about the key concepts of YugabyteDB CDC with logical replication, see [Key concepts](./key-concepts).
+{{</lead>}}
+
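As a small example of one of those concepts, replica identity controls how much of the old row is recorded for UPDATE and DELETE events. This sketch is illustrative only and uses standard PostgreSQL DDL and catalog columns; the table name is hypothetical.

```sql
-- Record the full old row image for updates and deletes on this table.
ALTER TABLE orders REPLICA IDENTITY FULL;

-- Verify the setting: 'f' = full, 'd' = default, 'n' = nothing, 'i' = index.
SELECT relname, relreplident FROM pg_class WHERE relname = 'orders';
```
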
+## Getting started
+
+Get started with YugabyteDB logical replication using the YugabyteDB Connector.
+
+{{<lead link="./get-started">}}
+
+To learn how to get started with the connector, see [Get started](./get-started).
+
+{{</lead>}}
+
+## Monitoring
+
+You can monitor the activities and status of the deployed connectors using the HTTP endpoints provided by YugabyteDB.
+
+{{<lead link="./monitor">}}
+To know more about how to monitor your CDC setup, see [Monitor](./monitor/).
+{{</lead>}}
+
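Besides those endpoints, the state of a replication slot can also be checked from SQL. A hedged sketch (not from the commit), assuming the standard `pg_replication_slots` view is exposed; the slot name is illustrative.

```sql
-- Is the slot currently being consumed, and up to which position has the
-- client confirmed receipt of changes?
SELECT slot_name, plugin, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'orders_slot';
```
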
+## YugabyteDB Connector
+
+To capture and stream your changes in YugabyteDB to an external system, you need a connector that can read the changes in YugabyteDB and stream them out. For this, you can use the YugabyteDB Connector, which is based on the Debezium platform. The connector is deployed as a set of Kafka Connect-compatible connectors, so you first need to define a YugabyteDB connector configuration and then start the connector by adding it to Kafka Connect.
+
+{{<lead link="./yugabytedb-connector/">}}
+To understand the various features and configuration of the connector, see [YugabyteDB Connector](./yugabytedb-connector/).
+{{</lead>}}
+
+## Limitations
+
+- LSN comparisons across slots.
+
+  In the case of YugabyteDB, the LSN does not represent the byte offset of a WAL record. Hence, arithmetic on LSNs, and any other usage that relies on this assumption, will not work. Also, currently, comparison of LSN values from messages coming from different replication slots is not supported.
+
+- The following functions are currently unsupported:
+
+  - `pg_current_wal_lsn`
+  - `pg_wal_lsn_diff`
+  - `IDENTIFY SYSTEM`
+  - `txid_current`
+  - `pg_stat_replication`
+
+  Additionally, the functions responsible for pulling changes, instead of the server streaming them, are unsupported as well. They are described in [Replication Functions](https://www.postgresql.org/docs/11/functions-admin.html#FUNCTIONS-REPLICATION) in the PostgreSQL documentation.
+
+- Restriction on DDLs.
+
+  DDL operations should not be performed from the time of replication slot creation until the start of snapshot consumption of the last table.
+
+- There should be a primary key on the table you want to stream the changes from.
+
+- CDC is not supported on a target table for xCluster replication [11829](https://github.com/yugabyte/yugabyte-db/issues/11829).
+
+- Currently, schema evolution is not supported for changes that require table rewrites (for example, ALTER TYPE).
+
+- YCQL tables aren't currently supported. Issue [11320](https://github.com/yugabyte/yugabyte-db/issues/11320).
+
+- Support for point-in-time recovery (PITR) is tracked in issue [10938](https://github.com/yugabyte/yugabyte-db/issues/10938).
+
+- Support for transaction savepoints is tracked in issue [10936](https://github.com/yugabyte/yugabyte-db/issues/10936).
+
+- Support for enabling CDC on Read Replicas is tracked in issue [11116](https://github.com/yugabyte/yugabyte-db/issues/11116).
Lines changed: 29 additions & 5 deletions

@@ -1,12 +1,36 @@
 ---
-title: Advanced Configurations
-headerTitle: Advanced Configurations
-linkTitle: Advanced Configurations
-description: Advanced Configurations for Change Data Capture in YugabyteDB.
+title: Advanced configurations for CDC using Logical Replication
+headerTitle: Advanced configuration
+linkTitle: Advanced configuration
+description: Advanced Configurations for Logical Replication.
+headcontent: Tune your CDC configuration
 menu:
   preview:
     parent: explore-change-data-capture-logical-replication
    identifier: advanced-configurations
    weight: 40
 type: docs
----
+---
+
+## YB-TServer flags
+
+You can use the following [YB-TServer flags](../../../../reference/configuration/yb-tserver/) to tune logical replication deployment configuration:
+
+- [ysql_yb_default_replica_identity](../../../../reference/configuration/yb-tserver/#ysql-yb-default-replica-identity)
+- [cdcsdk_enable_dynamic_table_support](../../../../reference/configuration/yb-tserver/#cdcsdk-enable-dynamic-table-support)
+- [cdcsdk_publication_list_refresh_interval_secs](../../../../reference/configuration/yb-tserver/#cdcsdk-publication-list-refresh-interval-secs)
+- [cdcsdk_max_consistent_records](../../../../reference/configuration/yb-tserver/#cdcsdk-max-consistent-records)
+- [cdcsdk_vwal_getchanges_resp_max_size_bytes](../../../../reference/configuration/yb-tserver/#cdcsdk-vwal-getchanges-resp-max-size-bytes)
+
+## Retention of resources
+
+CDC retains resources (such as WAL segments) that contain information related to the changes involved in the transactions. These resources are typically retained until the consuming client acknowledges the receipt of all the transactions contained in that resource.
+
+Retaining resources has an impact on the system. Clients are expected to consume these transactions within configurable duration limits. Resources will be released if the duration exceeds these configured limits.
+
+Use the following flags to control the duration for which resources are retained:
+
+- [cdc_wal_retention_secs](../../../../reference/configuration/yb-tserver/#cdc-wal-retention-secs)
+- [cdc_intent_retention_ms](../../../../reference/configuration/yb-tserver/#cdc-intent-retention-ms)
+
+Resources are retained for each tablet of a table that is part of a database whose changes are being consumed using a replication slot. This includes those tables that may not be currently part of the publication specification.
