
Commit 915d121

Merge pull request #290 from j3-signalroom/244-update-the-subsection-on-the-new-kafka-results-publishing-along-with-example-output
Resolved #244.
2 parents 46f4eb9 + cfec2c9 commit 915d121

File tree

6 files changed: +5 -1 lines changed

Image file (393 KB)
Image file (463 KB)

CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@ The format is base on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 ### Changed
 - Issue [#242](https://github.com/j3-signalroom/kafka_cluster-topics-partition_count_recommender-tool/issues/242)
 - Issue [#243](https://github.com/j3-signalroom/kafka_cluster-topics-partition_count_recommender-tool/issues/243)
+- Issue [#244](https://github.com/j3-signalroom/kafka_cluster-topics-partition_count_recommender-tool/issues/244)
 - Issue [#283](https://github.com/j3-signalroom/kafka_cluster-topics-partition_count_recommender-tool/issues/283)
 
 ### Fixed

CHANGELOG.pdf

417 Bytes
Binary file not shown.

README.md

Lines changed: 4 additions & 1 deletion
@@ -654,7 +654,10 @@ The tool automatically generates two comprehensive CSV reports for each Kafka Cl
 
 #### **1.4.2 Detail Results Produced to Kafka**
 
-<TO BE ADDED IN FUTURE RELEASE>
+If you enable the Kafka Writer by setting the `ENABLE_KAFKA_WRITER` environment variable to `True` and `KAFKA_WRITER_TOPIC_NAME` to a valid Kafka topic (e.g., `_j3.partition_recommender.results`), the tool will send detailed analysis results to that topic within each Kafka cluster being analyzed. This feature supports real-time monitoring and integration with other Kafka-based systems. Below are screenshots of the key and value schemas for the Kafka topic `_j3.partition_recommender.results`:
+
+![__j3.partition_recommender.results-key](.blog/images/__j3.partition_recommender.results-key.png)
+![__j3.partition_recommender.results-value](.blog/images/__j3.partition_recommender.results-value.png)
 
 ## **2.0 How the tool calculates the recommended partition count**
 The tool uses the Kafka `AdminClient` to retrieve all Kafka Topics (based on the `TOPIC_FILTER` specified) stored in your Kafka Cluster, including the original partition count per topic. Then, it iterates through each Kafka Topic, calling the Confluent Cloud Metrics RESTful API to retrieve the topic's average (i.e., the _Consumer Throughput_) and peak consumption in bytes over a rolling seven-day period. Next, it calculates the required throughput by multiplying the peak consumption by the `REQUIRED_CONSUMPTION_THROUGHPUT_FACTOR` (i.e., the _Required Throughput_). Finally, it divides the required throughput by the consumer throughput and rounds the result to the nearest whole number to determine the optimal number of partitions.
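The section 2.0 excerpt at the end of the diff above describes the partition calculation in prose. A minimal sketch of that arithmetic, assuming the inputs are the seven-day average and peak consumption in bytes plus the configured factor (the function and argument names below are illustrative, not the tool's actual identifiers):

```python
def recommended_partition_count(
    avg_consumption_bytes: float,       # Consumer Throughput: seven-day average consumption
    peak_consumption_bytes: float,      # seven-day peak consumption
    required_throughput_factor: float,  # REQUIRED_CONSUMPTION_THROUGHPUT_FACTOR
) -> int:
    # Required Throughput = peak consumption * configured factor.
    required_throughput = peak_consumption_bytes * required_throughput_factor

    # Divide the required throughput by the consumer throughput and round
    # to the nearest whole number (assumes a non-zero average).
    return round(required_throughput / avg_consumption_bytes)


# Example: average 10 MB, peak 40 MB, factor 3.0 -> round(120 / 10) = 12 partitions.
print(recommended_partition_count(10_000_000, 40_000_000, 3.0))
```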

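Subsection 1.4.2, added in the README diff above, is driven by the `ENABLE_KAFKA_WRITER` and `KAFKA_WRITER_TOPIC_NAME` environment variables. The sketch below shows one way such a writer could be wired up with confluent-kafka; it illustrates the pattern only, not the tool's actual implementation, and the `BOOTSTRAP_SERVERS` variable, record fields, and key layout are assumptions.

```python
# Hypothetical sketch of the result-publishing pattern described in subsection 1.4.2.
# The BOOTSTRAP_SERVERS variable, record fields, and key layout are assumptions,
# not the tool's actual configuration or schema.
import json
import os

from confluent_kafka import Producer

enable_writer = os.getenv("ENABLE_KAFKA_WRITER", "False").lower() == "true"
topic_name = os.getenv("KAFKA_WRITER_TOPIC_NAME", "_j3.partition_recommender.results")

if enable_writer:
    producer = Producer({"bootstrap.servers": os.environ["BOOTSTRAP_SERVERS"]})

    # One detail-result record per analyzed topic (fields are illustrative).
    result = {
        "topic_name": "orders",
        "current_partition_count": 6,
        "recommended_partition_count": 12,
    }

    # Key the record by topic name so results for the same topic land in the same partition.
    producer.produce(
        topic_name,
        key=result["topic_name"].encode("utf-8"),
        value=json.dumps(result).encode("utf-8"),
    )
    producer.flush()  # Block until delivery completes.
```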
README.pdf

490 KB
Binary file not shown.

0 commit comments
