![JUnit](https://img.shields.io/badge/JUnit-5.x-blue)
![JaCoCo](https://img.shields.io/badge/JaCoCo-0.8.x-blue)

## Table of Contents

- [Build Status](#build-status)
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Configuration](#configuration)
  - [Kafka Configuration](#kafka-configuration)
    - [Component Control](#component-control)
  - [Database](#database)
    - [Benefits of Write/Read Replicas](#benefits-of-writeread-replicas)
    - [When to Use Write/Read Replicas](#when-to-use-writeread-replicas)
    - [Configuration](#configuration-1)
    - [How Datasource Routing Works](#how-datasource-routing-works)
  - [Redis](#redis)
  - [Clock Configuration](#clock-configuration)
- [API Documentation](#api-documentation)
- [Development](#development)
  - [Running the Application](#running-the-application)
  - [Testing](#testing)
  - [API Documentation](#api-documentation-1)
- [Getting Started](#getting-started)
  - [External Dependencies](#external-dependencies)
    - [Using Docker Compose](#using-docker-compose)
    - [Service Details](#service-details)
    - [Manual Service Management](#manual-service-management)
  - [Cloning the Repository](#cloning-the-repository)
  - [Building](#building)
- [Contributing](#contributing)
- [GitHub Actions Permissions](#github-actions-permissions)
- [Read Write Datasource Routing](#read-write-datasource-routing)
- [Project Structure](#project-structure)
- [Analysis and Decisions](#analysis-and-decisions)
  - [Architecture Decision Records (ADRs)](#architecture-decision-records-adrs)
  - [Technical Analysis](#technical-analysis)
- [License](#license)

A Spring Boot application for tracking flight events.

## Build Status

## Configuration

The application can be configured through `application.yml`. Key configurations include:

### Kafka Configuration

The application uses Kafka for event streaming and real-time data processing. Here's the complete Kafka configuration:

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: flight-tracker-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: "dev.luismachadoreis.flighttracker.server.ping.application.dto"
    topic:
      flight-positions: flight-positions
      ping-created: ping-created
```
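
With `spring-kafka` on the classpath, consuming from the topics above is typically a matter of a `@KafkaListener` method; a minimal sketch (the listener class and DTO below are illustrative, not the project's actual code):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class FlightPositionListener {

    // Hypothetical DTO standing in for the project's flight-position payload.
    public record FlightPosition(String icao24, double latitude, double longitude) {}

    // Consumes JSON messages from the flight-positions topic, using the
    // group id and JsonDeserializer configured in application.yml above.
    @KafkaListener(topics = "flight-positions", groupId = "flight-tracker-group")
    public void onFlightPosition(FlightPosition position) {
        // Process the event, e.g. persist it or forward it over WebSocket.
        System.out.println("Received position for " + position.icao24());
    }
}
```

Note that `spring.json.trusted.packages` must include the DTO's package, or the `JsonDeserializer` will reject the payload.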

#### Component Control

You can enable or disable various Kafka components and WebSocket notifications:

```yaml
app:
  flight-data:
    subscriber:
      enabled: true # Enable/disable flight data Kafka subscriber
  ping:
    subscriber:
      enabled: true # Enable/disable ping Kafka subscriber
    publisher:
      enabled: true # Enable/disable ping Kafka publisher
  websocket:
    enabled: true # Enable/disable WebSocket notifications
```

These settings allow you to:
- Control Kafka message consumption for flight data
- Control Kafka message consumption for ping events
- Control Kafka message publishing for ping events
- Enable/disable WebSocket real-time notifications
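
Toggles like these are usually wired up with Spring Boot's `@ConditionalOnProperty`; a hedged sketch of the pattern (the bean and class names here are illustrative, not the project's actual source):

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PingPublisherConfig {

    // Illustrative stub standing in for the real publisher component.
    static class PingKafkaPublisher {}

    // The bean is only registered when app.ping.publisher.enabled=true,
    // so a disabled publisher simply never exists in the context.
    @Bean
    @ConditionalOnProperty(name = "app.ping.publisher.enabled", havingValue = "true")
    PingKafkaPublisher pingKafkaPublisher() {
        return new PingKafkaPublisher();
    }
}
```
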

### Database

The application supports a Write/Read replica pattern for database operations. This pattern routes read and write operations to different database instances, which provides several benefits:

#### Benefits of Write/Read Replicas

1. **Improved Read Performance**
   - Read operations are distributed across multiple replicas
   - Reduced load on the primary database
   - Better scalability for read-heavy workloads

2. **High Availability**
   - If the primary database fails, read replicas can continue serving read requests
   - Automatic failover capabilities
   - Reduced downtime impact

3. **Geographic Distribution**
   - Read replicas can be placed closer to users
   - Reduced latency for read operations
   - Better global performance

#### When to Use Write/Read Replicas

Consider implementing Write/Read replicas when:
- Your application has a high read-to-write ratio (e.g., 80% reads, 20% writes)
- You need to scale read operations independently
- You require high availability and disaster recovery
- You have geographically distributed users
- Your application has reporting or analytics features that require heavy read operations

#### Configuration

```yaml
spring:
  datasource:
    writer:
      jdbcUrl: jdbc:postgresql://localhost:5432/flighttracker
      username: flighttracker
      password: flighttracker
    reader:
      jdbcUrl: jdbc:postgresql://localhost:5433/flighttracker
      username: flighttracker
      password: flighttracker
```

#### How Datasource Routing Works

The application uses Spring's `@Transactional` annotation to determine which datasource to use. Here's how it works:

1. **Read Operations**
   ```java
   @Transactional(readOnly = true)
   public List<Flight> getRecentFlights() {
       // This will use the reader datasource
       return flightRepository.findAll();
   }
   ```

2. **Write Operations**
   ```java
   @Transactional
   public void saveFlight(Flight flight) {
       // This will use the writer datasource
       flightRepository.save(flight);
   }
   ```

3. **Mixed Operations**
   ```java
   @Transactional
   public void updateFlightStatus(String flightId, Status newStatus) {
       // This will use the writer datasource for the entire method
       Flight flight = flightRepository.findById(flightId).orElseThrow();
       flight.setStatus(newStatus);
       flightRepository.save(flight);
   }
   ```

The routing is handled by:
- `ReadWriteRoutingAspect`: Intercepts `@Transactional` annotations
- `DbContextHolder`: Maintains the current context in a `ThreadLocal`
- `RoutingDataSource`: Routes the request to the appropriate datasource

**Important Notes:**
- Methods without `@Transactional` will use the writer datasource by default
- Nested transactions inherit the datasource from the outer transaction
- The `readOnly` flag is the key to determining which datasource to use
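
The context holder can be as small as a `ThreadLocal` wrapping an enum. A simplified, framework-free sketch of that idea (the class name mirrors `DbContextHolder` above, but the method names are an illustrative reconstruction, not the project's source):

```java
// Illustrative sketch of a thread-local routing context.
class DbContextHolder {

    enum DbType { WRITER, READER }

    // Each thread tracks its own routing context. Defaulting to WRITER
    // matches the note that non-@Transactional methods use the writer.
    private static final ThreadLocal<DbType> CONTEXT =
            ThreadLocal.withInitial(() -> DbType.WRITER);

    static void setReadOnly() { CONTEXT.set(DbType.READER); }

    static DbType current() { return CONTEXT.get(); }

    // Reset after the transaction completes to avoid leaking state
    // across pooled threads.
    static void clear() { CONTEXT.remove(); }
}
```

An aspect intercepting `@Transactional(readOnly = true)` would call `setReadOnly()` before the method runs and `clear()` afterwards.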

### Redis

The application requires the following external services:

#### Using Docker Compose

The project includes a `docker-compose.yml` file that sets up all required services. To manage the services:

```bash
# Start all services
docker-compose up -d

# Stop all services
docker-compose down

# View logs for all services
docker-compose logs -f

# View logs for a specific service
docker-compose logs -f redis
docker-compose logs -f postgres
docker-compose logs -f kafka

# Restart a specific service
docker-compose restart redis
docker-compose restart postgres
docker-compose restart kafka

# Stop and remove all containers and volumes
docker-compose down -v
```

#### Service Details

- **Redis**
  - Port: 6379
  - No authentication required
  - Data persistence enabled

- **PostgreSQL**
  - Port: 5432
  - Database: flighttracker
  - Username: flighttracker
  - Password: flighttracker
  - Schema: flighttracker

- **Kafka**
  - Port: 9092
  - Auto topic creation enabled
  - Single broker configuration

#### Manual Service Management

If you prefer to manage services individually:

```bash
# Redis
docker run -d --name redis -p 6379:6379 redis:7.4

# PostgreSQL
docker run -d --name postgres \
  -e POSTGRES_USER=flighttracker \
  -e POSTGRES_PASSWORD=flighttracker \
  -e POSTGRES_DB=flighttracker \
  -p 5432:5432 \
  postgres:17

# Kafka
docker run -d --name kafka \
  -p 9092:9092 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_LISTENERS=PLAINTEXT://:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
  apache/kafka:4
```

### Cloning the Repository

```bash
git clone git@github.com:luismr/flight-tracker-event-server-java.git
cd flight-tracker-event-server-java
```

### Building

```bash
mvn clean install
```

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## GitHub Actions Permissions

To enable automatic badge updates and coverage reports, ensure the following GitHub Actions permissions are set:

1. Go to your repository's Settings
2. Navigate to Actions > General
3. Under "Workflow permissions", select:
   - "Read and write permissions"
   - "Allow GitHub Actions to create and approve pull requests"

## Read Write Datasource Routing

The application supports read-write splitting for database operations. This feature is disabled by default but can be enabled through configuration.

### Configuration

```yaml
spring:
  datasource:
    writer:
      jdbcUrl: jdbc:postgresql://localhost:5432/flighttracker
      username: flighttracker
      password: flighttracker
      driverClassName: org.postgresql.Driver
      type: com.zaxxer.hikari.HikariDataSource
    reader:
      jdbcUrl: jdbc:postgresql://localhost:5433/flighttracker
      username: flighttracker
      password: flighttracker
      driverClassName: org.postgresql.Driver
      type: com.zaxxer.hikari.HikariDataSource

app:
  read-write-routing:
    enabled: false # Set to true to enable read-write splitting
```

### Important Notes

1. When enabled, you must configure both write and read data sources
2. The routing is based on Spring's `@Transactional` annotation:
   - Read operations: Use `@Transactional(readOnly = true)`
   - Write operations: Use `@Transactional` or `@Transactional(readOnly = false)`
3. If read-write splitting is enabled but not properly configured, the application will fail to start
4. For development and testing, it's recommended to keep this feature disabled
5. The routing is handled by:
   - `DatasourceConfig`: Configures the data sources and routing
   - `RoutingDataSource`: Routes requests to the appropriate data source
   - `ReadWriteRoutingAspect`: Sets the context based on transaction type
   - `DbContextHolder`: Thread-local holder for the current context
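
A `RoutingDataSource` of this kind is typically built on Spring's `AbstractRoutingDataSource`, which asks a callback for a lookup key on every connection checkout. A hedged sketch of how the pieces above might fit together (the `current()` accessor is an illustrative name, not necessarily the project's API):

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Illustrative sketch: the lookup key comes from the thread-local context,
// so traffic inside @Transactional(readOnly = true) lands on the reader pool
// and everything else falls through to the writer.
public class RoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.current(); // e.g. WRITER or READER
    }
}
```

`DatasourceConfig` would register the writer and reader pools as target data sources on this router and mark the writer as the default.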

## Project Structure

```
src/
├── main/
│   ├── java/
│   │   └── dev/luismachadoreis/flighttracker/server/
│   │       ├── common/              # Common infrastructure and utilities
│   │       ├── flightdata/          # Flight data processing
│   │       └── ping/                # Ping domain and API
│   └── resources/
│       ├── application.yml          # Main configuration
│       └── application-test.yml     # Test configuration
└── test/
    └── java/
        └── dev/luismachadoreis/flighttracker/server/
            ├── common/              # Common infrastructure tests
            ├── flightdata/          # Flight data tests
            └── ping/                # Ping domain and API tests
```

## Analysis and Decisions

### Architecture Decision Records (ADRs)

* [ADR-001: WebSocket Notification Scalability Strategy](docs/adrs/adr-001-websocket-scalability.md) - Decision to implement a Kafka-based event distribution system for WebSocket notifications, with a path to future STOMP migration.

### Technical Analysis

* [WebSocket Notification Scalability](docs/analysis/technical-analysis-websocket-flight-tracker.md) - Analysis of WebSocket notification delivery alternatives, focusing on scalability, latency, and reliability requirements.

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.