In this project, we will implement two Spring Boot Java Web applications called `streamer-data-jpa` and `streamer-data-r2dbc`. Both will fetch 1 million customer records from MySQL and stream them to Kafka. The main goal is to compare the applications' performance and resource utilization.
On ivangfr.github.io, I have compiled my Proof-of-Concepts (PoCs) and articles. You can easily search for the technology you are interested in by using the filter. Who knows, perhaps I have already implemented a PoC or written an article about what you are looking for.
- **streamer-data-jpa**

  Spring Boot Web Java application that connects to MySQL using Spring Data JPA and to Kafka. It provides some endpoints such as:
  - `PATCH api/customers/stream-naive[?limit=x]`: to stream customer records using a naive implementation with Spring Data JPA.
  - `PATCH api/customers/stream[?limit=x]`: to stream customer records using an improved implementation with Java 8 Streams and Spring Data JPA, as explained in this article (see the sketch after this list).
  - `PATCH api/customers/load?amount=x`: to create a specific number of random customer records.
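
  To make the difference between the naive and the improved endpoint concrete, here is a minimal, hypothetical Java sketch of both approaches. It assumes Spring Boot 3 (`jakarta.persistence`), a `Customer` entity, a `customers` topic name, and plain `String` serialization; none of these names are taken from the actual project code.

  ```java
  import jakarta.persistence.Entity;
  import jakarta.persistence.Id;
  import org.springframework.data.jpa.repository.JpaRepository;
  import org.springframework.data.jpa.repository.Query;
  import org.springframework.kafka.core.KafkaTemplate;
  import org.springframework.stereotype.Service;
  import org.springframework.transaction.annotation.Transactional;

  import java.util.stream.Stream;

  @Entity
  class Customer {
      @Id
      Long id;
      String name;
  }

  interface CustomerRepository extends JpaRepository<Customer, Long> {

      // Returning a Stream lets Spring Data JPA keep a database cursor open and
      // hand records over lazily instead of materializing the whole result set.
      @Query("select c from Customer c")
      Stream<Customer> streamAll();
  }

  @Service
  class CustomerStreamer {

      private final CustomerRepository repository;
      private final KafkaTemplate<String, String> kafkaTemplate;

      CustomerStreamer(CustomerRepository repository, KafkaTemplate<String, String> kafkaTemplate) {
          this.repository = repository;
          this.kafkaTemplate = kafkaTemplate;
      }

      // Naive approach: findAll() loads all records into memory as a List
      // before the first message is sent to Kafka.
      public void streamNaive() {
          repository.findAll()
                  .forEach(c -> kafkaTemplate.send("customers", String.valueOf(c.id), c.name));
      }

      // Improved approach: the Stream is consumed inside a read-only transaction and
      // closed when done; records are fetched from MySQL and forwarded to Kafka incrementally.
      @Transactional(readOnly = true)
      public void stream() {
          try (Stream<Customer> customers = repository.streamAll()) {
              customers.forEach(c -> kafkaTemplate.send("customers", String.valueOf(c.id), c.name));
          }
      }
  }
  ```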
 
- **streamer-data-r2dbc**

  Spring Boot Web Java application that connects to MySQL using Spring Data R2DBC and to Kafka. It provides some endpoints such as:
  - `PATCH api/customers/stream[?limit=x]`: to stream customer records (see the sketch after this list).
  - `PATCH api/customers/load?amount=x`: to create a specific number of random customer records.
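
  For comparison, here is a minimal, hypothetical sketch of the reactive pipeline. It assumes a `ReactiveCrudRepository`, a `customers` topic, and a plain `KafkaTemplate` for publishing (the real project may use a reactive Kafka client instead); class and topic names are illustrative, not the project's actual code.

  ```java
  import org.springframework.data.annotation.Id;
  import org.springframework.data.repository.reactive.ReactiveCrudRepository;
  import org.springframework.kafka.core.KafkaTemplate;
  import org.springframework.stereotype.Service;
  import reactor.core.publisher.Flux;
  import reactor.core.publisher.Mono;

  // Immutable record mapped by Spring Data R2DBC (table/column mapping left to defaults).
  record Customer(@Id Long id, String name) {
  }

  interface CustomerRepository extends ReactiveCrudRepository<Customer, Long> {
  }

  @Service
  class CustomerStreamer {

      private final CustomerRepository repository;
      private final KafkaTemplate<String, String> kafkaTemplate;

      CustomerStreamer(CustomerRepository repository, KafkaTemplate<String, String> kafkaTemplate) {
          this.repository = repository;
          this.kafkaTemplate = kafkaTemplate;
      }

      // findAll() returns a Flux, so rows flow from MySQL through the R2DBC driver
      // with back-pressure and are published to Kafka as they arrive, never being
      // collected into a single in-memory List.
      public Mono<Long> stream() {
          Flux<Customer> customers = repository.findAll();
          return customers
                  .doOnNext(c -> kafkaTemplate.send("customers", String.valueOf(c.id()), c.name()))
                  .count();
      }
  }
  ```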
 
- Open a terminal and, inside the `spring-data-jpa-r2dbc-mysql-stream-million-records` root folder, run: `docker compose up -d`

- Wait for the Docker containers to be up and running. To check it, run: `docker ps -a`

- Once MySQL, Kafka, and Zookeeper are up and running, run the following scripts:

  - To create two Kafka topics (for reference, a Java sketch of the equivalent topic creation follows this list): `./init-kafka-topics.sh`

  - To initialize the MySQL database: `./init-mysql-db.sh 1M`

    Note: You can provide the following load amounts: 0, 100k, 200k, 500k, or 1M.
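
  The script uses the Kafka tooling shipped with the containers; purely as a reference for what the topic-creation step amounts to, here is a hedged Java sketch using Kafka's `AdminClient`. The topic names, partition count, and replication factor are placeholders, not the values from `init-kafka-topics.sh`.

  ```java
  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.clients.admin.NewTopic;

  import java.util.List;
  import java.util.Properties;

  public class InitKafkaTopics {

      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          // Kafka is exposed on localhost:29092 by default (see the KAFKA_PORT default below);
          // adjust if your broker runs elsewhere.
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");

          try (AdminClient admin = AdminClient.create(props)) {
              // Placeholder topic names and sizing; the actual values come from init-kafka-topics.sh.
              admin.createTopics(List.of(
                      new NewTopic("streamer-data-jpa.customers", 1, (short) 1),
                      new NewTopic("streamer-data-r2dbc.customers", 1, (short) 1)
              )).all().get();
          }
      }
  }
  ```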
 
- Inside the `spring-data-jpa-r2dbc-mysql-stream-million-records` root folder, run the following Maven commands in different terminals:

  - **streamer-data-jpa**: `./mvnw clean spring-boot:run --projects streamer-data-jpa`

  - **streamer-data-r2dbc**: `./mvnw clean spring-boot:run --projects streamer-data-r2dbc`
- **Build Docker images**

  - In a terminal, make sure you are in the `spring-data-jpa-r2dbc-mysql-stream-million-records` root folder.
  - Run the following script to build the Docker images: `./build-docker-images.sh`
 
- **Environment variables**

  - **streamer-data-jpa**

    | Environment Variable | Description |
    | -------------------- | ----------- |
    | `MYSQL_HOST` | Specify host of the MySQL database to use (default `localhost`) |
    | `MYSQL_PORT` | Specify port of the MySQL database to use (default `3306`) |
    | `KAFKA_HOST` | Specify host of the Kafka message broker to use (default `localhost`) |
    | `KAFKA_PORT` | Specify port of the Kafka message broker to use (default `29092`) |

  - **streamer-data-r2dbc**

    | Environment Variable | Description |
    | -------------------- | ----------- |
    | `MYSQL_HOST` | Specify host of the MySQL database to use (default `localhost`) |
    | `MYSQL_PORT` | Specify port of the MySQL database to use (default `3306`) |
    | `KAFKA_HOST` | Specify host of the Kafka message broker to use (default `localhost`) |
    | `KAFKA_PORT` | Specify port of the Kafka message broker to use (default `29092`) |
 
- **Run Docker containers**

  Run the following `docker run` commands in different terminals:

  - **streamer-data-jpa**

    `docker run --rm --name streamer-data-jpa -p 9080:9080 -e MYSQL_HOST=mysql -e KAFKA_HOST=kafka -e KAFKA_PORT=9092 --network spring-data-jpa-r2dbc-mysql-stream-million-records_default ivanfranchin/streamer-data-jpa:1.0.0`

  - **streamer-data-r2dbc**

    `docker run --rm --name streamer-data-r2dbc -p 9081:9081 -e MYSQL_HOST=mysql -e KAFKA_HOST=kafka -e KAFKA_PORT=9092 --network spring-data-jpa-r2dbc-mysql-stream-million-records_default ivanfranchin/streamer-data-r2dbc:1.0.0`
 
- Previously, during the Start Environment step, we initialized MySQL with 1 million customer records.

- **Running applications with Maven**

  We will use the JConsole tool. To run it, open a new terminal and run: `jconsole`

- **Running applications as Docker containers**

  We will use the cAdvisor tool. In a browser, access:

  - To explore the running containers: http://localhost:8080/docker/
  - To go directly to a specific container:
    - streamer-data-jpa: http://localhost:8080/docker/streamer-data-jpa
    - streamer-data-r2dbc: http://localhost:8080/docker/streamer-data-r2dbc
 
 
In another terminal, call the following curl commands to trigger the streaming of customer records from MySQL to Kafka. At the end of each curl command, the total processing time (in seconds) is displayed.

We can monitor the number of messages, and the messages themselves, being streamed using Kafdrop (Kafka Web UI) at http://localhost:9000
- **streamer-data-jpa**

  Naive implementation: `curl -w "Response Time: %{time_total}s" -s -X PATCH localhost:9080/api/customers/stream-naive`

  Better implementation: `curl -w "Response Time: %{time_total}s" -s -X PATCH localhost:9080/api/customers/stream`

- **streamer-data-r2dbc**

  `curl -w "Response Time: %{time_total}s" -s -X PATCH localhost:9081/api/customers/stream`
A sample simulation running the applications with Maven and using the JConsole tool:
- **streamer-data-jpa**

  Naive implementation: `Response Time: 414.486126s`

  Better implementation: `Response Time: 453.692525s`

- **streamer-data-r2dbc**

  `Response Time: 476.951654s`
- **Kafdrop**

  Kafdrop can be accessed at http://localhost:9001 (for a command-line alternative, see the consumer sketch after this list).

- **MySQL monitor**

  To check data in the `customerdb` database:
  `docker exec -it -e MYSQL_PWD=secret mysql mysql -uroot --database customerdb`
  `SELECT count(*) FROM customer;`

  To create a dump from the `customer` table in the `customerdb` database, make sure you are in the `spring-data-jpa-r2dbc-mysql-stream-million-records` root folder and run: `./dump-mysql-db.sh`
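
As a command-line alternative to Kafdrop for a quick look at what was published, here is a hedged sketch of a minimal Kafka consumer. The topic name is a placeholder (check Kafdrop or `init-kafka-topics.sh` for the real ones), and `String` deserialization is an assumption about the message format.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PeekCustomerTopic {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        // Fresh group id so we always read from the beginning.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "peek-" + System.currentTimeMillis());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Topic name is a placeholder, not necessarily the one created by init-kafka-topics.sh.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("streamer-data-jpa.customers"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r ->
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value()));
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}
```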
- To stop `streamer-data-jpa` and `streamer-data-r2dbc`, go to the terminals where they are running and press `Ctrl+C`.
- To stop and remove Docker Compose containers, network, and volumes, go to a terminal and, inside the `spring-data-jpa-r2dbc-mysql-stream-million-records` root folder, run: `docker compose down -v`
- To remove all Docker images created by this project, go to a terminal and, inside the `spring-data-jpa-r2dbc-mysql-stream-million-records` root folder, run: `./remove-docker-images.sh`


