
Commit b1683a0

Merge branch 'release/4.0'
2 parents: add5fc3 + 09c0e9b

File tree: 55 files changed (+2032 −867 lines)


README.md

+79 −12
@@ -39,7 +39,7 @@ The following topics are going to be covered in this 1st stage (other stages top
- Adding automated tests of microservices in isolation.
- Adding semi-automated tests to a microservice landscape.

-### System Boundary - μServices Landscape (Release 3)
+### System Boundary - μServices Landscape (Release 4)

![System Boundary](docs/stage1/app_ms_landscape.png)

@@ -69,7 +69,7 @@ I recommend that you work with your Java code using an IDE that supports the dev

All that you want to do is just fire up your IDE **->** open or import the parent folder `springy-store-microservices` and everything will be ready for you.

-## Playing With Spring Store Project
+## Playing With Springy Store Project

### Cloning It

@@ -156,29 +156,94 @@ All build commands and test suite for each microservice should run successfully,
```

### Running Them All
-Now it's the time to run all of them, and it's very simple just run the following *<u>docker compose</u>* commands:
+#### Using RabbitMQ without partitions
+Now it's time to run all of our reactive microservices. It is very simple: just run the following `docker-compose` command:

```bash
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ docker-compose -p ssm up -d
```

-All the **services** and **databases** will run in parallel in detached mode (option `-d`), and their output will be printed to the console as the following:
+All the **services**, **databases**, and the **messaging service** will run in parallel in detached mode (option `-d`), and the command output will be printed to the console as follows:

```bash
Creating network "ssm_default" with the default driver
Creating ssm_mysql_1 ... done
Creating ssm_mongodb_1 ... done
+Creating ssm_rabbitmq_1 ... done
Creating ssm_store_1 ... done
Creating ssm_review_1 ... done
Creating ssm_product_1 ... done
Creating ssm_recommendation_1 ... done
```
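
Because the containers are started in detached mode, their application logs are not streamed to your terminal. If you want to tail one service while you play with the APIs, one optional convenience command is shown below; it assumes the default Compose file and the `store` service name implied by the `ssm_store_1` container above:

```bash
# Optional: follow the logs of a single service (here: store) in the running "ssm" project.
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ docker-compose -p ssm logs -f store
```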

### Access Store APIs
-You can manually test `Store Service` APIs through out its **Swagger** interface at the following
+You can manually test `Store Service` APIs through its **Swagger** interface at the following
URL [http://localhost:8080/swagger-ui.html](http://localhost:8080/swagger-ui.html).

#### Access RabbitMQ
In your browser, point to [http://localhost:5672/](http://localhost:5672/) and log in with `username: guest` and `password: guest`; there you can see all **topics**, **DLQs**, **partitions**, and message payloads.

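If you prefer the command line over the management UI, you can also peek at the broker from inside its container. This is just a sketch, assuming the Compose service is named `rabbitmq` (as the `ssm_rabbitmq_1` container name suggests):

```bash
# Sketch: list RabbitMQ queues and their message counts from inside the broker container.
# Assumes the service is called "rabbitmq" in docker-compose.yml.
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ docker-compose -p ssm exec rabbitmq rabbitmqctl list_queues name messages
```
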
1. For running two instances of each service and using _RabbitMQ with two partitions per topic_, use the following `docker-compose` command:
   ```bash
   mohamed.taman@DTLNV8 ~/springy-store-microservices
   λ docker-compose -p ssm -f docker-compose-partitions.yml up -d
   ```
1. To use _Kafka and Zookeeper with two partitions per topic_, run the following command:
   ```bash
   mohamed.taman@DTLNV8 ~/springy-store-microservices
   λ docker-compose -p ssm -f docker-compose-kafka.yml up -d
   ```

#### Check All Services Health
From the Store front service we can check the health of all the core services, once you have all the microservices up and running with Docker Compose:
```bash
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ curl http://localhost:8080/actuator/health -s | jq .
```
This will result in the following response:
```json
{
  "status":"UP",
  "components":{
    "Core System Microservices":{
      "status":"UP",
      "components":{
        "Product Service":{
          "status":"UP"
        },
        "Recommendation Service":{
          "status":"UP"
        },
        "Review Service":{
          "status":"UP"
        }
      }
    },
    "diskSpace":{
      "status":"UP",
      "details":{
        "total":255382777856,
        "free":86618931200,
        "threshold":10485760,
        "exists":true
      }
    },
    "ping":{
      "status":"UP"
    }
  }
}
```
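
If you are only interested in one entry of that payload, you can filter it with the same `jq` tool used above; for example, this optional one-liner pulls out just the Product Service status (it assumes the component names shown in the response):

```bash
# Optional: extract a single component's status from the aggregated health response.
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ curl -s http://localhost:8080/actuator/health | jq -r '.components["Core System Microservices"].components["Product Service"].status'
```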

### Testing Them All
Now it's time to test all the application functionality as one piece. To do so, just run the following automation test script:
@@ -188,7 +253,7 @@ mohamed.taman@DTLNV8 ~/springy-store-microservices
λ ./test-em-all.sh
```

-The result should be something like this:
+The result will look like this:

```bash
Starting [Springy Store] full functionality testing....
@@ -227,10 +292,10 @@ Finally, to close the story, we will need to shut down Microservices manually se

```bash
mohamed.taman@DTLNV8 ~/springy-store-microservices
-λ docker-compose -p ssm down
+λ docker-compose -p ssm down --remove-orphans
```

-And the output should be as the following:
+And you should see output like the following:

```bash
Stopping ssm_recommendation_1 ... done
@@ -239,12 +304,14 @@ Stopping ssm_review_1 ... done
Stopping ssm_mongodb_1 ... done
Stopping ssm_store_1 ... done
Stopping ssm_mysql_1 ... done
+Stopping ssm_rabbitmq_1 ... done
Removing ssm_recommendation_1 ... done
Removing ssm_product_1 ... done
Removing ssm_review_1 ... done
Removing ssm_mongodb_1 ... done
Removing ssm_store_1 ... done
Removing ssm_mysql_1 ... done
+Removing ssm_rabbitmq_1 ... done
Removing network ssm_default
```
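
As an optional sanity check after the shutdown, you can list the containers still attached to the `ssm` project; the list should come back empty:

```bash
# Optional: verify that no "ssm" containers are left running after docker-compose down.
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ docker-compose -p ssm ps
```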

docker-compose-kafka.yml

+169
@@ -0,0 +1,169 @@
version: '3.7' ## Latest version works with Docker Engine release 18.06.0+

services:
  ## Start - Product service definition
  ### Instance 1
  product:
    build: product-service
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_PARTITIONED=true
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCECOUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCEINDEX=0
    depends_on:
      - mongodb
      - kafka
  ### Instance 2
  product-i1:
    build: product-service
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_PARTITIONED=true
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCECOUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCEINDEX=1
    depends_on:
      - mongodb
      - kafka
  ## End - Product service definition

  ## Start - Recommendation service definition
  ### Instance 1
  recommendation:
    build: recommendation-service
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_PARTITIONED=true
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCECOUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCEINDEX=0
    depends_on:
      - mongodb
      - kafka
  ### Instance 2
  recommendation-i1:
    build: recommendation-service
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_PARTITIONED=true
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCECOUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCEINDEX=1
    depends_on:
      - mongodb
      - kafka
  ## End - Recommendation service definition

  ## Start - Review service definition
  ### Instance 1
  review:
    build: review-service
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_PARTITIONED=true
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCECOUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCEINDEX=0
    depends_on:
      - mysql
      - kafka
    restart: on-failure
  ### Instance 2
  review-i1:
    build: review-service
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_PARTITIONED=true
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCECOUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_INPUT_CONSUMER_INSTANCEINDEX=1
    depends_on:
      - mysql
      - kafka
    restart: on-failure
  ## End - Review service definition

  ## Start - Store service definition
  store:
    build: store-service
    ports:
      - "8080:8080"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - MANAGEMENT_HEALTH_RABBIT_ENABLED=false
      - SPRING_CLOUD_STREAM_DEFAULTBINDER=kafka
      - SPRING_CLOUD_STREAM_BINDINGS_OUTPUT-PRODUCTS_PRODUCER_PARTITION-KEY-EXPRESSION=payload.key
      - SPRING_CLOUD_STREAM_BINDINGS_OUTPUT-PRODUCTS_PRODUCER_PARTITION-COUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_OUTPUT-RECOMMENDATIONS_PRODUCER_PARTITION-KEY-EXPRESSION=payload.key
      - SPRING_CLOUD_STREAM_BINDINGS_OUTPUT-RECOMMENDATIONS_PRODUCER_PARTITION-COUNT=2
      - SPRING_CLOUD_STREAM_BINDINGS_OUTPUT-REVIEWS_PRODUCER_PARTITION-KEY-EXPRESSION=payload.key
      - SPRING_CLOUD_STREAM_BINDINGS_OUTPUT-REVIEWS_PRODUCER_PARTITION-COUNT=2
    depends_on:
      - kafka
  ## End - Store service definition

  ## Start - mongodb database definition
  ### $ mongo
  mongodb:
    image: mongo:4.2.5-bionic
    ports:
      - "27017-27019:27017-27019"
    healthcheck:
      test: "mongo --eval 'db.stats().ok'"
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 40s
    restart: on-failure
  ## End - mongodb database definition

  ## Start - MySql database definition
  ### $ mysql -uroot -h127.0.0.1 -p
  mysql:
    image: mysql:8.0.19
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
      - MYSQL_DATABASE=review-db
      - MYSQL_USER=user
      - MYSQL_PASSWORD=pwd
      - MYSQL_ROOT_HOST=%
    healthcheck:
      test: "/usr/bin/mysql --user=user --password=pwd --execute \"SHOW DATABASES;\""
      interval: 10s
      timeout: 5s
      retries: 10
    restart: on-failure
  ## End - MySql database definition

  ## Start - Kafka Messaging service
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    depends_on:
      - zookeeper
    restart: on-failure
  ## End - Kafka Messaging service

  ## Start - Zookeeper (Kafka) cluster management service
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=zookeeper
    restart: on-failure
  ## End - Zookeeper cluster management service
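
Once this Compose file is up, one optional way to confirm that the topics really were created with two partitions each is to ask the broker directly from inside the `kafka` container. Treat this as a sketch: `kafka-topics.sh` ships with the `wurstmeister/kafka` image, but the exact flags depend on the Kafka version (older versions use `--zookeeper zookeeper:2181` instead of `--bootstrap-server`):

```bash
# Sketch: describe all topics, including their partition counts, on the Kafka broker.
mohamed.taman@DTLNV8 ~/springy-store-microservices
λ docker-compose -p ssm -f docker-compose-kafka.yml exec kafka kafka-topics.sh --describe --bootstrap-server localhost:9092
```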
