diff --git a/README.md b/README.md
index f4a6cf5..4467bcf 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ The accompanying code for this workshop is [on Github](http://github.com/joshlon
 - In this workshop you'll need the latest Java version. Java 8 is the baseline for this workshop.
 - You'll need a newer version of Apache Maven installed (3.1 or later).
 - You'll need an IDE installed. Something like Apache NetBeans, Eclipse, or IntelliJ IDEA.
-- You might want to use the [the Spring Boot CLI](http://docs.spring.io/autorepo/docs/spring-boot/current/reference/html/getting-started-installing-spring-boot.html#getting-started-installing-the-cli) and [the Spring Cloud CLI](https://github.com/spring-cloud/spring-cloud-cli). Neither is required but you could use them to replace a lot of code, later.
+- You might want to use [the Spring Boot CLI](http://docs.spring.io/autorepo/docs/spring-boot/current/reference/html/getting-started-installing-spring-boot.html#getting-started-installing-the-cli) and [the Spring Cloud CLI](https://github.com/spring-cloud/spring-cloud-cli). Neither is required, but you could use them later to replace a lot of code.
 - [Install the Cloud Foundry CLI](https://docs.cloudfoundry.org/devguide/installcf/install-go-cli.html)
 - Go to the [Spring Initializr](http://start.spring.io) and use the latest stable version of Spring Boot. If you are doing this in a workshop setting where internet connectivity is constrained, you'll want to pre-cache the Maven dependencies before starting. Go to the Spring Initializr and choose EVERY checkbox except those related to AWS, Zookeeper, or Consul, then click _Generate_. In the shell, run `mvn -DskipTests=true clean install` to force the resolution of all those dependencies so you're not stalled later. Then, run `mvn clean install` to force the resolution of the test-scoped dependencies. You may discard this project after you've run the commands. This will download whatever artifacts are most current to your local Maven repository (usually, `.m2/repository`).
 - _For multi-day workshops only_: Run each of the `.sh` scripts in the `./bin` directory; run `psql.sh` after you've run `postgresh.sh` and confirm that they all complete and emit no obvious errors.
@@ -109,7 +109,7 @@ In the `reservation-service`, create a `MessageRestController` and annotate it w
 Trigger a refresh of the message using the `/refresh` endpoint.
-**EXTRA CREDIT**: Install RabbitMQ server and connect the microservice to the the Spring Cloud Stream-based event bus and then triggering the refresh using the `/bus/refresh`.
+**EXTRA CREDIT**: Install a RabbitMQ server, connect the microservice to the Spring Cloud Stream-based event bus, and then trigger the refresh using the `/bus/refresh` endpoint.
@@ -232,7 +232,7 @@ _Multi-day workshop_:
 ## 9. Consumer Driven Contract Testing
-> we've built a trivial API with an even more trivial client (thanks to the `RestTemplate` or `Feign`). We've done a good job on day one of our journey. What happens on day two or at any point down the line after the API has changed but the client that uses it has updated accordingly? What happens when the producer of the API changes the API? Does this break the client? It's important that we capture such breaking changes as early and often as possible. In a monolithic application the incompatible updates to the producer of an API would be caught on the first compile. Refactoring would help us prevent these problems, as well. In a distributed systems world, these incompatible changes are harder to catch. They get caught in the integration tests. integration tests are among the slowest of the tests you should have in your system. They're towards the top of the testing pyramid because they're _expensive_ - both in terms of time and computational resources. In order to run the tests we'd need to run both client and service and all supporting infrastructure. This is a worst-case scenario; organizations move to microservices to accelerate feedback (which in turn yields learning and improvement), _not_ to reduce it! What we need is some way to capture breaking changes that keeps both producer and consumer in sync _and_ that doesn't constrain velocity of feedback. Spring Cloud Contract, and consumer driven contracts and consumer driven contract testing, make this work easier. The idea is that contract definitions are used to capture the expected behavior of an API for a particular client. This may include all the quirks of particular clients, and it may incluhde older clients using older APIs. A producer may capture as many contract scenarios as needed. These contracts are enforced bilaterally. On the producer side, the Spring Cloud Contract verifier turns the contract into a Spring MVC Test Framework test that fails if the actual API doesn't work as the contract stipulates. On the consumer, clients can run test against actual HTTP (or messaging-based) APIs that are themselves stubs. These stubs are _stubs_ - that is, there's no real business logic behind them. Just preconfigured responses defined by the contracts. As the stub is defined entirely by the contract, it is trivially cheap to run the stub APIs and exercise clients against them. As the stubs are only ever available _if_ the producer passes all its tests, this ensures that the client is building and testing against a reflection of the latest and actual API, _not_ the understanding of the API implied when the client test was originally written.
+> We've built a trivial API with an even more trivial client (thanks to the `RestTemplate` or `Feign`). We've done a good job on day one of our journey. What happens on day two, or at any point down the line, when the API has changed but the client that uses it hasn't been updated accordingly? What happens when the producer of the API changes the API? Does this break the client? It's important that we capture such breaking changes as early and as often as possible. In a monolithic application the incompatible updates to the producer of an API would be caught on the first compile. Refactoring would help us prevent these problems, as well. In a distributed systems world, these incompatible changes are harder to catch. They get caught in the integration tests. Integration tests are among the slowest of the tests you should have in your system. They're towards the top of the testing pyramid because they're _expensive_ - both in terms of time and computational resources. In order to run the tests we'd need to run both client and service and all supporting infrastructure. This is a worst-case scenario; organizations move to microservices to accelerate feedback (which in turn yields learning and improvement), _not_ to reduce it! What we need is some way to capture breaking changes that keeps both producer and consumer in sync _and_ that doesn't constrain the velocity of feedback. Spring Cloud Contract, together with consumer-driven contracts and consumer-driven contract testing, makes this work easier. The idea is that contract definitions are used to capture the expected behavior of an API for a particular client. This may include all the quirks of particular clients, and it may include older clients using older APIs. A producer may capture as many contract scenarios as needed. These contracts are enforced bilaterally. On the producer side, the Spring Cloud Contract verifier turns the contract into a Spring MVC Test Framework test that fails if the actual API doesn't work as the contract stipulates. On the consumer side, clients can run tests against actual HTTP (or messaging-based) APIs that are themselves stubs. These stubs are _stubs_ - that is, there's no real business logic behind them, just preconfigured responses defined by the contracts. As the stub is defined entirely by the contract, it is trivially cheap to run the stub APIs and exercise clients against them. As the stubs are only ever available _if_ the producer passes all its tests, this ensures that the client is building and testing against a reflection of the latest and actual API, _not_ the understanding of the API implied when the client test was originally written.
 - first we'll define a contract for our producer. Create a `src/test/resources/contracts` directory in the `reservation-service`.
 - then, define a contract to capture a scenario, `src/test/resources/contracts/shouldReturnAllReservations.groovy`. In our service, the scenario is that we want to view the collection of `Reservation` records when we hit the `/reservations` endpoint with an HTTP `GET` call.
@@ -259,7 +259,7 @@ _Multi-day workshop_:
 - When a build succeeds, with `mvn clean install`, the build contributes a `.pom`, a `.jar` and, thanks to the Maven plugin we just configured, an artifact ending in `-stub.jar`. This last artifact contains the definition from our contract that we care about. It is this stub that we'll use with our client.
 - in the client code, create a new test and test the Feign interface by injecting it and asserting that the client returns data that we've stubbed out in the contract.
 - configure the client test with `@AutoConfigureStubRunner(..)`, pointing the client test to the Maven coordinates for the contract.
- - As configured, the client test will spin up a WireMock-based API that's pre-programmed to respond according to the contract. In this case it'll return the two names we specifed in the contract definition when somebody visits `/reservations`. It'll run for the life of the client test, and no longer. It's an actual HTTP API, against which we can make client-side invocations. It does not, however, need all the supporting infrastructure to work. This makes it markedly cheaper, computationally and clock-time wise, to run.
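A contract for the `shouldReturnAllReservations.groovy` scenario described in section 9 above might look roughly like the following sketch in the Spring Cloud Contract Groovy DSL. The field names and the two example records are illustrative assumptions, not the workshop's actual data:

```groovy
import org.springframework.cloud.contract.spec.Contract

Contract.make {
    description "should return all reservations"
    request {
        method 'GET'
        url '/reservations'
    }
    response {
        status 200
        headers {
            contentType('application/json')
        }
        // two placeholder records; the real contract would stub whatever
        // names the client test asserts against
        body([[id: 1, reservationName: 'Jane'],
              [id: 2, reservationName: 'John']])
    }
}
```

On the producer side the verifier generates a test from this definition; on the consumer side the same definition drives the WireMock stub.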
+ - As configured, the client test will spin up a WireMock-based API that's pre-programmed to respond according to the contract. In this case it'll return the two names we specified in the contract definition when somebody visits `/reservations`. It'll run for the life of the client test, and no longer. It's an actual HTTP API, against which we can make client-side invocations. It does not, however, need all the supporting infrastructure to work. This makes it markedly cheaper, both computationally and in clock time, to run.
 - the only fly in the ointment is that, so far, we *still* need the service registry, Eureka. Let's stub that out, as well. In the `reservation-client/src/test/resources` directory, create an `application.properties` property file. In the property file, disable Eureka (`eureka.client.enabled=false`).
 - We'll also need to stub out the `DiscoveryClient` that our code depends on. This is easy enough using `stubrunner.ids-to-service-ids.reservation-service=reservation-service`. Thus configured, we map the service (as it's registered in the registry) to the `-stub.jar` artifact ID.
 - if we run the tests on the client, everything should be green, and quick.
@@ -307,7 +307,7 @@ _Multi-day workshop_:
 `org.springframework.cloud`:`spring-cloud-starter-config`, `org.springframework.cloud`:`spring-cloud-starter-stream-binder-rabbit`, `org.springframework.boot`:`spring-boot-starter-actuator`, `net.logstash.logback`:`logstash-logback-encoder`:`4.2`,
 - extract all the repeated code into an auto-configuration: the `AlwaysSampler` bean, `@EnableDiscoveryClient`, the custom `HealthIndicator`s.
 - **EXTRA CREDIT**: define a Logger that is in turn a bean defined using Spring Framework's support for `InjectionPoint`s. You can qualify this bean with a custom qualifier (`@Logger`).
-- **EXTRA CREDIT**: customize the Spring Initializr. The Spring Initializr is itself an open-source project. You can find the code for [the Spring Initializr on Github](https://github.com/spring-io/initializr). It is itself an auto-configuration. build and install the Spring Initializr and then create a new Spring Boot application. Add the Initializr dependency to your new Spring Boot application and then configure which checkboxes are shown by overriding the configuration in `application.properties` or, more likely, `appication.yml`. Now you have your own Spring Initializr, with your own checkboxes and auto-configurations. Host this on Cloud Foundry (or anywhere, really) and point people in your organization to it for all their new-project needs.
+- **EXTRA CREDIT**: customize the Spring Initializr. The Spring Initializr is itself an open-source project. You can find the code for [the Spring Initializr on Github](https://github.com/spring-io/initializr). It is itself an auto-configuration. Build and install the Spring Initializr and then create a new Spring Boot application. Add the Initializr dependency to your new Spring Boot application and then configure which checkboxes are shown by overriding the configuration in `application.properties` or, more likely, `application.yml`. Now you have your own Spring Initializr, with your own checkboxes and auto-configurations. Host this on Cloud Foundry (or anywhere, really) and point people in your organization to it for all their new-project needs.
 ## 12. Log Aggregation and Analysis with ELK
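
As an aside, the `InjectionPoint`-based logger extra credit above can be sketched as follows. This is a minimal sketch assuming SLF4J as the logging API; the configuration class name is an invention, and the optional `@Logger` qualifier mentioned in the bullet is omitted for brevity:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InjectionPoint;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
public class LoggerAutoConfiguration {

    // Prototype scope means a fresh Logger is resolved for every injection
    // point, named after the class that declares the injected field/parameter.
    @Bean
    @Scope("prototype")
    public Logger logger(InjectionPoint injectionPoint) {
        return LoggerFactory.getLogger(
                injectionPoint.getMember().getDeclaringClass());
    }
}
```

Any component can then simply declare `private final Logger log;` as a constructor parameter and receive a correctly named logger without calling `LoggerFactory` itself.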