
Commit 7c8fe26

Add example with multiple LLM providers

1 parent 14d9bf5 commit 7c8fe26
9 files changed, +251 -5 lines changed
+63

@@ -0,0 +1,63 @@

# Chat Models: Multiple Providers

Text generation with LLMs via multiple providers.

## Description

Spring AI provides a `ChatModel` abstraction for integrating with LLMs via several providers.
This example shows how to use both OpenAI and Mistral AI in the same application.

## Running the application

The application relies on the OpenAI API and the Mistral AI API for providing LLMs.

### When using OpenAI and Mistral AI

First, make sure you have an [OpenAI account](https://platform.openai.com/signup).
Then, define an environment variable with the OpenAI API key associated with your OpenAI account as the value.

```shell
export SPRING_AI_OPENAI_API_KEY=<INSERT KEY HERE>
```

You also need a [Mistral AI account](https://console.mistral.ai).
Then, define an environment variable with the Mistral AI API key associated with your Mistral AI account as the value.

```shell
export SPRING_AI_MISTRALAI_API_KEY=<INSERT KEY HERE>
```

Finally, run the Spring Boot application.

```shell
./gradlew bootRun
```

## Calling the application

You can now call the application, which will use either OpenAI or Mistral AI to generate text based on a default prompt.
This example uses [httpie](https://httpie.io) to send HTTP requests.

Using OpenAI:

```shell
http :8080/chat/openai message=="What is the capital of Italy?"
```

Using Mistral AI:

```shell
http :8080/chat/mistral message=="What is the capital of Italy?"
```

The next request is configured with OpenAI-specific customizations.

```shell
http :8080/chat/openai-options message=="Why is a raven like a writing desk? Give a short answer."
```

The next request is configured with Mistral AI-specific customizations.

```shell
http :8080/chat/mistral-options message=="Why is a raven like a writing desk? Give a short answer."
```
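Before the build and source files below, here is a minimal sketch of the pattern this example relies on: with both starters on the classpath, each provider contributes its own chat model, and the two can be injected side by side by their concrete types. The `ProviderShowcase` class name is made up for illustration; the real code is the `ChatController` further down in this commit.

```java
package com.thomasvitale.ai.spring;

import org.springframework.ai.mistralai.MistralAiChatModel;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.stereotype.Component;

// Sketch only: both provider-specific chat models coexist in one application context.
@Component
class ProviderShowcase {

    private final MistralAiChatModel mistralAiChatModel;
    private final OpenAiChatModel openAiChatModel;

    ProviderShowcase(MistralAiChatModel mistralAiChatModel, OpenAiChatModel openAiChatModel) {
        this.mistralAiChatModel = mistralAiChatModel;
        this.openAiChatModel = openAiChatModel;
    }

    String askBoth(String question) {
        // Same call(String) contract on both models; only the backing provider differs.
        return "OpenAI: " + openAiChatModel.call(question)
                + "\nMistral AI: " + mistralAiChatModel.call(question);
    }
}
```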
@@ -0,0 +1,38 @@

plugins {
    id 'java'
    id 'org.springframework.boot'
    id 'io.spring.dependency-management'
}

group = 'com.thomasvitale'
version = '0.0.1-SNAPSHOT'

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(22)
    }
}

repositories {
    mavenCentral()
    maven { url 'https://repo.spring.io/milestone' }
    maven { url 'https://repo.spring.io/snapshot' }
}

dependencies {
    implementation platform("org.springframework.ai:spring-ai-bom:${springAiVersion}")

    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.ai:spring-ai-mistral-ai-spring-boot-starter'
    implementation 'org.springframework.ai:spring-ai-openai-spring-boot-starter'

    testAndDevelopmentOnly 'org.springframework.boot:spring-boot-devtools'

    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.springframework.boot:spring-boot-testcontainers'
    testImplementation 'org.testcontainers:junit-jupiter'
}

tasks.named('test') {
    useJUnitPlatform()
}
@@ -0,0 +1,51 @@

package com.thomasvitale.ai.spring;

import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.mistralai.MistralAiChatModel;
import org.springframework.ai.mistralai.MistralAiChatOptions;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.OpenAiChatOptions;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    private final MistralAiChatModel mistralAiChatModel;
    private final OpenAiChatModel openAiChatModel;

    ChatController(MistralAiChatModel mistralAiChatModel, OpenAiChatModel openAiChatModel) {
        this.mistralAiChatModel = mistralAiChatModel;
        this.openAiChatModel = openAiChatModel;
    }

    @GetMapping("/chat/mistral")
    String chatMistralAi(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        return mistralAiChatModel.call(message);
    }

    @GetMapping("/chat/openai")
    String chatOpenAi(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        return openAiChatModel.call(message);
    }

    @GetMapping("/chat/mistral-options")
    String chatWithMistralAiOptions(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        return mistralAiChatModel.call(new Prompt(message, MistralAiChatOptions.builder()
                        .withModel("open-mixtral-8x7b")
                        .withTemperature(1.0f)
                        .build()))
                .getResult().getOutput().getContent();
    }

    @GetMapping("/chat/openai-options")
    String chatWithOpenAiOptions(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        return openAiChatModel.call(new Prompt(message, OpenAiChatOptions.builder()
                        .withModel("gpt-4-turbo")
                        .withTemperature(1.0f)
                        .build()))
                .getResult().getOutput().getContent();
    }

}
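Not part of this commit, but a natural variation worth noting after `ChatController`: because both chat models expose the same `call(String)` operation, a single endpoint could select the provider at request time. A hedged sketch; the `/chat/any` path, the `provider` parameter, and the `RoutingChatController` class are hypothetical.

```java
package com.thomasvitale.ai.spring;

import org.springframework.ai.mistralai.MistralAiChatModel;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical variation on ChatController: pick the provider per request.
@RestController
class RoutingChatController {

    private final MistralAiChatModel mistralAiChatModel;
    private final OpenAiChatModel openAiChatModel;

    RoutingChatController(MistralAiChatModel mistralAiChatModel, OpenAiChatModel openAiChatModel) {
        this.mistralAiChatModel = mistralAiChatModel;
        this.openAiChatModel = openAiChatModel;
    }

    @GetMapping("/chat/any")
    String chat(@RequestParam(defaultValue = "openai") String provider,
                @RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        // Route to whichever provider the caller asked for; default to OpenAI.
        return "mistral".equalsIgnoreCase(provider)
                ? mistralAiChatModel.call(message)
                : openAiChatModel.call(message);
    }
}
```

Keeping the providers on separate paths, as the commit does, keeps provider-specific options explicit; the routing endpoint is only one possible design choice.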
@@ -0,0 +1,13 @@

package com.thomasvitale.ai.spring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ChatModelsMultipleProvidersApplication {

    public static void main(String[] args) {
        SpringApplication.run(ChatModelsMultipleProvidersApplication.class, args);
    }

}
@@ -0,0 +1,14 @@

spring:
  ai:
    mistralai:
      api-key: ${MISTRALAI_API_KEY}
      chat:
        options:
          model: open-mistral-7b
          temperature: 0.7
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        options:
          model: gpt-3.5-turbo
          temperature: 0.7
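These properties set the default model and temperature for each provider; the `*-options` endpoints in `ChatController` override them per request. A condensed sketch of that relationship, reusing the builder calls from the controller (the `OptionsOverrideSketch` class is illustrative only):

```java
package com.thomasvitale.ai.spring;

import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.OpenAiChatOptions;
import org.springframework.stereotype.Component;

// Sketch: application.yml supplies the defaults (gpt-3.5-turbo, temperature 0.7 for OpenAI);
// options passed alongside the Prompt override them for a single call.
@Component
class OptionsOverrideSketch {

    private final OpenAiChatModel openAiChatModel;

    OptionsOverrideSketch(OpenAiChatModel openAiChatModel) {
        this.openAiChatModel = openAiChatModel;
    }

    String askWithDefaults(String message) {
        // Uses the model and temperature configured in application.yml.
        return openAiChatModel.call(message);
    }

    String askWithOverrides(String message) {
        // Overrides the defaults for this request only, mirroring /chat/openai-options.
        return openAiChatModel.call(new Prompt(message, OpenAiChatOptions.builder()
                        .withModel("gpt-4-turbo")
                        .withTemperature(1.0f)
                        .build()))
                .getResult().getOutput().getContent();
    }
}
```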
@@ -0,0 +1,51 @@

package com.thomasvitale.ai.spring;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledIfEnvironmentVariable;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.reactive.AutoConfigureWebTestClient;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.reactive.server.WebTestClient;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureWebTestClient(timeout = "60s")
class ChatModelsMultipleProvidersApplicationTests {

    @Autowired
    WebTestClient webTestClient;

    @Test
    @EnabledIfEnvironmentVariable(named = "SPRING_AI_MISTRALAI_API_KEY", matches = ".*")
    void chatMistralAI() {
        webTestClient
                .get()
                .uri(uriBuilder -> uriBuilder
                        .path("/chat/mistral")
                        .queryParam("message", "What is the capital of Italy?")
                        .build())
                .exchange()
                .expectStatus().isOk()
                .expectBody(String.class).value(result -> {
                    assertThat(result).containsIgnoringCase("Rome");
                });
    }

    @Test
    @EnabledIfEnvironmentVariable(named = "SPRING_AI_OPENAI_API_KEY", matches = ".*")
    void chatOpenAI() {
        webTestClient
                .get()
                .uri(uriBuilder -> uriBuilder
                        .path("/chat/openai")
                        .queryParam("message", "What is the capital of Italy?")
                        .build())
                .exchange()
                .expectStatus().isOk()
                .expectBody(String.class).value(result -> {
                    assertThat(result).containsIgnoringCase("Rome");
                });
    }

}
@@ -0,0 +1,13 @@

package com.thomasvitale.ai.spring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.test.context.TestConfiguration;

@TestConfiguration(proxyBeanMethods = false)
public class TestChatModelsMultipleProvidersApplication {

    public static void main(String[] args) {
        SpringApplication.from(ChatModelsMultipleProvidersApplication::main).with(TestChatModelsMultipleProvidersApplication.class).run(args);
    }

}

Diff for: README.md

+7-5
@@ -24,11 +24,13 @@ Samples showing how to build Java applications powered by Generative AI and LLMs
 
 ### 1. Chat Completion Models
 
-| Project | Description |
-|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
-| [chat-models-mistral-ai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-mistral-ai) | Text generation with LLMs via Mistral AI. |
-| [chat-models-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-ollama) | Text generation with LLMs via Ollama. |
-| [chat-models-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-openai) | Text generation with LLMs via OpenAI. |
+| Project | Description |
+|---------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------|
+| [chat-models-mistral-ai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-mistral-ai) | Text generation with LLMs via Mistral AI. |
+| [chat-models-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-ollama) | Text generation with LLMs via Ollama. |
+| [chat-models-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-openai) | Text generation with LLMs via OpenAI. |
+| [chat-models-multiple-providers](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-multiple-providers) | Text generation with LLMs via multiple providers. |
+
 
 ### 2. Prompts, Templates and Multimodality

Diff for: settings.gradle

+1
@@ -11,6 +11,7 @@ include '00-use-cases:structured-data-extraction'
 include '00-use-cases:text-classification'
 
 include '01-chat-models:chat-models-mistral-ai'
+include '01-chat-models:chat-models-multiple-providers'
 include '01-chat-models:chat-models-ollama'
 include '01-chat-models:chat-models-openai'