
Commit 05187cf

replace gpt-3.5-turbo with gpt-4o-mini (#351)
1 parent 67ed9d1 commit 05187cf

5 files changed (+20, -20 lines)

tutorials/27_First_RAG_Pipeline.ipynb (+1, -1)
@@ -536,7 +536,7 @@
 "\n",
 "if \"OPENAI_API_KEY\" not in os.environ:\n",
 "    os.environ[\"OPENAI_API_KEY\"] = getpass(\"Enter OpenAI API key:\")\n",
-"generator = OpenAIGenerator(model=\"gpt-3.5-turbo\")"
+"generator = OpenAIGenerator(model=\"gpt-4o-mini\")"
 ]
 },
 {
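Taken together with the imports the notebook sets up earlier, the updated cell amounts to the following self-contained snippet (a sketch; the surrounding RAG pipeline is omitted):

```python
import os
from getpass import getpass

from haystack.components.generators import OpenAIGenerator

# Ask for the key only when the environment doesn't already provide one.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter OpenAI API key:")

# The commit only swaps the model name; the rest of the call is unchanged.
generator = OpenAIGenerator(model="gpt-4o-mini")
```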

tutorials/28_Structured_Output_With_Loop.ipynb (+3, -3)
@@ -10,7 +10,7 @@
 "\n",
 "- **Level**: Intermediate\n",
 "- **Time to complete**: 15 minutes\n",
-"- **Prerequisites**: You must have an API key from an active OpenAI account as this tutorial is using the gpt-3.5-turbo model by OpenAI.\n",
+"- **Prerequisites**: You must have an API key from an active OpenAI account as this tutorial is using the gpt-4o-mini model by OpenAI.\n",
 "- **Components Used**: `PromptBuilder`, `OpenAIGenerator`, `OutputValidator` (Custom component)\n",
 "- **Goal**: After completing this tutorial, you will have built a system that extracts unstructured data, puts it in a JSON schema, and automatically corrects errors in the JSON output from a large language model (LLM) to make sure it follows the specified structure.\n",
 "\n",
@@ -19,7 +19,7 @@
 "## Overview\n",
 "This tutorial demonstrates how to use Haystack 2.0's advanced [looping pipelines](https://docs.haystack.deepset.ai/docs/pipelines#loops) with LLMs for more dynamic and flexible data processing. You'll learn how to extract structured data from unstructured data using an LLM, and to validate the generated output against a predefined schema.\n",
 "\n",
-"This tutorial uses `gpt-3.5-turbo` to change unstructured passages into JSON outputs that follow the [Pydantic](https://github.com/pydantic/pydantic) schema. It uses a custom OutputValidator component to validate the JSON and loop back to make corrections, if necessary."
+"This tutorial uses `gpt-4o-mini` to change unstructured passages into JSON outputs that follow the [Pydantic](https://github.com/pydantic/pydantic) schema. It uses a custom OutputValidator component to validate the JSON and loop back to make corrections, if necessary."
 ]
 },
 {
@@ -293,7 +293,7 @@
 "## Initalizing the Generator\n",
 "\n",
 "[OpenAIGenerator](https://docs.haystack.deepset.ai/docs/openaigenerator) generates\n",
-"text using OpenAI's `gpt-3.5-turbo` model by default. Set the `OPENAI_API_KEY` variable and provide a model name to the Generator."
+"text using OpenAI's `gpt-4o-mini` model by default. Set the `OPENAI_API_KEY` variable and provide a model name to the Generator."
 ]
 },
 {
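The loop described in the overview hinges on a custom `OutputValidator` component that either passes validated JSON through or routes the faulty reply back to the generator. A minimal sketch of that idea, with a hypothetical `City` schema standing in for the tutorial's actual Pydantic model:

```python
from typing import List, Optional

from haystack import component
from pydantic import BaseModel, ValidationError


class City(BaseModel):
    """Hypothetical schema for illustration; the tutorial defines its own."""
    name: str
    country: str
    population: int


@component
class OutputValidator:
    @component.output_types(
        valid_replies=List[str],
        invalid_replies=Optional[List[str]],
        error_message=Optional[str],
    )
    def run(self, replies: List[str]):
        try:
            # Raises if the reply is not valid JSON or doesn't match the schema.
            City.model_validate_json(replies[0])
            return {"valid_replies": replies}
        except ValidationError as error:
            # Route the faulty reply and the error back so the LLM can retry.
            return {"invalid_replies": replies, "error_message": str(error)}
```

In the pipeline, `invalid_replies` and `error_message` are wired back into the prompt so the next generation attempt can correct itself.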

tutorials/35_Evaluating_RAG_Pipelines.ipynb (+2, -2)
@@ -11,7 +11,7 @@
 "- **Level**: Intermediate\n",
 "- **Time to complete**: 15 minutes\n",
 "- **Components Used**: `InMemoryDocumentStore`, `InMemoryEmbeddingRetriever`, `PromptBuilder`, `OpenAIGenerator`, `DocumentMRREvaluator`, `FaithfulnessEvaluator`, `SASEvaluator`\n",
-"- **Prerequisites**: You must have an API key from an active OpenAI account as this tutorial is using the gpt-3.5-turbo model by OpenAI: https://platform.openai.com/api-keys\n",
+"- **Prerequisites**: You must have an API key from an active OpenAI account as this tutorial is using the gpt-4o-mini model by OpenAI: https://platform.openai.com/api-keys\n",
 "- **Goal**: After completing this tutorial, you'll have learned how to evaluate your RAG pipelines both with model-based, and statistical metrics available in the Haystack evaluation offering. You'll also see which other evaluation frameworks are integrated with Haystack.\n",
 "\n",
 "> This tutorial uses Haystack 2.0. To learn more, read the [Haystack 2.0 announcement](https://haystack.deepset.ai/blog/haystack-2-release) or visit the [Haystack 2.0 Documentation](https://docs.haystack.deepset.ai/docs/intro)."
@@ -862,7 +862,7 @@
 ")\n",
 "rag_pipeline.add_component(\"retriever\", InMemoryEmbeddingRetriever(document_store, top_k=3))\n",
 "rag_pipeline.add_component(\"prompt_builder\", PromptBuilder(template=template))\n",
-"rag_pipeline.add_component(\"generator\", OpenAIGenerator(model=\"gpt-3.5-turbo\"))\n",
+"rag_pipeline.add_component(\"generator\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n",
 "rag_pipeline.add_component(\"answer_builder\", AnswerBuilder())\n",
 "\n",
 "rag_pipeline.connect(\"query_embedder\", \"retriever.query_embedding\")\n",

tutorials/36_Building_Fallbacks_with_Conditional_Routing.ipynb (+6, -6)
@@ -178,9 +178,9 @@
 "\n",
 "First, define a prompt instructing the LLM to respond with the text `\"no_answer\"` if the provided documents do not offer enough context to answer the query. Next, initialize a [PromptBuilder](https://docs.haystack.deepset.ai/docs/promptbuilder) with that prompt. It's crucial that the LLM replies with `\"no_answer\"` as you will use this keyword to indicate that the query should be directed to the fallback web search route.\n",
 "\n",
-"As the LLM, you will use an [OpenAIGenerator](https://docs.haystack.deepset.ai/docs/openaigenerator) with the `gpt-3.5-turbo` model.\n",
+"As the LLM, you will use an [OpenAIGenerator](https://docs.haystack.deepset.ai/docs/openaigenerator) with the `gpt-4o-mini` model.\n",
 "\n",
-"> The provided prompt works effectively with the `gpt-3.5-turbo` model. If you prefer to use a different [Generator](https://docs.haystack.deepset.ai/docs/generators), you may need to update the prompt to provide clear instructions to your model."
+"> The provided prompt works effectively with the `gpt-4o-mini` model. If you prefer to use a different [Generator](https://docs.haystack.deepset.ai/docs/generators), you may need to update the prompt to provide clear instructions to your model."
 ]
 },
 {
@@ -205,7 +205,7 @@
 "\"\"\"\n",
 "\n",
 "prompt_builder = PromptBuilder(template=prompt_template)\n",
-"llm = OpenAIGenerator(model=\"gpt-3.5-turbo\")"
+"llm = OpenAIGenerator(model=\"gpt-4o-mini\")"
 ]
 },
 {
@@ -246,7 +246,7 @@
 "\n",
 "websearch = SerperDevWebSearch()\n",
 "prompt_builder_for_websearch = PromptBuilder(template=prompt_for_websearch)\n",
-"llm_for_websearch = OpenAIGenerator(model=\"gpt-3.5-turbo\")"
+"llm_for_websearch = OpenAIGenerator(model=\"gpt-4o-mini\")"
 ]
 },
 {
@@ -472,7 +472,7 @@
 {
 "data": {
 "text/plain": [
-"{'llm': {'meta': [{'model': 'gpt-3.5-turbo-0613',\n",
+"{'llm': {'meta': [{'model': 'gpt-4o-mini-2024-07-18',\n",
 " 'index': 0,\n",
 " 'finish_reason': 'stop',\n",
 " 'usage': {'completion_tokens': 2,\n",
@@ -488,7 +488,7 @@
 " 'https://www.quora.com/How-many-people-live-in-Munich',\n",
 " 'https://earth.esa.int/web/earth-watching/image-of-the-week/content/-/article/munich-germany/']},\n",
 " 'llm_for_websearch': {'replies': ['According to the documents retrieved from the web, the population of Munich is approximately 1.47 million as of 2019. However, the most recent estimates suggest that the population has grown to about 1.58 million as of May 31, 2022. Additionally, the current estimated population of Munich is around 1.46 million, with the urban area being much larger at 2.65 million.'],\n",
-" 'meta': [{'model': 'gpt-3.5-turbo-0613',\n",
+" 'meta': [{'model': 'gpt-4o-mini-2024-07-18',\n",
 " 'index': 0,\n",
 " 'finish_reason': 'stop',\n",
 " 'usage': {'completion_tokens': 85,\n",

tutorials/40_Building_Chat_Application_with_Function_Calling.ipynb (+8, -8)
@@ -138,7 +138,7 @@
 {
 "data": {
 "text/plain": [
-"{'replies': [ChatMessage(content='Natürliche Sprachverarbeitung (NLP) ist ein Bereich der künstlichen Intelligenz, der sich mit der Wechselwirkung zwischen Menschensprache und Maschinen befasst. Es zielt darauf ab, Computern das Verstehen, Interpretieren und Generieren menschlicher Sprache zu ermöglichen.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'gpt-3.5-turbo-0125', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 74, 'prompt_tokens': 34, 'total_tokens': 108}})]}"
+"{'replies': [ChatMessage(content='Natürliche Sprachverarbeitung (NLP) ist ein Bereich der künstlichen Intelligenz, der sich mit der Wechselwirkung zwischen Menschensprache und Maschinen befasst. Es zielt darauf ab, Computern das Verstehen, Interpretieren und Generieren menschlicher Sprache zu ermöglichen.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 74, 'prompt_tokens': 34, 'total_tokens': 108}})]}"
 ]
 },
 "execution_count": 4,
@@ -155,7 +155,7 @@
 " ChatMessage.from_user(\"What's Natural Language Processing? Be brief.\"),\n",
 "]\n",
 "\n",
-"chat_generator = OpenAIChatGenerator(model=\"gpt-3.5-turbo\")\n",
+"chat_generator = OpenAIChatGenerator(model=\"gpt-4o-mini\")\n",
 "chat_generator.run(messages=messages)"
 ]
 },
@@ -194,7 +194,7 @@
 "from haystack.components.generators.chat import OpenAIChatGenerator\n",
 "from haystack.components.generators.utils import print_streaming_chunk\n",
 "\n",
-"chat_generator = OpenAIChatGenerator(model=\"gpt-3.5-turbo\", streaming_callback=print_streaming_chunk)\n",
+"chat_generator = OpenAIChatGenerator(model=\"gpt-4o-mini\", streaming_callback=print_streaming_chunk)\n",
 "response = chat_generator.run(messages=messages)"
 ]
 },
@@ -662,7 +662,7 @@
 "rag_pipe.add_component(\"embedder\", SentenceTransformersTextEmbedder(model=\"sentence-transformers/all-MiniLM-L6-v2\"))\n",
 "rag_pipe.add_component(\"retriever\", InMemoryEmbeddingRetriever(document_store=document_store))\n",
 "rag_pipe.add_component(\"prompt_builder\", PromptBuilder(template=template))\n",
-"rag_pipe.add_component(\"llm\", OpenAIGenerator(model=\"gpt-3.5-turbo\"))\n",
+"rag_pipe.add_component(\"llm\", OpenAIGenerator(model=\"gpt-4o-mini\"))\n",
 "\n",
 "rag_pipe.connect(\"embedder.embedding\", \"retriever.query_embedding\")\n",
 "rag_pipe.connect(\"retriever\", \"prompt_builder.documents\")\n",
@@ -722,7 +722,7 @@
 "data": {
 "text/plain": [
 "{'llm': {'replies': ['Berlin'],\n",
-" 'meta': [{'model': 'gpt-3.5-turbo-0125',\n",
+" 'meta': [{'model': 'gpt-4o-mini-2024-07-18',\n",
 " 'index': 0,\n",
 " 'finish_reason': 'stop',\n",
 " 'usage': {'completion_tokens': 1,\n",
@@ -886,7 +886,7 @@
 " ChatMessage.from_user(\"Can you tell me where Mark lives?\"),\n",
 "]\n",
 "\n",
-"chat_generator = OpenAIChatGenerator(model=\"gpt-3.5-turbo\", streaming_callback=print_streaming_chunk)\n",
+"chat_generator = OpenAIChatGenerator(model=\"gpt-4o-mini\", streaming_callback=print_streaming_chunk)\n",
 "response = chat_generator.run(messages=messages, generation_kwargs={\"tools\": tools})"
 ]
 },
@@ -908,7 +908,7 @@
 " ChatMessage(\n",
 " content='[{\"index\": 0, \"id\": \"call_3VnT0XQH0ye41g3Ip5CRz4ri\", \"function\": {\"arguments\": \"{\\\\\"query\\\\\":\\\\\"Where does Mark live?\\\\\"}\", \"name\": \"rag_pipeline_func\"}, \"type\": \"function\"}]', role=<ChatRole.ASSISTANT: 'assistant'>, \n",
 " name=None, \n",
-" meta={'model': 'gpt-3.5-turbo-0125', 'index': 0, 'finish_reason': 'tool_calls', 'usage': {}}\n",
+" meta={'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'usage': {}}\n",
 " )\n",
 " ]\n",
 "}\n",
@@ -1098,7 +1098,7 @@
 "from haystack.dataclasses import ChatMessage\n",
 "from haystack.components.generators.chat import OpenAIChatGenerator\n",
 "\n",
-"chat_generator = OpenAIChatGenerator(model=\"gpt-3.5-turbo\")\n",
+"chat_generator = OpenAIChatGenerator(model=\"gpt-4o-mini\")\n",
 "response = None\n",
 "messages = [\n",
 " ChatMessage.from_system(\n",
