
Commit f915427: update links to docs (#329)

1 parent: 9ce8a54

12 files changed, +18093 -18074 lines

tutorials/27_First_RAG_Pipeline.ipynb (+13, -13)
@@ -10,7 +10,7 @@
 "\n",
 "- **Level**: Beginner\n",
 "- **Time to complete**: 10 minutes\n",
-"- **Components Used**: [`InMemoryDocumentStore`](https://docs.haystack.deepset.ai/v2.0/docs/inmemorydocumentstore), [`SentenceTransformersDocumentEmbedder`](https://docs.haystack.deepset.ai/v2.0/docs/sentencetransformersdocumentembedder), [`SentenceTransformersTextEmbedder`](https://docs.haystack.deepset.ai/v2.0/docs/sentencetransformerstextembedder), [`InMemoryEmbeddingRetriever`](https://docs.haystack.deepset.ai/v2.0/docs/inmemoryembeddingretriever), [`PromptBuilder`](https://docs.haystack.deepset.ai/v2.0/docs/promptbuilder), [`OpenAIGenerator`](https://docs.haystack.deepset.ai/v2.0/docs/openaigenerator)\n",
+"- **Components Used**: [`InMemoryDocumentStore`](https://docs.haystack.deepset.ai/docs/inmemorydocumentstore), [`SentenceTransformersDocumentEmbedder`](https://docs.haystack.deepset.ai/docs/sentencetransformersdocumentembedder), [`SentenceTransformersTextEmbedder`](https://docs.haystack.deepset.ai/docs/sentencetransformerstextembedder), [`InMemoryEmbeddingRetriever`](https://docs.haystack.deepset.ai/docs/inmemoryembeddingretriever), [`PromptBuilder`](https://docs.haystack.deepset.ai/docs/promptbuilder), [`OpenAIGenerator`](https://docs.haystack.deepset.ai/docs/openaigenerator)\n",
 "- **Prerequisites**: You must have an [OpenAI API Key](https://platform.openai.com/api-keys).\n",
 "- **Goal**: After completing this tutorial, you'll have learned the new prompt syntax and how to use PromptBuilder and OpenAIGenerator to build a generative question-answering pipeline with retrieval-augmentation.\n",
 "\n",
@@ -25,7 +25,7 @@
 "source": [
 "## Overview\n",
 "\n",
-"This tutorial shows you how to create a generative question-answering pipeline using the retrieval-augmentation ([RAG](https://www.deepset.ai/blog/llms-retrieval-augmentation)) approach with Haystack 2.0. The process involves four main components: [SentenceTransformersTextEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/sentencetransformerstextembedder) for creating an embedding for the user query, [InMemoryBM25Retriever](https://docs.haystack.deepset.ai/v2.0/docs/inmemorybm25retriever) for fetching relevant documents, [PromptBuilder](https://docs.haystack.deepset.ai/v2.0/docs/promptbuilder) for creating a template prompt, and [OpenAIGenerator](https://docs.haystack.deepset.ai/v2.0/docs/openaigenerator) for generating responses.\n",
+"This tutorial shows you how to create a generative question-answering pipeline using the retrieval-augmentation ([RAG](https://www.deepset.ai/blog/llms-retrieval-augmentation)) approach with Haystack 2.0. The process involves four main components: [SentenceTransformersTextEmbedder](https://docs.haystack.deepset.ai/docs/sentencetransformerstextembedder) for creating an embedding for the user query, [InMemoryBM25Retriever](https://docs.haystack.deepset.ai/docs/inmemorybm25retriever) for fetching relevant documents, [PromptBuilder](https://docs.haystack.deepset.ai/docs/promptbuilder) for creating a template prompt, and [OpenAIGenerator](https://docs.haystack.deepset.ai/docs/openaigenerator) for generating responses.\n",
 "\n",
 "For this tutorial, you'll use the Wikipedia pages of [Seven Wonders of the Ancient World](https://en.wikipedia.org/wiki/Wonders_of_the_World) as Documents, but you can replace them with any text you want.\n"
 ]
@@ -38,8 +38,8 @@
 "source": [
 "## Preparing the Colab Environment\n",
 "\n",
-"- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/v2.0/docs/enabling-gpu-acceleration)\n",
-"- [Set logging level to INFO](https://docs.haystack.deepset.ai/v2.0/docs/logging)"
+"- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
+"- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
 ]
 },
 {
@@ -183,7 +183,7 @@
 "source": [
 "### Enabling Telemetry\n",
 "\n",
-"Knowing you're using this tutorial helps us decide where to invest our efforts to build a better product but you can always opt out by commenting the following line. See [Telemetry](https://docs.haystack.deepset.ai/v2.0/docs/enabling-telemetry) for more details."
+"Knowing you're using this tutorial helps us decide where to invest our efforts to build a better product but you can always opt out by commenting the following line. See [Telemetry](https://docs.haystack.deepset.ai/docs/enabling-telemetry) for more details."
 ]
 },
 {
@@ -301,9 +301,9 @@
 "source": [
 "### Initalize a Document Embedder\n",
 "\n",
-"To store your data in the DocumentStore with embeddings, initialize a [SentenceTransformersDocumentEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/sentencetransformersdocumentembedder) with the model name and call `warm_up()` to download the embedding model.\n",
+"To store your data in the DocumentStore with embeddings, initialize a [SentenceTransformersDocumentEmbedder](https://docs.haystack.deepset.ai/docs/sentencetransformersdocumentembedder) with the model name and call `warm_up()` to download the embedding model.\n",
 "\n",
-"> If you'd like, you can use a different [Embedder](https://docs.haystack.deepset.ai/v2.0/docs/embedders) for your documents."
+"> If you'd like, you can use a different [Embedder](https://docs.haystack.deepset.ai/docs/embedders) for your documents."
 ]
 },
 {
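The cell edited in this hunk walks through initializing a document embedder and calling `warm_up()`. For orientation, here is a minimal sketch of that step against the Haystack 2.x API; the model name and the sample document are illustrative assumptions, not taken from this diff.

```python
from haystack import Document
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()
docs = [Document(content="The Great Pyramid of Giza is the oldest of the Seven Wonders.")]

# Initialize the embedder with a model name; warm_up() downloads the model weights.
doc_embedder = SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
doc_embedder.warm_up()

# run() returns {"documents": [...]} with an embedding attached to each Document.
docs_with_embeddings = doc_embedder.run(documents=docs)["documents"]
document_store.write_documents(docs_with_embeddings)
```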
@@ -407,7 +407,7 @@
 "source": [
 "## Building the RAG Pipeline\n",
 "\n",
-"The next step is to build a [Pipeline](https://docs.haystack.deepset.ai/v2.0/docs/pipelines) to generate answers for the user query following the RAG approach. To create the pipeline, you first need to initialize each component, add them to your pipeline, and connect them."
+"The next step is to build a [Pipeline](https://docs.haystack.deepset.ai/docs/pipelines) to generate answers for the user query following the RAG approach. To create the pipeline, you first need to initialize each component, add them to your pipeline, and connect them."
 ]
 },
 {
@@ -444,7 +444,7 @@
 "source": [
 "### Initialize the Retriever\n",
 "\n",
-"Initialize a [InMemoryEmbeddingRetriever](https://docs.haystack.deepset.ai/v2.0/docs/inmemoryembeddingretriever) and make it use the InMemoryDocumentStore you initialized earlier in this tutorial. This Retriever will get the relevant documents to the query."
+"Initialize a [InMemoryEmbeddingRetriever](https://docs.haystack.deepset.ai/docs/inmemoryembeddingretriever) and make it use the InMemoryDocumentStore you initialized earlier in this tutorial. This Retriever will get the relevant documents to the query."
 ]
 },
 {
@@ -470,7 +470,7 @@
 "\n",
 "Create a custom prompt for a generative question answering task using the RAG approach. The prompt should take in two parameters: `documents`, which are retrieved from a document store, and a `question` from the user. Use the Jinja2 looping syntax to combine the content of the retrieved documents in the prompt.\n",
 "\n",
-"Next, initialize a [PromptBuilder](https://docs.haystack.deepset.ai/v2.0/docs/promptbuilder) instance with your prompt template. The PromptBuilder, when given the necessary values, will automatically fill in the variable values and generate a complete prompt. This approach allows for a more tailored and effective question-answering experience."
+"Next, initialize a [PromptBuilder](https://docs.haystack.deepset.ai/docs/promptbuilder) instance with your prompt template. The PromptBuilder, when given the necessary values, will automatically fill in the variable values and generate a complete prompt. This approach allows for a more tailored and effective question-answering experience."
 ]
 },
 {
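The cell touched here describes a Jinja2 prompt template with `documents` and `question` variables and a PromptBuilder built from it. A minimal sketch, with illustrative template wording:

```python
from haystack.components.builders import PromptBuilder

# Jinja2 template: loop over the retrieved documents, then append the user question.
template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ question }}
Answer:
"""

prompt_builder = PromptBuilder(template=template)
```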
@@ -507,7 +507,7 @@
 "### Initialize a Generator\n",
 "\n",
 "\n",
-"Generators are the components that interact with large language models (LLMs). Now, set `OPENAI_API_KEY` environment variable and initialize a [OpenAIGenerator](https://docs.haystack.deepset.ai/v2.0/docs/OpenAIGenerator) that can communicate with OpenAI GPT models. As you initialize, provide a model name:"
+"Generators are the components that interact with large language models (LLMs). Now, set `OPENAI_API_KEY` environment variable and initialize a [OpenAIGenerator](https://docs.haystack.deepset.ai/docs/OpenAIGenerator) that can communicate with OpenAI GPT models. As you initialize, provide a model name:"
 ]
 },
 {
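The generator cell referenced in this hunk sets the `OPENAI_API_KEY` environment variable and initializes an OpenAIGenerator with a model name. A hedged sketch; prompting for the key and choosing `gpt-3.5-turbo` are assumptions for illustration:

```python
import os
from getpass import getpass

from haystack.components.generators import OpenAIGenerator

# Read the API key from the environment, prompting for it if it isn't set.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")

generator = OpenAIGenerator(model="gpt-3.5-turbo")
```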
@@ -545,7 +545,7 @@
 "id": "nenbo2SvycHd"
 },
 "source": [
-"> You can replace `OpenAIGenerator` in your pipeline with another `Generator`. Check out the full list of generators [here](https://docs.haystack.deepset.ai/v2.0/docs/generators)."
+"> You can replace `OpenAIGenerator` in your pipeline with another `Generator`. Check out the full list of generators [here](https://docs.haystack.deepset.ai/docs/generators)."
 ]
 },
 {
@@ -558,7 +558,7 @@
 "\n",
 "To build a pipeline, add all components to your pipeline and connect them. Create connections from `text_embedder`'s \"embedding\" output to \"query_embedding\" input of `retriever`, from `retriever` to `prompt_builder` and from `prompt_builder` to `llm`. Explicitly connect the output of `retriever` with \"documents\" input of the `prompt_builder` to make the connection obvious as `prompt_builder` has two inputs (\"documents\" and \"question\").\n",
 "\n",
-"For more information on pipelines and creating connections, refer to [Creating Pipelines](https://docs.haystack.deepset.ai/v2.0/docs/creating-pipelines) documentation."
+"For more information on pipelines and creating connections, refer to [Creating Pipelines](https://docs.haystack.deepset.ai/docs/creating-pipelines) documentation."
 ]
 },
 {
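The connection instructions in the cell above map directly onto `add_component` and `connect` calls. A sketch of the assembled RAG pipeline, assuming the `document_store`, `prompt_builder`, and `generator` objects from the earlier sketches; the example question is illustrative:

```python
from haystack import Pipeline
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever

text_embedder = SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
retriever = InMemoryEmbeddingRetriever(document_store)

basic_rag_pipeline = Pipeline()
basic_rag_pipeline.add_component("text_embedder", text_embedder)
basic_rag_pipeline.add_component("retriever", retriever)
basic_rag_pipeline.add_component("prompt_builder", prompt_builder)
basic_rag_pipeline.add_component("llm", generator)

# The embedder's "embedding" output feeds the retriever's "query_embedding" input;
# the retriever's documents are wired explicitly into the prompt builder.
basic_rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
basic_rag_pipeline.connect("retriever", "prompt_builder.documents")
basic_rag_pipeline.connect("prompt_builder", "llm")

question = "What does the Rhodes Statue look like?"
result = basic_rag_pipeline.run({"text_embedder": {"text": question}, "prompt_builder": {"question": question}})
print(result["llm"]["replies"][0])
```

Connecting `retriever` to `prompt_builder.documents` by full socket name avoids ambiguity, since the prompt builder also takes a `question` input at run time.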

tutorials/28_Structured_Output_With_Loop.ipynb (+5, -5)
@@ -17,7 +17,7 @@
 "> This tutorial uses Haystack 2.0. To learn more, read the [Haystack 2.0 announcement](https://haystack.deepset.ai/blog/haystack-2-release) or visit the [Haystack 2.0 Documentation](https://docs.haystack.deepset.ai/docs/intro)..\n",
 "\n",
 "## Overview\n",
-"This tutorial demonstrates how to use Haystack 2.0's advanced [looping pipelines](https://docs.haystack.deepset.ai/v2.0/docs/pipelines#loops) with LLMs for more dynamic and flexible data processing. You'll learn how to extract structured data from unstructured data using an LLM, and to validate the generated output against a predefined schema.\n",
+"This tutorial demonstrates how to use Haystack 2.0's advanced [looping pipelines](https://docs.haystack.deepset.ai/docs/pipelines#loops) with LLMs for more dynamic and flexible data processing. You'll learn how to extract structured data from unstructured data using an LLM, and to validate the generated output against a predefined schema.\n",
 "\n",
 "This tutorial uses `gpt-3.5-turbo` to change unstructured passages into JSON outputs that follow the [Pydantic](https://github.com/pydantic/pydantic) schema. It uses a custom OutputValidator component to validate the JSON and loop back to make corrections, if necessary."
 ]
@@ -173,7 +173,7 @@
 "\n",
 "`OutputValidator` is a custom component that validates if the JSON object the LLM generates complies with the provided [Pydantic model](https://docs.pydantic.dev/1.10/usage/models/). If it doesn't, OutputValidator returns an error message along with the incorrect JSON object to get it fixed in the next loop.\n",
 "\n",
-"For more details about custom components, see [Creating Custom Components](https://docs.haystack.deepset.ai/v2.0/docs/custom-components)."
+"For more details about custom components, see [Creating Custom Components](https://docs.haystack.deepset.ai/docs/custom-components)."
 ]
 },
 {
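The cell above introduces `OutputValidator`, a custom component. A condensed sketch of how such a component can look with the `@component` decorator, assuming Pydantic v2; the output socket names mirror the tutorial's description, but the body is simplified for illustration:

```python
from typing import List, Optional

from pydantic import ValidationError
from haystack import component


@component
class OutputValidator:
    def __init__(self, pydantic_model):
        # pydantic_model: a Pydantic BaseModel subclass describing the expected JSON shape.
        self.pydantic_model = pydantic_model

    @component.output_types(
        valid_replies=List[str],
        invalid_replies=Optional[List[str]],
        error_message=Optional[str],
    )
    def run(self, replies: List[str]):
        try:
            # Raises ValidationError if the generated JSON doesn't match the schema.
            self.pydantic_model.model_validate_json(replies[0])
            return {"valid_replies": replies}
        except ValidationError as e:
            # Return the faulty output and the error so the next loop can correct it.
            return {"invalid_replies": replies, "error_message": str(e)}
```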
@@ -257,7 +257,7 @@
 "\n",
 "Write instructions for the LLM for converting a passage into a JSON format. Ensure the instructions explain how to identify and correct errors if the JSON doesn't match the required schema. Once you create the prompt, initialize PromptBuilder to use it. \n",
 "\n",
-"For information about Jinja2 template and PromptBuilder, see [PromptBuilder](https://docs.haystack.deepset.ai/v2.0/docs/promptbuilder)."
+"For information about Jinja2 template and PromptBuilder, see [PromptBuilder](https://docs.haystack.deepset.ai/docs/promptbuilder)."
 ]
 },
 {
@@ -292,7 +292,7 @@
 "source": [
 "## Initalizing the Generator\n",
 "\n",
-"[OpenAIGenerator](https://docs.haystack.deepset.ai/v2.0/docs/openaigenerator) generates\n",
+"[OpenAIGenerator](https://docs.haystack.deepset.ai/docs/openaigenerator) generates\n",
 "text using OpenAI's `gpt-3.5-turbo` model by default. Set the `OPENAI_API_KEY` variable and provide a model name to the Generator."
 ]
 },
@@ -358,7 +358,7 @@
 "source": [
 "### Visualize the Pipeline\n",
 "\n",
-"Draw the pipeline with the [`draw()`](https://docs.haystack.deepset.ai/v2.0/docs/drawing-pipeline-graphs) method to confirm the connections are correct. You can find the diagram in the Files section of this Colab."
+"Draw the pipeline with the [`draw()`](https://docs.haystack.deepset.ai/docs/drawing-pipeline-graphs) method to confirm the connections are correct. You can find the diagram in the Files section of this Colab."
 ]
 },
 {
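The visualization cell mentioned here boils down to a single call. A sketch, assuming a pipeline object named `pipeline` and an illustrative output filename:

```python
from pathlib import Path

# Writes a diagram of the pipeline graph to disk (visible under Files in Colab).
pipeline.draw(Path("auto_correct_pipeline.png"))
```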

tutorials/29_Serializing_Pipelines.ipynb (+5, -5)
@@ -10,7 +10,7 @@
 "\n",
 "- **Level**: Beginner\n",
 "- **Time to complete**: 10 minutes\n",
-"- **Components Used**: [`HuggingFaceLocalGenerator`](https://docs.haystack.deepset.ai/v2.0/docs/huggingfacelocalgenerator), [`PromptBuilder`](https://docs.haystack.deepset.ai/v2.0/docs/promptbuilder)\n",
+"- **Components Used**: [`HuggingFaceLocalGenerator`](https://docs.haystack.deepset.ai/docs/huggingfacelocalgenerator), [`PromptBuilder`](https://docs.haystack.deepset.ai/docs/promptbuilder)\n",
 "- **Prerequisites**: None\n",
 "- **Goal**: After completing this tutorial, you'll understand how to serialize and deserialize between YAML and Python code.\n",
 "\n",
@@ -25,7 +25,7 @@
 "source": [
 "## Overview\n",
 "\n",
-"**📚 Useful Documentation:** [Serialization](https://docs.haystack.deepset.ai/v2.0/docs/serialization)\n",
+"**📚 Useful Documentation:** [Serialization](https://docs.haystack.deepset.ai/docs/serialization)\n",
 "\n",
 "Serialization means converting a pipeline to a format that you can save on your disk and load later. It's especially useful because a serialized pipeline can be saved on disk or a database, get sent over a network and more. \n",
 "\n",
@@ -40,8 +40,8 @@
 "source": [
 "## Preparing the Colab Environment\n",
 "\n",
-"- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/v2.0/docs/enabling-gpu-acceleration)\n",
-"- [Set logging level to INFO](https://docs.haystack.deepset.ai/v2.0/docs/logging)"
+"- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
+"- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
 ]
 },
 {
@@ -121,7 +121,7 @@
 "source": [
 "### Enabling Telemetry\n",
 "\n",
-"Knowing you're using this tutorial helps us decide where to invest our efforts to build a better product but you can always opt out by commenting the following line. See [Telemetry](https://docs.haystack.deepset.ai/v2.0/docs/enabling-telemetry) for more details."
+"Knowing you're using this tutorial helps us decide where to invest our efforts to build a better product but you can always opt out by commenting the following line. See [Telemetry](https://docs.haystack.deepset.ai/docs/enabling-telemetry) for more details."
 ]
 },
 {
