
Commit f3d71f8

Modernized LlamaIndex integration (#1613)

Updated LlamaIndex example

Authored Jan 20, 2024
1 parent b7127c2 · commit f3d71f8

File tree: 2 files changed, +24 -36 lines


examples/llamaindex/README.md (12 additions, 15 deletions)

```diff
@@ -1,30 +1,27 @@
 # LocalAI Demonstration with Embeddings
 
-This demonstration shows you how to use embeddings with existing data in LocalAI. We are using the `llama_index` library to facilitate the embedding and querying processes. The `Weaviate` client is used as the embedding source.
-
-## Prerequisites
-
-Before proceeding, make sure you have the following installed:
-- Weaviate client
-- LocalAI and its dependencies
-- llama_index and its dependencies
+This demonstration shows you how to use embeddings with existing data in LocalAI.
+We are using the `llama-index` library to facilitate the embedding and querying processes.
+The `Weaviate` client is used as the embedding source.
 
 ## Getting Started
 
-1. Clone this repository:
-
-2. Navigate to the project directory:
+1. Clone this repository and navigate to this directory:
 
-3. Run the example:
+```bash
+git clone git@github.com:mudler/LocalAI.git
+cd LocalAI/examples/llamaindex
+```
 
-`python main.py`
+2. Install LlamaIndex and Weaviate's client: `pip install "llama-index>=0.9.9" weaviate-client`
+3. Run the example: `python main.py`
 
-```
+```none
 Downloading (…)lve/main/config.json: 100%|███████████████████████████| 684/684 [00:00<00:00, 6.01MB/s]
 Downloading model.safetensors: 100%|███████████████████████████████| 133M/133M [00:03<00:00, 39.5MB/s]
 Downloading (…)okenizer_config.json: 100%|███████████████████████████| 366/366 [00:00<00:00, 2.79MB/s]
 Downloading (…)solve/main/vocab.txt: 100%|█████████████████████████| 232k/232k [00:00<00:00, 6.00MB/s]
 Downloading (…)/main/tokenizer.json: 100%|█████████████████████████| 711k/711k [00:00<00:00, 18.8MB/s]
 Downloading (…)cial_tokens_map.json: 100%|███████████████████████████| 125/125 [00:00<00:00, 1.18MB/s]
 LocalAI is a community-driven project that aims to make AI accessible to everyone. It was created by Ettore Di Giacinto and is focused on providing various AI-related features such as text generation with GPTs, text to audio, audio to text, image generation, and more. The project is constantly growing and evolving, with a roadmap for future improvements. Anyone is welcome to contribute, provide feedback, and submit pull requests to help make LocalAI better.
-```
+```
```
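Before running the example, it can help to confirm that both backing services answer. Below is a minimal preflight sketch, assuming the weaviate-client v3 API and the in-cluster service URLs that main.py uses (`http://weviate.default` for Weaviate, `http://local-ai.default` for LocalAI); adjust both to your own deployment.

```python
# Preflight sketch: verify Weaviate and LocalAI are reachable before indexing.
# Assumptions: weaviate-client v3 (the API this example uses) and the service
# URLs taken from main.py; change them to match your deployment.
import urllib.request

import weaviate

client = weaviate.Client("http://weviate.default")
print("Weaviate ready:", client.is_ready())  # True when the instance is up

# LocalAI exposes an OpenAI-compatible API; /v1/models lists available models.
with urllib.request.urlopen("http://local-ai.default/v1/models") as resp:
    print("LocalAI models:", resp.read().decode())
```

If the readiness check returns False or the model listing fails, fix connectivity before running `python main.py`.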

examples/llamaindex/main.py (12 additions, 21 deletions)

```diff
@@ -1,38 +1,29 @@
-import os
-
 import weaviate
-
-from llama_index import ServiceContext, VectorStoreIndex, StorageContext
-from llama_index.llms import LocalAI
+from llama_index import ServiceContext, VectorStoreIndex
+from llama_index.llms import LOCALAI_DEFAULTS, OpenAILike
 from llama_index.vector_stores import WeaviateVectorStore
-from llama_index.storage.storage_context import StorageContext
-
-# Weaviate client setup
-client = weaviate.Client("http://weviate.default")
 
 # Weaviate vector store setup
-vector_store = WeaviateVectorStore(weaviate_client=client, index_name="AIChroma")
-
-# Storage context setup
-storage_context = StorageContext.from_defaults(vector_store=vector_store)
+vector_store = WeaviateVectorStore(
+    weaviate_client=weaviate.Client("http://weviate.default"), index_name="AIChroma"
+)
 
-# LocalAI setup
-llm = LocalAI(temperature=0, model_name="gpt-3.5-turbo", api_base="http://local-ai.default", api_key="stub")
-llm.globally_use_chat_completions = True;
+# LLM setup, served via LocalAI
+llm = OpenAILike(temperature=0, model="gpt-3.5-turbo", **LOCALAI_DEFAULTS)
 
 # Service context setup
 service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")
 
 # Load index from stored vectors
 index = VectorStoreIndex.from_vector_store(
-    vector_store,
-    storage_context=storage_context,
-    service_context=service_context
+    vector_store, service_context=service_context
 )
 
 # Query engine setup
-query_engine = index.as_query_engine(similarity_top_k=1, vector_store_query_mode="hybrid")
+query_engine = index.as_query_engine(
+    similarity_top_k=1, vector_store_query_mode="hybrid"
+)
 
 # Query example
 response = query_engine.query("What is LocalAI?")
-print(response)
+print(response)
```
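The modernized script swaps the dedicated `LocalAI` LLM class for the generic `OpenAILike` adapter, unpacking `LOCALAI_DEFAULTS` to supply the stub credentials and default endpoint that LocalAI's OpenAI-compatible API expects. Below is a minimal sketch of pointing it at a non-default address instead, assuming llama-index 0.9.x where `LOCALAI_DEFAULTS` is a plain dict of `OpenAILike` keyword arguments; the `/v1` suffix is an assumption based on the OpenAI-compatible URL layout.

```python
# Sketch: point OpenAILike at a non-default LocalAI endpoint.
# Assumptions: llama-index 0.9.x, where LOCALAI_DEFAULTS is a dict of
# OpenAILike kwargs (stub credentials plus a localhost api_base), and an
# OpenAI-compatible "/v1" URL layout on the LocalAI side.
from llama_index.llms import LOCALAI_DEFAULTS, OpenAILike

llm = OpenAILike(
    temperature=0,
    model="gpt-3.5-turbo",
    # Merge the defaults, then override only the endpoint.
    **{**LOCALAI_DEFAULTS, "api_base": "http://local-ai.default/v1"},
)
print(llm.complete("What is LocalAI?"))
```

Overriding the merged dict rather than mutating `LOCALAI_DEFAULTS` in place keeps the library's defaults intact for any other consumer in the same process.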
