Upload your own set of documents and chat with them using a model of your choice. You can also use the open-source DeepSeek R1 model; just specify the details in the settings.
This app is a modified version of the Streamlit for LlamaIndex example that lets you actually upload your own files (PDF, TXT, DOCX, CSV, etc.) and chat with them. The original version uses a hardcoded folder. I also added the option to use open-source models here, not only GPT-4o.
- Takes user queries via Streamlit's `st.chat_input` and displays both user queries and model responses with `st.chat_message`
- Uses LlamaIndex to load and index data and create a chat engine that retrieves context from that data to respond to each user query
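The two bullets above describe a retrieve-then-answer loop. The sketch below illustrates that pattern in plain Python: the naive word-overlap `retrieve` and the `answer` stand-in are hypothetical toys, not the app's actual LlamaIndex chat engine or Streamlit widgets.

```python
# Toy sketch of the retrieve-then-answer flow. In the real app, Streamlit's
# st.chat_input / st.chat_message handle I/O and a LlamaIndex chat engine
# does retrieval; both are replaced here with plain-Python stand-ins.

def retrieve(query: str, documents: dict[str, str], top_k: int = 1) -> list[str]:
    """Hypothetical retriever: rank documents by naive word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def answer(query: str, documents: dict[str, str]) -> str:
    """Stand-in for the chat engine: build a reply from the retrieved context."""
    context = retrieve(query, documents)
    return f"Based on {len(context)} retrieved passage(s): {context[0]}"

docs = {
    "a.txt": "Streamlit builds data apps in pure Python.",
    "b.txt": "LlamaIndex indexes documents for retrieval.",
}
print(answer("How does LlamaIndex handle documents?", docs))
```

A real deployment replaces `retrieve` with a vector index over your uploaded files and `answer` with an LLM call, but the control flow is the same.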
TBA
You can get your own OpenAI API key by following these instructions:
- Go to https://platform.openai.com/account/api-keys.
- Click on the `+ Create new secret key` button.
- Next, enter an identifier name (optional) and click on the `Create secret key` button.
- Add your API key to your environment variables as `OPENAI_KEY`. Alternatively, you can add this API key to your `.streamlit/secrets.toml` file (rename the sample file after you add your key).
Caution
Don't commit your secrets file to your GitHub repository. The .gitignore file in this repo includes .streamlit/secrets.toml and secrets.toml.
- Clone the repo in your Terminal (or using a service like Koyeb)
git clone git@github.com:streamlit/llamaindex-chat-with-streamlit-docs.git
- Change to the `llamaindex-chat-with-streamlit-docs` directory and install dependencies:
pip install -r requirements.txt
Before that, you might want to switch to a fresh Python environment so you don't disturb your current one. For instance, if you're using Conda:
conda create -n streamlit
conda activate streamlit
- Once the dependencies are installed, run the app:
streamlit run streamlit_app.py
The app will be available at http://localhost:8501/.
Once the app is loaded, use the panel on the left to load your own library, reindex it, and, after indexing is done, ask a question in the chat.
Note that if you're using OpenAI's GPT-4o model (the default one), indexing can be quite expensive for big folders. So either reduce the size of your folder or use another model.
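To get a feel for the cost before indexing, a back-of-the-envelope estimate can help. Both numbers in the sketch below are illustrative assumptions, not OpenAI's actual rates: the ~4 characters-per-token heuristic is a common rule of thumb for English text, and the price per 1,000 tokens is hypothetical.

```python
# Rough cost estimate for indexing a folder of text.
# Assumptions (illustrative only, NOT actual OpenAI pricing):
#   - ~4 characters per token (common rule of thumb for English text)
#   - a hypothetical embedding price per 1,000 tokens

def estimate_indexing_cost(total_chars: int, usd_per_1k_tokens: float = 0.0001) -> float:
    """Return an approximate USD cost for embedding `total_chars` of text."""
    tokens = total_chars / 4          # ~4 chars/token heuristic
    return tokens / 1000 * usd_per_1k_tokens

# Example: roughly 10 MB of plain text
cost = estimate_indexing_cost(10 * 1024 * 1024)
print(f"~${cost:.2f}")
```

Check the current pricing page for your chosen model and plug in the real rate; if the estimate is too high, a local open-source model avoids per-token charges entirely.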
