Replies: 3 comments 1 reply
-
I would like to know too. I haven't had much luck figuring this out, partly because the state of the art is constantly changing. I would appreciate links to some reliable resources. If it's clear what an LLM frontend like gptel needs to do to support this, I can add the interface for it. Prior discussion: #176
-
I see GPT4All can be used, but it involves integrating multiple components:
https://medium.com/artificial-corner/gpt4all-is-the-local-chatgpt-for-your-documents-and-it-is-free-df1016bc335
-
I didn't find any tips on using external reference materials in the OpenAI docs, but Anthropic suggests that such context should come first and the rest of the prompt after it, and that it's better to put it in the system prompt. I don't know whether these suggestions carry over to other models (though I don't see why not). To further improve responses and reduce cost, one can filter the documents with an embedding model so that only the most relevant context is sent with the current question. The reference materials are sent to the embedding model only once, and the resulting vectors can be kept locally; later, only the query is run through the embedding model to get its vector, and the filtering happens locally. But that's overkill for this project, at least for now, I think.
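The embed-once / filter-locally flow described above can be sketched roughly like this. This is only a toy illustration: the `embed` function here is a hashed bag-of-words stand-in for a real embedding model (a local model or an API call), and the document texts are made up. Only the structure is the point: documents are embedded once and cached, then at query time only the query is embedded and the ranking happens locally.

```python
import hashlib
import math
import re

def embed(text, dim=512):
    """Toy stand-in for a real embedding model: hashed bag of words.
    A real setup would call an embedding model here instead; the
    cache-once / embed-query-later flow stays the same."""
    vec = [0.0] * dim
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Step 1: embed the reference documents once; cache the vectors locally.
documents = [
    "Emacs is an extensible text editor.",
    "gptel is an LLM client for Emacs.",
    "Photosynthesis converts light into chemical energy.",
]
index = [(doc, embed(doc)) for doc in documents]

# Step 2: at query time, only the query goes through the embedding
# model; ranking against the cached vectors is a local computation.
query = "which Emacs client talks to an LLM"
qvec = embed(query)
ranked = sorted(index, key=lambda item: cosine(qvec, item[1]), reverse=True)
top = ranked[0][0]  # most relevant document, to be placed first in the prompt
```

The top-ranked documents would then be prepended to the prompt (or put in the system prompt, per the Anthropic suggestion above) before sending the query to the chat model.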
-
Hi
I want an LLM to ingest a local document, after which I want to use gptel to chat with the LLM and ask questions about that document.
Eventually, I want to keep adding documents so that I can chat via gptel and query everything I have on my laptop.
What is the best workflow for doing this? Is there a blog, document, or website I can refer to?
There are so many LLMs out there that I'm not sure which one is right; I'd prefer something open source and free.
Any advice would be of great help.