Commit c30d8ba: Update README.md
1 parent 1abf17a

1 file changed: README.md (3 additions, 3 deletions)
@@ -53,10 +53,10 @@ https://github.com/user-attachments/assets/a7c383cd-3986-44cb-bd0e-0d4832b07500
 *Llama3.2*
 *Mistral*
 These models can be automatically downloaded, installed and used with just one click on the plugin page, models are all locally stored, ensuring not sending your data to remote LLMs.
-Of course, these models can be switched as your will, and smarter Open Source LLMs in the future would be accessed as soon as possible.
-- 100% Privacy and Safe of Your Personal Data. Besides local LLMs, the RAG modules of embeddings, vector database and rerank are all built and runned locally, There will be no data leakage and it can be used normally even on the plane when the internet can't be connected.
+Of course, these models can be switched as your will, and smarter Open Source LLMs in the future would be accessed as soon as possible.
+- 100% Privacy and Safe of Your Personal Data. Besides local LLMs, the RAG modules of embeddings, vector database and rerank are all built and runned locally, There will be no data leakage and it can be used normally even on the plane when the internet can't be connected.
 - Notice: Gemma 3 is the 2nd best open model only below DeepSeek-R1 benchmarked by Chatbot Arena, while much smaller than DeepSeek R1, and it is not a reasoning model, your Mac's memory should be at least 12G.
-QwQ-32B is as great as DeepSeek R1, while the model size is much smaller than DeepSeek. please ensure your Mac's memory at least 16G. DeepSeek-R1-Distill-Llama/Qwen is great! Highly recommended to try on your Mac, please ensure your Mac's memory at least 8G. If you want to use Gemma 3 or QwQ-32B on Windows, please use ollama or llama.cpp to deploy QwQ-32B locally first, and then connect to the local LLM server by customized LLM api.
+Qwen 3 and DeepSeek-R1-Distill-Llama are great! Highly recommended to try on your Mac, please ensure your Mac's memory at least 8G. If you want to use Gemma 3 or Qwen 3 on Windows, please use ollama or llama.cpp to deploy Qwen 3 locally first, and then connect to the local LLM server by customized LLM api.

 **Seamless Zotero Integration:**

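The added line above tells Windows users to deploy Qwen 3 locally with ollama or llama.cpp and then point the plugin at that server through its customized LLM API option. As a minimal sketch of what such a connection involves (the `qwen3:8b` model tag and ollama's default OpenAI-compatible endpoint on port 11434 are assumptions about a typical ollama setup, not details from this commit), the request a plugin would send to the local server could be built like this:

```python
import json

# Hypothetical defaults: "qwen3:8b" is an example ollama model tag, and
# http://localhost:11434 is ollama's default listen address; adjust both
# for your own deployment (llama.cpp's server uses a different port).
LOCAL_API = "http://localhost:11434/v1/chat/completions"

def chat_payload(prompt, model="qwen3:8b"):
    """Build an OpenAI-compatible chat request body for a local LLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete response instead of a token stream
    }

# Serialize the request body; POSTing this to LOCAL_API (e.g. with
# urllib.request or curl) keeps every token on the local machine,
# which is the privacy property the README paragraph emphasizes.
body = json.dumps(chat_payload("Summarize the attached PDF in two sentences."))
```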