Commit f53ecd4
[DATALAD RUNCMD] chore: run codespell throughout fixing a few new typos automagically
=== Do not change lines below === { "chain": [], "cmd": "codespell -w", "exit": 0, "extra_inputs": [], "inputs": [], "outputs": [], "pwd": "." } ^^^ Do not change lines above ^^^
1 parent ef129b9 commit f53ecd4
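The commit message records a `codespell -w` run (`-w`/`--write-changes` writes fixes in place). Conceptually, codespell matches tokens against a dictionary of known misspellings and substitutes the corrections. A toy Python sketch of that idea — not codespell's actual implementation; the dictionary below merely mirrors the typos fixed in this commit:

```python
import re

# Known misspelling -> correction; a tiny stand-in for codespell's
# bundled dictionary, seeded with the typos this commit fixes.
MISSPELLINGS = {
    "vectore": "vector",
    "serveral": "several",
    "concatinated": "concatenated",
    "shoudl": "should",
    "embedd": "embed",
    "overriden": "overridden",
}

def fix_typos(text: str) -> str:
    # Match whole words only, so e.g. "embedding" is left untouched.
    pattern = re.compile(r"\b(" + "|".join(MISSPELLINGS) + r")\b")
    return pattern.sub(lambda m: MISSPELLINGS[m.group(1)], text)

print(fix_typos("Add clickhouse support as vectore store"))
# -> Add clickhouse support as vector store
```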

File tree

4 files changed, +7 -7 lines changed

CHANGELOG.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -37,7 +37,7 @@
 * prompt_style applied to all LLMs + extra LLM params. ([#1835](https://github.com/zylon-ai/private-gpt/issues/1835)) ([e21bf20](https://github.com/zylon-ai/private-gpt/commit/e21bf20c10938b24711d9f2c765997f44d7e02a9))
 * **recipe:** add our first recipe `Summarize` ([#2028](https://github.com/zylon-ai/private-gpt/issues/2028)) ([8119842](https://github.com/zylon-ai/private-gpt/commit/8119842ae6f1f5ecfaf42b06fa0d1ffec675def4))
 * **vectordb:** Milvus vector db Integration ([#1996](https://github.com/zylon-ai/private-gpt/issues/1996)) ([43cc31f](https://github.com/zylon-ai/private-gpt/commit/43cc31f74015f8d8fcbf7a8ea7d7d9ecc66cf8c9))
-* **vectorstore:** Add clickhouse support as vectore store ([#1883](https://github.com/zylon-ai/private-gpt/issues/1883)) ([2612928](https://github.com/zylon-ai/private-gpt/commit/26129288394c7483e6fc0496a11dc35679528cc1))
+* **vectorstore:** Add clickhouse support as vector store ([#1883](https://github.com/zylon-ai/private-gpt/issues/1883)) ([2612928](https://github.com/zylon-ai/private-gpt/commit/26129288394c7483e6fc0496a11dc35679528cc1))


 ### Bug Fixes
@@ -70,7 +70,7 @@
 * **docs:** upgrade fern ([#1596](https://github.com/zylon-ai/private-gpt/issues/1596)) ([84ad16a](https://github.com/zylon-ai/private-gpt/commit/84ad16af80191597a953248ce66e963180e8ddec))
 * **ingest:** Created a faster ingestion mode - pipeline ([#1750](https://github.com/zylon-ai/private-gpt/issues/1750)) ([134fc54](https://github.com/zylon-ai/private-gpt/commit/134fc54d7d636be91680dc531f5cbe2c5892ac56))
 * **llm - embed:** Add support for Azure OpenAI ([#1698](https://github.com/zylon-ai/private-gpt/issues/1698)) ([1efac6a](https://github.com/zylon-ai/private-gpt/commit/1efac6a3fe19e4d62325e2c2915cd84ea277f04f))
-* **llm:** adds serveral settings for llamacpp and ollama ([#1703](https://github.com/zylon-ai/private-gpt/issues/1703)) ([02dc83e](https://github.com/zylon-ai/private-gpt/commit/02dc83e8e9f7ada181ff813f25051bbdff7b7c6b))
+* **llm:** adds several settings for llamacpp and ollama ([#1703](https://github.com/zylon-ai/private-gpt/issues/1703)) ([02dc83e](https://github.com/zylon-ai/private-gpt/commit/02dc83e8e9f7ada181ff813f25051bbdff7b7c6b))
 * **llm:** Ollama LLM-Embeddings decouple + longer keep_alive settings ([#1800](https://github.com/zylon-ai/private-gpt/issues/1800)) ([b3b0140](https://github.com/zylon-ai/private-gpt/commit/b3b0140e244e7a313bfaf4ef10eb0f7e4192710e))
 * **llm:** Ollama timeout setting ([#1773](https://github.com/zylon-ai/private-gpt/issues/1773)) ([6f6c785](https://github.com/zylon-ai/private-gpt/commit/6f6c785dac2bbad37d0b67fda215784298514d39))
 * **local:** tiktoken cache within repo for offline ([#1467](https://github.com/zylon-ai/private-gpt/issues/1467)) ([821bca3](https://github.com/zylon-ai/private-gpt/commit/821bca32e9ee7c909fd6488445ff6a04463bf91b))
```

private_gpt/components/llm/custom/sagemaker.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -62,7 +62,7 @@ class LineIterator:
     within the buffer via the 'scan_lines' function. It maintains the position of the
     last read position to ensure that previous bytes are not exposed again. It will
     also save any pending lines that doe not end with a '\n' to make sure truncations
-    are concatinated
+    are concatenated
     """

     def __init__(self, stream: Any) -> None:
```
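The docstring being corrected describes a common streaming pattern: buffer incoming byte chunks, yield only complete `'\n'`-terminated lines, and carry any truncated tail over to the next chunk so the fragments are concatenated. A minimal generator sketch of that idea (illustrative only, not the SageMaker `LineIterator` class itself):

```python
from typing import Iterable, Iterator

def iter_lines(chunks: Iterable[bytes]) -> Iterator[bytes]:
    """Yield complete lines from a stream of byte chunks.

    A partial line at the end of one chunk is held in the buffer and
    concatenated with the next chunk, mirroring the behaviour the
    LineIterator docstring describes.
    """
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while True:
            line, sep, rest = buffer.partition(b"\n")
            if not sep:  # no newline yet: keep accumulating
                break
            yield line
            buffer = rest
    if buffer:  # trailing line without a final newline
        yield buffer

# The truncated "wor" + "ld\n" pair is stitched back together:
print(list(iter_lines([b"hello\nwor", b"ld\n", b"tail"])))
# -> [b'hello', b'world', b'tail']
```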

private_gpt/settings/settings.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -145,7 +145,7 @@ class LLMSettings(BaseModel):
         "If `llama2` - use the llama2 prompt style from the llama_index. Based on `<s>`, `[INST]` and `<<SYS>>`.\n"
         "If `llama3` - use the llama3 prompt style from the llama_index."
         "If `tag` - use the `tag` prompt style. It should look like `<|role|>: message`. \n"
-        "If `mistral` - use the `mistral prompt style. It shoudl look like <s>[INST] {System Prompt} [/INST]</s>[INST] { UserInstructions } [/INST]"
+        "If `mistral` - use the `mistral prompt style. It should look like <s>[INST] {System Prompt} [/INST]</s>[INST] { UserInstructions } [/INST]"
         "`llama2` is the historic behaviour. `default` might work better with your custom models."
     ),
 )
```

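The corrected help text spells out the `mistral` prompt layout: `<s>[INST] {System Prompt} [/INST]</s>[INST] { UserInstructions } [/INST]`. A small formatter producing that shape — a sketch for illustration, not private-gpt's actual `prompt_style` implementation:

```python
def format_mistral_prompt(system_prompt: str, user_instructions: str) -> str:
    # Shape taken from the settings help text:
    # <s>[INST] {System Prompt} [/INST]</s>[INST] { UserInstructions } [/INST]
    return (
        f"<s>[INST] {system_prompt} [/INST]</s>"
        f"[INST] {user_instructions} [/INST]"
    )

print(format_mistral_prompt("You are helpful.", "Summarize this file."))
# -> <s>[INST] You are helpful. [/INST]</s>[INST] Summarize this file. [/INST]
```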

```diff
@@ -214,7 +214,7 @@ class EmbeddingSettings(BaseModel):
         "If `batch` - if multiple files, parse all the files in parallel, "
         "and send them in batch to the embedding model.\n"
         "In `pipeline` - The Embedding engine is kept as busy as possible\n"
-        "If `parallel` - parse the files in parallel using multiple cores, and embedd them in parallel.\n"
+        "If `parallel` - parse the files in parallel using multiple cores, and embed them in parallel.\n"
         "`parallel` is the fastest mode for local setup, as it parallelize IO RW in the index.\n"
         "For modes that leverage parallelization, you can specify the number of "
         "workers to use with `count_workers`.\n"
```
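The help text distinguishes `batch`, `pipeline`, and `parallel` ingestion modes; in `parallel`, files are parsed across workers and the results are embedded in parallel, with the pool size set via `count_workers`. A toy fan-out sketch of that mode, with stand-in `parse`/`embed` functions (not private-gpt's ingest component; a thread pool is used here for simplicity, where real CPU-bound parsing would favour processes):

```python
from concurrent.futures import ThreadPoolExecutor

def parse(path: str) -> str:
    # Stand-in parser: real code would extract text from the file.
    return f"text-of-{path}"

def embed(text: str) -> list[float]:
    # Stand-in embedder: real code would call an embedding model.
    return [float(len(text))]

def ingest_parallel(paths: list[str], count_workers: int = 2) -> list[list[float]]:
    # `parallel` mode: parse the files in parallel, then embed the
    # parsed texts in parallel, using `count_workers` workers.
    with ThreadPoolExecutor(max_workers=count_workers) as pool:
        texts = list(pool.map(parse, paths))
        return list(pool.map(embed, texts))

print(ingest_parallel(["a.md", "b.md"]))
# -> [[12.0], [12.0]]
```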

tests/settings/test_settings.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -7,6 +7,6 @@ def test_settings_are_loaded_and_merged() -> None:


 def test_settings_can_be_overriden(injector: MockInjector) -> None:
-    injector.bind_settings({"server": {"env_name": "overriden"}})
+    injector.bind_settings({"server": {"env_name": "overridden"}})
     mocked_settings = injector.get(Settings)
-    assert mocked_settings.server.env_name == "overriden"
+    assert mocked_settings.server.env_name == "overridden"
```
