
Commit 1321ed1 (parent 26a75f1)

Fixed tests and updated CONTRIBUTING.md.

2 files changed: 20 additions, 25 deletions

CONTRIBUTING.md

Lines changed: 12 additions & 21 deletions
````diff
@@ -1,6 +1,14 @@
-# How to develop on this project
+# Contributing to ReadmeReady
+
+ReadmeReady welcomes contributions from the community.
+
+# Questions and Reporting Issues
 
-readme_ready welcomes contributions from the community.
+Have a question? Have you identified a reproducible problem in ReadmeReady? Have a feature request? We want to hear about it!
+
+Submit a bug report or feature request on [GitHub Issues](https://github.com/souradipp76/ReadMeReady/issues).
+
+# How to develop on this project
 
 **You need PYTHON3!**
 
@@ -91,23 +99,6 @@ switch-to-poetry: ## Switch to poetry package manager.
 init: ## Initialize the project based on an application template.
 ```
 
-## Making a new release
-
-This project uses [semantic versioning](https://semver.org/) and tags releases with `X.Y.Z`
-Every time a new tag is created and pushed to the remote repo, github actions will
-automatically create a new release on github and trigger a release on PyPI.
-
-For this to work you need to setup a secret called `PYPI_API_TOKEN` on the project settings>secrets,
-this token can be generated on [pypi.org](https://pypi.org/account/).
-
-To trigger a new release all you need to do is.
-
-1. If you have changes to add to the repo
-    * Make your changes following the steps described above.
-    * Commit your changes following the [conventional git commit messages](https://www.conventionalcommits.org/en/v1.0.0/).
-2. Run the tests to ensure everything is working.
-4. Run `make release` to create a new tag and push it to the remote repo.
-
-the `make release` will ask you the version number to create the tag, ex: type `0.1.1` when you are asked.
+# Thank You!
 
-> **CAUTION**: The make release will change local changelog files and commit all the unstaged changes you have.
+Your contributions to open source, large or small, make great projects like this possible. Thank you for taking the time to contribute.
````

tests/utils/test_llm_utils.py

Lines changed: 8 additions & 4 deletions
```diff
@@ -25,7 +25,7 @@ def test_get_gemma_chat_model_with_peft():
         "device": "cpu",
         "peft_model_path": "path/to/peft/model",
     }
-    with patch(
+    with patch("sys.platform", "linux"), patch(
         "readme_ready.utils.llm_utils.hf_hub_download"
     ) as mock_hf_download, patch(
         "readme_ready.utils.llm_utils.get_tokenizer"
@@ -69,6 +69,7 @@ def test_get_gemma_chat_model_with_peft():
         gguf_file=model_kwargs["gguf_file"],
         trust_remote_code=True,
         device_map=model_kwargs["device"],
+        quantization_config=mock.ANY,
         token="test_token",
     )
     mock_peft_model.assert_called_once_with(
@@ -87,7 +88,7 @@ def test_get_gemma_chat_model_without_peft():
         "gguf_file": "some_file.gguf",
         "device": "cpu",
     }
-    with patch(
+    with patch("sys.platform", "linux"), patch(
         "readme_ready.utils.llm_utils.hf_hub_download"
     ) as mock_hf_download, patch(
         "readme_ready.utils.llm_utils.get_tokenizer"
@@ -128,6 +129,7 @@ def test_get_gemma_chat_model_without_peft():
         gguf_file=model_kwargs["gguf_file"],
         trust_remote_code=True,
         device_map=model_kwargs["device"],
+        quantization_config=mock.ANY,
         token="test_token",
     )
     mock_peft_model.assert_not_called()
@@ -205,7 +207,7 @@ def test_get_llama_chat_model_with_peft():
         "device": "cpu",
         "peft_model": "path/to/peft/model",
     }
-    with patch(
+    with patch("sys.platform", "linux"), patch(
         "readme_ready.utils.llm_utils.hf_hub_download"
     ) as mock_hf_download, patch(
         "readme_ready.utils.llm_utils.get_tokenizer"
@@ -252,6 +254,7 @@ def test_get_llama_chat_model_with_peft():
         gguf_file=model_kwargs["gguf_file"],
         trust_remote_code=True,
         device_map=model_kwargs["device"],
+        quantization_config=mock.ANY,
         token="test_token",
     )
     mock_peft_model.assert_called_once_with(
@@ -270,7 +273,7 @@ def test_get_llama_chat_model_without_peft():
         "gguf_file": "some_file.gguf",
         "device": "cpu",
     }
-    with patch(
+    with patch("sys.platform", "linux"), patch(
         "readme_ready.utils.llm_utils.hf_hub_download"
     ) as mock_hf_download, patch(
         "readme_ready.utils.llm_utils.get_tokenizer"
@@ -314,6 +317,7 @@ def test_get_llama_chat_model_without_peft():
         gguf_file=model_kwargs["gguf_file"],
         trust_remote_code=True,
         device_map=model_kwargs["device"],
+        quantization_config=mock.ANY,
         token="test_token",
     )
     mock_peft_model.assert_not_called()
```
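The test fixes above rely on two `unittest.mock` idioms: `patch("sys.platform", "linux")` temporarily rewrites the attribute so that platform-dependent branches take a known path on any host, and `mock.ANY` compares equal to anything, letting `assert_called_once_with` pin down the arguments that matter while ignoring the quantization config object's exact value. A minimal, self-contained sketch of the pattern (the `load_model` function below is a hypothetical stand-in, not part of ReadMeReady):

```python
import sys
from unittest import mock
from unittest.mock import patch


def load_model(loader, device):
    # Hypothetical stand-in for the code under test: it builds a
    # quantization config only on Linux, then calls the loader.
    quant = {"bits": 4} if sys.platform == "linux" else None
    return loader(device_map=device, quantization_config=quant)


# Force the Linux branch regardless of the host OS, just as the
# tests do with `patch("sys.platform", "linux")`.
with patch("sys.platform", "linux"):
    loader = mock.MagicMock()
    load_model(loader, device="cpu")

# mock.ANY matches any value, so the assertion checks device_map
# exactly while accepting whatever config object was produced.
loader.assert_called_once_with(device_map="cpu", quantization_config=mock.ANY)
```

Stacking several `patch(...)` context managers in one `with` statement, as the tests do, applies them left to right and undoes them automatically when the block exits.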
