Conversation

mare5x (Contributor) commented Nov 21, 2025:

  • `uv run pytest -v -m "not apikey"` failed for some tests because `ChatOpenAI` raises an error when no API key is detected, even if the model is never used.

mare5x requested review from a team and kosstbarz on November 21, 2025 at 12:00.
```bash
uv run pytest -v -m "not apikey"
```
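For context, a minimal sketch of how a marker like `apikey` is usually registered and applied (hypothetical file and test names; the repo may declare the marker in its pytest configuration instead):

```python
# Hypothetical conftest.py (not the repo's actual configuration)
import pytest


def pytest_configure(config: pytest.Config) -> None:
    # Register the marker so `-m "not apikey"` can deselect these tests
    # without pytest warning about an unknown mark.
    config.addinivalue_line("markers", "apikey: tests that require a real API key")


# In a test module, tests that need a real key are then marked:
@pytest.mark.apikey
def test_real_chat_completion() -> None:
    ...  # calls the real API; deselected by `uv run pytest -m "not apikey"`
```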

Contributor commented:

Why did you delete it?

mare5x (Contributor Author) commented:

In my opinion, it is a useless section that will constantly be out of date (as it already is: there is no agents dir). I predicted that this would happen in #76 (comment).

```diff
 @pytest.mark.parametrize("path", example_llm_config_paths, ids=[path.name for path in example_llm_config_paths])
-def test_example_llm_configs(path: Path) -> None:
+def test_example_llm_configs(path: Path, monkeypatch: pytest.MonkeyPatch) -> None:
+    monkeypatch.setenv("OPENAI_API_KEY", "test_key")
```
Contributor commented:

I remember we had a problem with monkeypatch: it overwrote the real API key for the smoke tests.
Let's think about how to keep the tests independent.

mare5x (Contributor Author) commented:

From the documentation:

> All modifications will be undone after the requesting test function or fixture has finished.

I believe the problem before was with `autouse=True` in a fixture and not with `monkeypatch`.
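
To illustrate the scoping (a hypothetical standalone test module, not code from this repo): `monkeypatch.setenv` only lasts for the test that requested the fixture.

```python
import os

import pytest


def test_sets_fake_key(monkeypatch: pytest.MonkeyPatch) -> None:
    # The change is scoped to this test only.
    monkeypatch.setenv("OPENAI_API_KEY", "test_key")
    assert os.environ["OPENAI_API_KEY"] == "test_key"


def test_env_is_restored() -> None:
    # pytest undid the setenv after the previous test finished,
    # so the variable is back to its original value (or unset).
    assert os.environ.get("OPENAI_API_KEY") != "test_key"
```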

Contributor commented:

About `autouse`:

> autouse – If True, the fixture func is activated for all tests that can see it.

I'm not sure it should work for tests in different files, but that was the case.
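
As a rough sketch of that visibility rule (hypothetical layout, not this repo's): an autouse fixture defined in a conftest.py runs for every test module in that directory and below, which is how it could overwrite the real key for smoke tests in other files.

```python
# tests/conftest.py (hypothetical)
import pytest


@pytest.fixture(autouse=True)
def fake_openai_key(monkeypatch: pytest.MonkeyPatch) -> None:
    # autouse + conftest.py means this runs for every test under tests/,
    # including smoke tests in other files that expect the real key.
    monkeypatch.setenv("OPENAI_API_KEY", "test_key")
```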

Contributor commented:

Actually, I can't reproduce this problem with `monkeypatch` + `autouse`. I'm probably missing some details.

mare5x (Contributor Author) commented Nov 21, 2025:

@kosstbarz I figured it out. The problem was the `@cached_property` in `LLMConfig`: https://github.com/JetBrains/databao/blob/7fde8ffb8f23aeb2ec47e60c2d4ab1ab1021d985/databao/configs/llm.py#L61
`LLMConfig.chat_model` was returning the cached LLM from whichever test ran first, because the cache persists across all tests.

I will open a PR (#141) to remove `@cached_property` and avoid similar problems in the future.
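
For illustration only, a simplified stand-in for the pitfall (hypothetical code, not the real `LLMConfig`), assuming the config instance is shared between tests: the first access populates the cache, and later tests see that first result regardless of their own environment.

```python
import os
from functools import cached_property


class SharedConfig:  # simplified stand-in, not databao's LLMConfig
    @cached_property
    def chat_model(self) -> str:
        # Computed once per instance; subsequent accesses return the cached
        # value even if OPENAI_API_KEY has changed in the meantime.
        return f"model(key={os.environ.get('OPENAI_API_KEY')})"


config = SharedConfig()  # shared across tests, e.g. created at import time

os.environ["OPENAI_API_KEY"] = "key-from-test-1"
print(config.chat_model)  # model(key=key-from-test-1)

os.environ["OPENAI_API_KEY"] = "key-from-test-2"
print(config.chat_model)  # still model(key=key-from-test-1), served from the cache
```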

mare5x merged commit fe3a077 into main on Nov 21, 2025. 8 checks passed.
mare5x deleted the mhostnik/fix-tests-and-readme branch on November 21, 2025 at 13:38.