
Commit 7d244b4

cosminacho and claude committed
fix: make max_tokens default to None and bump version to 0.10.5
Per review feedback: max_tokens=1000 was too prescriptive — leave it unset by default so the underlying client picks the right limit per model. temperature=0.0 and max_retries=3 defaults retained. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1 parent: 4cba52d

3 files changed: 6 additions & 8 deletions

pyproject.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 [project]
 name = "uipath-langchain"
-version = "0.10.4"
+version = "0.10.5"
 description = "Python SDK that enables developers to build and deploy LangGraph agents to the UiPath Cloud Platform"
 readme = { file = "README.md", content-type = "text/markdown" }
 requires-python = ">=3.11"
```

src/uipath_langchain/chat/chat_model_factory.py

Lines changed: 4 additions & 6 deletions
```diff
@@ -25,7 +25,6 @@
 
 _UNSET: Final[Any] = object()
 DEFAULT_TIMEOUT_SECONDS: Final[float] = 300.0
-DEFAULT_MAX_TOKENS: Final[int] = 1000
 DEFAULT_TEMPERATURE: Final[float] = 0.0
 DEFAULT_MAX_RETRIES: Final[int] = 3
 
@@ -40,7 +39,7 @@ def get_chat_model(
     api_flavor: ApiFlavor | str | None = None,
     custom_class: type[UiPathBaseChatModel] | None = None,
     temperature: float | None = DEFAULT_TEMPERATURE,
-    max_tokens: int | None = DEFAULT_MAX_TOKENS,
+    max_tokens: int | None = None,
     timeout: float | None = DEFAULT_TIMEOUT_SECONDS,
     max_retries: int | None = DEFAULT_MAX_RETRIES,
     callbacks: Callbacks = _UNSET,
@@ -63,9 +62,8 @@ def get_chat_model(
             instead of the auto-detected one.
         temperature: Sampling temperature. Defaults to 0.0. Pass ``None`` to
             omit the parameter when the underlying client supports it.
-        max_tokens: Maximum output tokens. Defaults to 1000 to match the
-            historical default from ``UiPathRequestMixin``. Pass ``None`` to
-            omit the limit when the underlying client supports it.
+        max_tokens: Maximum output tokens. Defaults to ``None`` (unset), which
+            lets the underlying client apply its own default.
         timeout: Request timeout in seconds. Defaults to 300 seconds.
         max_retries: Max retry count. Defaults to 3.
         callbacks: LangChain callbacks (handlers or a manager) attached to the
@@ -138,7 +136,7 @@ def _legacy_chat_model(
     return _legacy_get_chat_model(
         model,
         temperature if temperature is not _UNSET and temperature is not None else 0.0,
-        max_tokens if max_tokens is not _UNSET and max_tokens is not None else DEFAULT_MAX_TOKENS,
+        max_tokens if max_tokens is not _UNSET and max_tokens is not None else 0,
         agenthub_config,
         byo_connection_id,
         **kwargs,
```

uv.lock

Lines changed: 1 addition & 1 deletion
(Generated file; diff not rendered.)

0 commit comments
