Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can change this behavior in the CodeRabbit configuration.
📝 Walkthrough

Adds a global retry configuration and runtime retry wrapper for LLM calls, integrates Memiris into the chat pipeline (conditional tooling, accessed-memory propagation, prompt exposure), augments system prompt templates and focus content, and adds RetryConfig/LlmConfigService plus provider retries and extensive unit tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant ChatAgent as Chat Agent Pipeline
    participant MemoryStorage as Memory Storage
    participant SystemPrompt as System Prompt Renderer
    participant LLMClient as LLM Provider Client
    participant RetryHandler as Retry Handler
    User->>ChatAgent: Send message (check user.memiris_enabled)
    alt memiris enabled
        ChatAgent->>MemoryStorage: Initialize accessed_memory_storage
        ChatAgent->>MemoryStorage: Query existing memories
        MemoryStorage-->>ChatAgent: Return memories
        ChatAgent->>SystemPrompt: Inject allow_memiris_tool + memiris guidance
        ChatAgent->>ChatAgent: Append memory_search/find_similar_memories tools
    else memiris disabled
        ChatAgent->>SystemPrompt: Inject memiris-unavailable guidance
    end
    ChatAgent->>LLMClient: Send chat request (prompt + tools)
    LLMClient->>RetryHandler: call_with_retry(API call)
    alt retry attempt fails and attempts < max_attempts
        RetryHandler->>RetryHandler: Sleep (exponential backoff)
        RetryHandler->>LLMClient: Retry API call
    else success or exhausted
        RetryHandler-->>LLMClient: Return result or raise
    end
    LLMClient-->>ChatAgent: Return response
    ChatAgent-->>User: Deliver response
```
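The retry loop in the diagram above can be sketched as a small standalone helper. This is an illustrative sketch, not the PR's actual implementation; the names `RetryConfig` and `call_with_retry` mirror the walkthrough, but the fields and signature are assumptions:

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Tuple, Type, TypeVar

T = TypeVar("T")


@dataclass
class RetryConfig:
    max_attempts: int = 3
    initial_delay: float = 1.0   # seconds before the first retry
    backoff_factor: float = 2.0  # delay multiplier after each failed attempt
    exceptions: Tuple[Type[BaseException], ...] = field(
        default_factory=lambda: (ConnectionError, TimeoutError)
    )


def call_with_retry(fn: Callable[[], T], config: RetryConfig) -> T:
    """Run fn, retrying with exponential backoff on the configured exceptions."""
    delay = config.initial_delay
    for attempt in range(1, config.max_attempts + 1):
        try:
            return fn()
        except config.exceptions:
            if attempt == config.max_attempts:
                raise  # exhausted: propagate the last error to the caller
            time.sleep(delay)
            delay *= config.backoff_factor
    raise AssertionError("unreachable")
```

With `max_attempts=3` and the defaults above, a persistently failing call sleeps 1.0s and 2.0s between attempts before the final error is raised.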
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 3
🧹 Nitpick comments (4)
iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py (2)
262-267: Avoid recomputing Memiris availability twice per run.
`state.memiris_wrapper.has_memories()` is called in both tool assembly and prompt building; cache it once to avoid duplicate backend lookups and a possible tool/prompt mismatch on transient failures.

♻️ Suggested caching approach
```diff
-        allow_memiris_tool = bool(
-            state.dto.user
-            and state.dto.user.memiris_enabled
-            and state.memiris_wrapper
-            and state.memiris_wrapper.has_memories()
-        )
+        allow_memiris_tool = getattr(state, "allow_memiris_tool", None)
+        if allow_memiris_tool is None:
+            allow_memiris_tool = bool(
+                state.dto.user
+                and state.dto.user.memiris_enabled
+                and state.memiris_wrapper
+                and state.memiris_wrapper.has_memories()
+            )
+            state.allow_memiris_tool = allow_memiris_tool
```

Also applies to: 319-324
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py` around lines 262 - 267, Cache the result of state.memiris_wrapper.has_memories() in a local variable (e.g. memiris_has_memories) and use that variable both when computing allow_memiris_tool (the existing expression using state.dto.user, state.dto.user.memiris_enabled, state.memiris_wrapper) and later when building the prompt so you don't call state.memiris_wrapper.has_memories() twice; update references in the tool-assembly code path and the prompt-building code path (the locations that currently call has_memories()) to read from the cached memiris_has_memories variable to avoid duplicate backend lookups and transient mismatch.
208-209: Prefer direct attribute assignment over constant-name `setattr`.

This triggers Ruff B010 and reduces readability for no benefit at line 209.

♻️ Suggested cleanup
```diff
-        if not hasattr(state, "accessed_memory_storage"):
-            setattr(state, "accessed_memory_storage", [])
+        if not hasattr(state, "accessed_memory_storage"):
+            state.accessed_memory_storage = []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py` around lines 208 - 209, The code uses setattr(state, "accessed_memory_storage", []) which triggers Ruff B010; replace that call with a direct attribute assignment on the state object: after verifying hasattr(state, "accessed_memory_storage") is False, set state.accessed_memory_storage = [] (preserve the existing hasattr check and the same attribute name) to improve readability and satisfy the linter.

memiris/src/memiris/llm/retry_config.py (2)
118-124: Minor: Replace en-dash with hyphen-minus in log message.

Line 119 uses an en-dash (`–`, U+2013) instead of a standard hyphen-minus (`-`, U+002D). This could cause display issues in some terminals or log viewers.

🔧 Fix
```diff
         logger.warning(
-            "LLM call failed (attempt %d/%d): %s – retrying in %.1fs …",
+            "LLM call failed (attempt %d/%d): %s - retrying in %.1fs ...",
             attempt,
             config.max_attempts,
             exc,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@memiris/src/memiris/llm/retry_config.py` around lines 118 - 124, The log message in the retry logic uses an en-dash character in the string passed to logger.warning; update the format string used in retry_config.py (the logger.warning call in the retry loop) to replace the en-dash (U+2013) with a standard hyphen-minus (`-`) so the message becomes "LLM call failed (attempt %d/%d): %s - retrying in %.1fs …", keeping the same interpolation variables (attempt, config.max_attempts, exc, delay) and leaving the rest of the logger.warning call unchanged.
44-46: Consider narrowing the default exception types for retries.

The default `exceptions=(Exception,)` will retry on any exception, including programming errors like `ValueError`, `TypeError`, or `KeyError` that typically indicate bugs rather than transient failures. Consider defaulting to network/API-specific exceptions that are actually transient.

💡 Example: More targeted default exceptions
```diff
     exceptions: Tuple[Type[BaseException], ...] = field(
-        default_factory=lambda: (Exception,)
+        default_factory=lambda: (ConnectionError, TimeoutError, OSError)
     )
```

Alternatively, document that users should explicitly configure `exceptions` for production use with the specific exception types from their LLM provider (e.g., `openai.APIConnectionError`, `openai.RateLimitError`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@memiris/src/memiris/llm/retry_config.py` around lines 44 - 46, The current default for the exceptions field in retry_config (exceptions: Tuple[Type[BaseException], ...]) is too broad; change the default_factory so retries target transient/network/provider errors instead of all Exceptions—e.g., use provider/network-specific exceptions like requests.exceptions.RequestException, httpx.HTTPError and common LLM API errors (openai.APIConnectionError, openai.RateLimitError) or make exceptions required (no permissive default) and document that callers must supply provider-specific exception types; update the exceptions field default and README/docs accordingly and reference the field name "exceptions" in retry_config.py (and any RetryConfig class) when making the change.
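To see why the broad default matters, here is a self-contained toy retry helper (not memiris' actual code) demonstrating that `exceptions=(Exception,)` also re-runs calls that fail with a programming error such as `TypeError`, while a narrowed tuple surfaces the bug immediately:

```python
from typing import Callable, Tuple, Type, TypeVar

T = TypeVar("T")


def retry(fn: Callable[[], T],
          exceptions: Tuple[Type[BaseException], ...],
          max_attempts: int = 3) -> T:
    """Tiny retry helper used only to demonstrate exception filtering."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except exceptions:
            # Only the configured exception types are swallowed and retried;
            # anything else propagates on the first failure.
            if attempt == max_attempts:
                raise
    raise AssertionError("unreachable")
```

With `(Exception,)` a `TypeError` is retried `max_attempts` times before surfacing; with `(ConnectionError, TimeoutError)` it escapes on the first call.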
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@memiris/tests/memiris_tests/api/test_llm_config_service.py`:
- Around line 12-16: The fixture reset_retry_config currently only restores
global retry config after yielding, so tests can inherit mutated state; modify
the reset_retry_config fixture to call set_retry_config(RetryConfig()) before
the yield as well as after it (i.e., perform a pre-reset then yield then
post-reset) so global state is reset both before and after each test; locate the
reset_retry_config fixture in the test file and add the pre-yield
set_retry_config(RetryConfig()) call.
In `@memiris/tests/memiris_tests/llm_tests/test_retry_config.py`:
- Line 15: Replace the EN DASH (–) characters in the test file comments with a
normal hyphen (-) to satisfy Ruff RUF003; specifically update the comment text
"RetryConfig – construction & validation" (and the other comment containing "–"
around line 169) to use "RetryConfig - construction & validation" (and the
corresponding comment) so only ASCII hyphens remain.
- Around line 23-28: The test currently asserts RetryConfig() defaults
exceptions to (Exception,), which is too broad; update the RetryConfig default
to only include transient exception classes (e.g., network/timeout/temporary
errors used in the codebase) and change the test in test_retry_config.py to
assert cfg.exceptions equals that narrower tuple; locate the RetryConfig class
and its constructor to replace the default exceptions parameter and adjust the
test assertion to match the new transient-only default (referencing RetryConfig
and cfg in the test).
---
Nitpick comments:
In `@iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py`:
- Around line 262-267: Cache the result of state.memiris_wrapper.has_memories()
in a local variable (e.g. memiris_has_memories) and use that variable both when
computing allow_memiris_tool (the existing expression using state.dto.user,
state.dto.user.memiris_enabled, state.memiris_wrapper) and later when building
the prompt so you don't call state.memiris_wrapper.has_memories() twice; update
references in the tool-assembly code path and the prompt-building code path (the
locations that currently call has_memories()) to read from the cached
memiris_has_memories variable to avoid duplicate backend lookups and transient
mismatch.
- Around line 208-209: The code uses setattr(state, "accessed_memory_storage",
[]) which triggers Ruff B010; replace that call with a direct attribute
assignment on the state object: after verifying hasattr(state,
"accessed_memory_storage") is False, set state.accessed_memory_storage = []
(preserve the existing hasattr check and the same attribute name) to improve
readability and satisfy the linter.
In `@memiris/src/memiris/llm/retry_config.py`:
- Around line 118-124: The log message in the retry logic uses an en-dash
character in the string passed to logger.warning; update the format string used
in retry_config.py (the logger.warning call in the retry loop) to replace the
en-dash (U+2013) with a standard hyphen-minus (`-`) so the message becomes "LLM
call failed (attempt %d/%d): %s - retrying in %.1fs …", keeping the same
interpolation variables (attempt, config.max_attempts, exc, delay) and leaving
the rest of the logger.warning call unchanged.
- Around line 44-46: The current default for the exceptions field in
retry_config (exceptions: Tuple[Type[BaseException], ...]) is too broad; change
the default_factory so retries target transient/network/provider errors instead
of all Exceptions—e.g., use provider/network-specific exceptions like
requests.exceptions.RequestException, httpx.HTTPError and common LLM API errors
(openai.APIConnectionError, openai.RateLimitError) or make exceptions required
(no permissive default) and document that callers must supply provider-specific
exception types; update the exceptions field default and README/docs accordingly
and reference the field name "exceptions" in retry_config.py (and any
RetryConfig class) when making the change.
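The pre-/post-reset suggested above for the `reset_retry_config` fixture could look like the sketch below. `RetryConfig` and `set_retry_config` are stubbed here, since memiris' real implementations aren't shown in this review; in the actual test file the function would carry `@pytest.fixture(autouse=True)`:

```python
from dataclasses import dataclass


# Stand-ins for memiris' real RetryConfig / set_retry_config (assumed names
# taken from the review comment above).
@dataclass
class RetryConfig:
    max_attempts: int = 3


_global_config = RetryConfig()


def set_retry_config(config: RetryConfig) -> None:
    global _global_config
    _global_config = config


def get_retry_config() -> RetryConfig:
    return _global_config


# In the real test file: @pytest.fixture(autouse=True)
def reset_retry_config():
    set_retry_config(RetryConfig())  # pre-reset: drop state leaked by earlier tests
    yield
    set_retry_config(RetryConfig())  # post-reset: undo this test's mutations
```

Resetting on both sides of the `yield` means a test cannot inherit a mutated global config even if a previous test forgot to clean up.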
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0448aea6-2ffb-4689-b9f4-9fba649bdaad
📒 Files selected for processing (12)
- iris/src/iris/common/memiris_setup.py
- iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py
- iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2
- iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2
- memiris/src/memiris/api/llm_config_service.py
- memiris/src/memiris/llm/ollama_language_model.py
- memiris/src/memiris/llm/openai_language_model.py
- memiris/src/memiris/llm/retry_config.py
- memiris/tests/memiris_tests/api/test_llm_config_service.py
- memiris/tests/memiris_tests/llm_tests/test_ollama_language_model.py
- memiris/tests/memiris_tests/llm_tests/test_openai_language_model.py
- memiris/tests/memiris_tests/llm_tests/test_retry_config.py
Actionable comments posted: 1
🧹 Nitpick comments (1)
iris/src/iris/common/memiris_setup.py (1)
53-60: Extract the shared prompt policy into one reusable suffix.

These three additions encode the same “stay on topic / explicit only / empty result” policy in parallel. That will drift quickly the next time the prompt rules change. A shared constant or small prompt-builder helper would keep personal details, requirements, and facts aligned.
Also applies to: 74-80, 93-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@iris/src/iris/common/memiris_setup.py` around lines 53 - 60, Extract the repeated "STAY ON TOPIC..." policy text into a single reusable constant or small helper and replace the three inline copies with references to that symbol (e.g., PROMPT_SUFFIX or build_prompt_suffix()); update iris.common.memiris_setup.py so the original duplicate blocks at the locations around the current lines 53, 74-80, and 93-101 are replaced by a single source of truth, and ensure any callers use the constant/helper name to assemble the final prompt.
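A minimal shape for the suggested single source of truth (the constant name and the policy wording below are illustrative, not the actual prompt text from memiris_setup.py):

```python
# Single source of truth for the shared memory-tool policy.
# The wording here is a placeholder; the real text lives in memiris_setup.py.
MEMORY_POLICY_SUFFIX = (
    " Stay on topic, only use details the student shared explicitly,"
    " and handle an empty result gracefully."
)


def with_memory_policy(description: str) -> str:
    """Append the shared policy suffix to a tool or prompt description."""
    return description.rstrip() + MEMORY_POLICY_SUFFIX


# All three descriptions now reference the same policy text:
personal_details_prompt = with_memory_policy("Search the student's personal details.")
requirements_prompt = with_memory_policy("Search remembered requirements.")
facts_prompt = with_memory_policy("Search remembered facts.")
```

A future change to the policy then touches one constant instead of three parallel prompt blocks.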
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@iris/src/iris/common/memiris_setup.py`:
- Around line 107-110: The module-level call setting global retry params should
not run on import; remove or guard the direct call to
LlmConfigService.configure_retry_params in memiris_setup.py and instead invoke
it from an explicit Iris startup/init path or a one-time initializer that checks
settings.memiris (e.g., add a function like init_memiris_retry() called during
app startup). Ensure the initializer uses a thread-safe one-time guard so the
singleton is configured only once and only when settings.memiris is enabled,
referencing LlmConfigService.configure_retry_params and the settings.memiris
flag.
---
Nitpick comments:
In `@iris/src/iris/common/memiris_setup.py`:
- Around line 53-60: Extract the repeated "STAY ON TOPIC..." policy text into a
single reusable constant or small helper and replace the three inline copies
with references to that symbol (e.g., PROMPT_SUFFIX or build_prompt_suffix());
update iris.common.memiris_setup.py so the original duplicate blocks at the
locations around the current lines 53, 74-80, and 93-101 are replaced by a
single source of truth, and ensure any callers use the constant/helper name to
assemble the final prompt.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 51b19213-2027-46aa-9327-3aff9f3837fa
📒 Files selected for processing (1)
iris/src/iris/common/memiris_setup.py
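The explicit, thread-safe one-time initializer suggested in the inline comment above could be sketched like this. `init_memiris_retry` and `settings.memiris` follow the comment's naming, and `configure` stands in for `LlmConfigService.configure_retry_params`, so the exact signatures are assumptions:

```python
import threading
from typing import Callable

_init_lock = threading.Lock()
_initialized = False


def init_memiris_retry(settings, configure: Callable[[], None]) -> bool:
    """Configure Memiris retry params at most once, and only if enabled.

    Meant to be called from an explicit app-startup path instead of running
    at module import time. Returns True only on the call that actually
    performed the configuration.
    """
    global _initialized
    if not getattr(settings, "memiris", False):
        return False  # feature disabled: never touch the singleton
    with _init_lock:
        if _initialized:
            return False  # already configured by an earlier call
        configure()
        _initialized = True
        return True
```

Because the flag is checked under the lock, concurrent startup paths cannot configure the singleton twice, and importing the module alone has no side effects.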
Actionable comments posted: 2
🧹 Nitpick comments (1)
iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2 (1)
100-105: Scope the “don’t say last time” rule to Memiris-sourced memories only.

Line 102 and Line 104 currently ban temporal phrasing globally, which can clash with legitimate references to visible chat history in the same conversation.
Proposed prompt fix
```diff
-✗ DON'T say: "According to Memiris...", "My memory system shows...", "I accessed your memory...", "Last time we talked..."
+✗ DON'T say: "According to Memiris...", "My memory system shows...", "I accessed your memory..."

-Avoid time-based references like "last time" or "before" - memories aren't necessarily chronological. Just naturally know things about the student, as a human tutor would after working with someone for a while.
+Avoid time-based references like "last time" or "before" when drawing on Memiris-sourced memories, since they aren't necessarily chronological. For explicit chat-history context in this conversation, normal temporal phrasing is fine.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2` around lines 100 - 105, The prompt currently bans time-based phrasing globally; restrict that ban to Memiris-sourced memories only by changing the sentence "Avoid time-based references like 'last time' or 'before' - memories aren't necessarily chronological." so it reads something like "Avoid time-based references like 'last time' or 'before' when referring to Memiris-sourced memories — visible chat history may be referenced normally." Update the template block in course_chat_system_prompt.j2 between the "When you do explicitly reference what you know:" section and the "{% else %}" token, preserving the examples and negative wording but scoping the temporal restriction to Memiris, and keep the example forbidden phrases ("According to Memiris...", "My memory system shows...") unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2`:
- Around line 83-90: The prompt currently forces an unconditional memory lookup
by the hard requirement "You must make use of Memiris!" in
course_chat_system_prompt.j2; change that sentence to an optional,
guidance-style instruction (e.g., "When relevant, check Memiris for memories
before responding") so Memiris is suggested rather than required, and ensure
wording aligns with the later "use tools if useful" policy (lines referenced
around the later tool guidance) to avoid mandatory lookups and reduce
latency/failure coupling.
- Around line 108-110: The prompt section titled "How to Handle Not Having
Memories" incorrectly states that no memory capabilities exist; update the
wording in the course_chat_system_prompt.j2 template (the "How to Handle Not
Having Memories" block) to clarify that only long-term memory is unavailable in
the no-Memiris path while in-chat conversation history (chat history provided
around lines ~135-140) should still be used for context; revise the sentence to
explicitly say "long-term memory is unavailable, but use the provided chat
history for in-conversation context" and ensure any subsequent guidance
references using the in-chat history rather than discarding all memory.
---
Nitpick comments:
In `@iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2`:
- Around line 100-105: The prompt currently bans time-based phrasing globally;
restrict that ban to Memiris-sourced memories only by changing the sentence
"Avoid time-based references like 'last time' or 'before' - memories aren't
necessarily chronological." so it reads something like "Avoid time-based
references like 'last time' or 'before' when referring to Memiris-sourced
memories — visible chat history may be referenced normally." Update the template
block in course_chat_system_prompt.j2 between the "When you do explicitly
reference what you know:" section and the "{% else %}" token, preserving the
examples and negative wording but scoping the temporal restriction to Memiris,
and keep the example forbidden phrases ("According to Memiris...", "My memory
system shows...") unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 69f25b7d-c595-4560-8c4c-9366cf99c2e1
📒 Files selected for processing (2)
- iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2
- iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2
🚧 Files skipped from review as they are similar to previous changes (1)
- iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2
Memiris: Add retry mechanism and improve prompting
Memiris: Harden memory management
There hasn't been any activity on this pull request recently. Therefore, this pull request has been automatically marked as stale and will be closed if no further activity occurs within seven days. Thank you for your contributions.
Motivation
Memiris lacked a retry mechanism, so a single failed request caused the entire memory-creation run to fail.
Additionally, there were several edge cases that could cause the memory creation to fail or be of lower quality.
Description
Summary by CodeRabbit
New Features
Improvements
Tests