
Memiris: Harden memory management#456

Open
Hialus wants to merge 19 commits into main from chore/memiris/improve-prompts

Conversation


@Hialus Hialus commented Mar 12, 2026

Motivation

Memiris lacked a retry mechanism. This meant that a single failed request caused the entire creation to fail.
Additionally, there were several edge cases that could cause the memory creation to fail or be of lower quality.

Description

  • Added a retry mechanism
  • Improved a large part of the prompts to give the models more guidance
  • Hardened the memory sleep to recover from partial failures on future runs
  • Expanded the Memiris test suite
  • Improved the parser at the end of the memory creation
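
The retry mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: `call_with_retry`, `RetryConfig`, `get_retry_config`/`set_retry_config`, `max_attempts`, and the `exceptions` tuple appear in the review below, while the `initial_delay` and `backoff_factor` field names are assumptions made for this sketch.

```python
import logging
import time
from dataclasses import dataclass, field
from typing import Callable, Tuple, Type, TypeVar

logger = logging.getLogger(__name__)
T = TypeVar("T")


@dataclass
class RetryConfig:
    """Global retry settings (initial_delay/backoff_factor names are illustrative)."""
    max_attempts: int = 3
    initial_delay: float = 1.0
    backoff_factor: float = 2.0
    exceptions: Tuple[Type[BaseException], ...] = field(
        default_factory=lambda: (Exception,)
    )


_retry_config = RetryConfig()


def set_retry_config(config: RetryConfig) -> None:
    global _retry_config
    _retry_config = config


def get_retry_config() -> RetryConfig:
    return _retry_config


def call_with_retry(fn: Callable[[], T]) -> T:
    """Run fn, retrying on the configured exceptions with exponential backoff."""
    config = get_retry_config()
    delay = config.initial_delay
    for attempt in range(1, config.max_attempts + 1):
        try:
            return fn()
        except config.exceptions as exc:
            if attempt == config.max_attempts:
                raise  # attempts exhausted: surface the last error
            logger.warning(
                "LLM call failed (attempt %d/%d): %s - retrying in %.1fs",
                attempt, config.max_attempts, exc, delay,
            )
            time.sleep(delay)
            delay *= config.backoff_factor
    raise AssertionError("unreachable")
```

With this shape, a single transient failure in one LLM request no longer aborts the whole memory creation run; only `max_attempts` consecutive failures of the same call do.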

Summary by CodeRabbit

  • New Features

    • Per-user personal memory (Memiris) is now optionally available in chats to recall and tailor responses when memories exist.
  • Improvements

    • System prompts expanded with explicit memory-handling guidance: stay on topic, avoid guessing, and return an explicit empty result when appropriate.
    • LLM calls use configurable retry behavior and now log which model backend is used.
    • Progress stages updated to include "Extracting memories".
  • Tests

    • Extensive unit tests added for retry logic and LLM integrations.

@Hialus Hialus requested a review from a team as a code owner March 12, 2026 15:02

coderabbitai bot commented Mar 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds a global retry configuration and runtime retry wrapper for LLM calls, integrates Memiris into the chat pipeline (conditional tooling, accessed memory propagation, prompt exposure), augments system prompt templates and focus content, and adds RetryConfig/LlmConfigService plus provider retries and extensive unit tests.

Changes

  • Chat Pipeline — iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py
    Gate Memiris tooling on user.memiris_enabled; initialize and propagate state.accessed_memory_storage; add memory search tools when memories exist; compute and inject allow_memiris_tool into the system prompt context; pass accessed memories to the post-agent callback.
  • System Prompt Templates — iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2, iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2
    Insert conditional Memiris guidance blocks, examples, phrasing rules, do/don't guidance, and fallback messaging for when Memiris is unavailable.
  • Memiris Setup / Focus Content — iris/src/iris/common/memiris_setup.py
    Import LlmConfigService and call LlmConfigService.configure_retry_params(...) at module load; extend focus content blocks with explicit "STAY ON TOPIC" and no-guessing extraction rules plus an empty-result fallback; log which backend is used when creating memories.
  • LLM Retry Service (API) — memiris/src/memiris/api/llm_config_service.py
    New LlmConfigService to set, partially update, get, and reset the global RetryConfig via configure_retry, configure_retry_params, get_retry_config, and reset_retry_config.
  • Retry Mechanism — memiris/src/memiris/llm/retry_config.py
    New RetryConfig dataclass and global get_retry_config/set_retry_config; call_with_retry(fn) implements exponential-backoff retry behavior and logging according to the global config.
  • Providers: Retry Wrappers — memiris/src/memiris/llm/openai_language_model.py, memiris/src/memiris/llm/ollama_language_model.py
    Wrap OpenAI and Ollama chat/embed API calls with call_with_retry to apply the configured retry logic; method signatures unchanged.
  • Tests: Config & LLMs — memiris/tests/memiris_tests/api/test_llm_config_service.py, memiris/tests/memiris_tests/llm_tests/test_retry_config.py, memiris/tests/memiris_tests/llm_tests/test_ollama_language_model.py, memiris/tests/memiris_tests/llm_tests/test_openai_language_model.py
    Add comprehensive unit tests for LlmConfigService, RetryConfig, call_with_retry, and retry integration in the Ollama/OpenAI models, including backoff/sleep behavior and retry edge cases.
  • Status / Progress — iris/src/iris/web/status/status_update.py
    Insert a new NOT_STARTED stage "Extracting memories" into the ExerciseChatStatusCallback initial stages sequence to reflect the memory extraction step.
  • Minor: Logging & Guidance — iris/.../memiris_setup.py, other template/content files
    Log which memory backend (OpenAI vs. Ollama) is used when creating memories; expand memory-extraction guidance blocks with explicit extraction and no-guessing rules.
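
The LlmConfigService facade listed above can be sketched like this. The class and method names (configure_retry, configure_retry_params, get_retry_config, reset_retry_config) come from the walkthrough; the RetryConfig fields shown here are illustrative stand-ins, not the real signatures.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class RetryConfig:
    """Stand-in for memiris' RetryConfig; field names are illustrative."""
    max_attempts: int = 3
    initial_delay: float = 1.0


_retry_config = RetryConfig()


class LlmConfigService:
    """Thin facade over the module-global retry configuration."""

    @staticmethod
    def configure_retry(config: RetryConfig) -> None:
        # Replace the whole config at once.
        global _retry_config
        _retry_config = config

    @staticmethod
    def configure_retry_params(**params) -> None:
        # Partial update: only the supplied fields change.
        global _retry_config
        _retry_config = replace(_retry_config, **params)

    @staticmethod
    def get_retry_config() -> RetryConfig:
        return _retry_config

    @staticmethod
    def reset_retry_config() -> None:
        global _retry_config
        _retry_config = RetryConfig()
```

For example, `LlmConfigService.configure_retry_params(max_attempts=5)` bumps only the attempt count and leaves every other field at its current value, which is the "partially-update" behavior the summary describes.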

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant ChatAgent as Chat Agent Pipeline
    participant MemoryStorage as Memory Storage
    participant SystemPrompt as System Prompt Renderer
    participant LLMClient as LLM Provider Client
    participant RetryHandler as Retry Handler

    User->>ChatAgent: Send message (check user.memiris_enabled)
    alt memiris enabled
        ChatAgent->>MemoryStorage: Initialize accessed_memory_storage
        ChatAgent->>MemoryStorage: Query existing memories
        MemoryStorage-->>ChatAgent: Return memories
        ChatAgent->>SystemPrompt: Inject allow_memiris_tool + memiris guidance
        ChatAgent->>ChatAgent: Append memory_search/find_similar_memories tools
    else memiris disabled
        ChatAgent->>SystemPrompt: Inject memiris-unavailable guidance
    end
    ChatAgent->>LLMClient: Send chat request (prompt + tools)
    LLMClient->>RetryHandler: call_with_retry(API call)
    alt retry attempt fails and attempts < max_attempts
        RetryHandler->>RetryHandler: Sleep (exponential backoff)
        RetryHandler->>LLMClient: Retry API call
    else success or exhausted
        RetryHandler-->>LLMClient: Return result or raise
    end
    LLMClient-->>ChatAgent: Return response
    ChatAgent-->>User: Deliver response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I nibble at configs, soft and spry,

I teach retries to try, try, try,
I tuck memories neatly into prompts,
No guessing—only facts in tidy stumps,
A little hop makes the pipeline sigh.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 23.08%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Title check — ❓ Inconclusive: the PR title mentions "Harden memory management", but the primary changes add a retry mechanism and improve Memiris prompts. Consider a more descriptive title such as "Add retry mechanism and improve Memiris prompts".

✅ Passed checks (1 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py (2)

262-267: Avoid recomputing Memiris availability twice per run.

state.memiris_wrapper.has_memories() is called in both tool assembly and prompt building; cache it once to avoid duplicate backend lookups and possible tool/prompt mismatch on transient failures.

♻️ Suggested caching approach
-        allow_memiris_tool = bool(
-            state.dto.user
-            and state.dto.user.memiris_enabled
-            and state.memiris_wrapper
-            and state.memiris_wrapper.has_memories()
-        )
+        allow_memiris_tool = getattr(state, "allow_memiris_tool", None)
+        if allow_memiris_tool is None:
+            allow_memiris_tool = bool(
+                state.dto.user
+                and state.dto.user.memiris_enabled
+                and state.memiris_wrapper
+                and state.memiris_wrapper.has_memories()
+            )
+            state.allow_memiris_tool = allow_memiris_tool

Also applies to: 319-324

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py` around lines 262
- 267, Cache the result of state.memiris_wrapper.has_memories() in a local
variable (e.g. memiris_has_memories) and use that variable both when computing
allow_memiris_tool (the existing expression using state.dto.user,
state.dto.user.memiris_enabled, state.memiris_wrapper) and later when building
the prompt so you don't call state.memiris_wrapper.has_memories() twice; update
references in the tool-assembly code path and the prompt-building code path (the
locations that currently call has_memories()) to read from the cached
memiris_has_memories variable to avoid duplicate backend lookups and transient
mismatch.

208-209: Prefer direct attribute assignment over constant-name setattr.

This triggers Ruff B010 and reduces readability for no benefit at Line 209.

♻️ Suggested cleanup
-        if not hasattr(state, "accessed_memory_storage"):
-            setattr(state, "accessed_memory_storage", [])
+        if not hasattr(state, "accessed_memory_storage"):
+            state.accessed_memory_storage = []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py` around lines 208
- 209, The code uses setattr(state, "accessed_memory_storage", []) which
triggers Ruff B010; replace that call with a direct attribute assignment on the
state object: after verifying hasattr(state, "accessed_memory_storage") is
False, set state.accessed_memory_storage = [] (preserve the existing hasattr
check and the same attribute name) to improve readability and satisfy the
linter.
memiris/src/memiris/llm/retry_config.py (2)

118-124: Minor: Replace en-dash with hyphen-minus in log message.

Line 119 uses an en dash (–, U+2013) instead of a standard hyphen-minus (-, U+002D). This could cause display issues in some terminals or log viewers.

🔧 Fix
             logger.warning(
-                "LLM call failed (attempt %d/%d): %s – retrying in %.1fs …",
+                "LLM call failed (attempt %d/%d): %s - retrying in %.1fs ...",
                 attempt,
                 config.max_attempts,
                 exc,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@memiris/src/memiris/llm/retry_config.py` around lines 118 - 124, The log
message in the retry logic uses an en-dash character in the string passed to
logger.warning; update the format string used in retry_config.py (the
logger.warning call in the retry loop) to replace the en-dash (U+2013) with a
standard hyphen-minus (`-`) so the message becomes "LLM call failed (attempt
%d/%d): %s - retrying in %.1fs …", keeping the same interpolation variables
(attempt, config.max_attempts, exc, delay) and leaving the rest of the
logger.warning call unchanged.

44-46: Consider narrowing the default exception types for retries.

The default exceptions=(Exception,) will retry on any exception, including programming errors like ValueError, TypeError, or KeyError that typically indicate bugs rather than transient failures. Consider defaulting to network/API-specific exceptions that are actually transient.

💡 Example: More targeted default exceptions
     exceptions: Tuple[Type[BaseException], ...] = field(
-        default_factory=lambda: (Exception,)
+        default_factory=lambda: (ConnectionError, TimeoutError, OSError)
     )

Alternatively, document that users should explicitly configure exceptions for production use with the specific exception types from their LLM provider (e.g., openai.APIConnectionError, openai.RateLimitError).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@memiris/src/memiris/llm/retry_config.py` around lines 44 - 46, The current
default for the exceptions field in retry_config (exceptions:
Tuple[Type[BaseException], ...]) is too broad; change the default_factory so
retries target transient/network/provider errors instead of all Exceptions—e.g.,
use provider/network-specific exceptions like
requests.exceptions.RequestException, httpx.HTTPError and common LLM API errors
(openai.APIConnectionError, openai.RateLimitError) or make exceptions required
(no permissive default) and document that callers must supply provider-specific
exception types; update the exceptions field default and README/docs accordingly
and reference the field name "exceptions" in retry_config.py (and any
RetryConfig class) when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@memiris/tests/memiris_tests/api/test_llm_config_service.py`:
- Around line 12-16: The fixture reset_retry_config currently only restores
global retry config after yielding, so tests can inherit mutated state; modify
the reset_retry_config fixture to call set_retry_config(RetryConfig()) before
the yield as well as after it (i.e., perform a pre-reset then yield then
post-reset) so global state is reset both before and after each test; locate the
reset_retry_config fixture in the test file and add the pre-yield
set_retry_config(RetryConfig()) call.

In `@memiris/tests/memiris_tests/llm_tests/test_retry_config.py`:
- Line 15: Replace the EN DASH (–) characters in the test file comments with a
normal hyphen (-) to satisfy Ruff RUF003; specifically update the comment text
"RetryConfig – construction & validation" (and the other comment containing "–"
around line 169) to use "RetryConfig - construction & validation" (and the
corresponding comment) so only ASCII hyphens remain.
- Around line 23-28: The test currently asserts RetryConfig() defaults
exceptions to (Exception,), which is too broad; update the RetryConfig default
to only include transient exception classes (e.g., network/timeout/temporary
errors used in the codebase) and change the test in test_retry_config.py to
assert cfg.exceptions equals that narrower tuple; locate the RetryConfig class
and its constructor to replace the default exceptions parameter and adjust the
test assertion to match the new transient-only default (referencing RetryConfig
and cfg in the test).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0448aea6-2ffb-4689-b9f4-9fba649bdaad

📥 Commits

Reviewing files that changed from the base of the PR and between 04a73ad and cd14e15.

📒 Files selected for processing (12)
  • iris/src/iris/common/memiris_setup.py
  • iris/src/iris/pipeline/chat/exercise_chat_agent_pipeline.py
  • iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2
  • iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2
  • memiris/src/memiris/api/llm_config_service.py
  • memiris/src/memiris/llm/ollama_language_model.py
  • memiris/src/memiris/llm/openai_language_model.py
  • memiris/src/memiris/llm/retry_config.py
  • memiris/tests/memiris_tests/api/test_llm_config_service.py
  • memiris/tests/memiris_tests/llm_tests/test_ollama_language_model.py
  • memiris/tests/memiris_tests/llm_tests/test_openai_language_model.py
  • memiris/tests/memiris_tests/llm_tests/test_retry_config.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
iris/src/iris/common/memiris_setup.py (1)

53-60: Extract the shared prompt policy into one reusable suffix.

These three additions encode the same “stay on topic / explicit only / empty result” policy in parallel. That will drift quickly the next time the prompt rules change. A shared constant or small prompt-builder helper would keep personal details, requirements, and facts aligned.

Also applies to: 74-80, 93-101

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@iris/src/iris/common/memiris_setup.py` around lines 53 - 60, Extract the
repeated "STAY ON TOPIC..." policy text into a single reusable constant or small
helper and replace the three inline copies with references to that symbol (e.g.,
PROMPT_SUFFIX or build_prompt_suffix()); update iris.common.memiris_setup.py so
the original duplicate blocks at the locations around the current lines 53,
74-80, and 93-101 are replaced by a single source of truth, and ensure any
callers use the constant/helper name to assemble the final prompt.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@iris/src/iris/common/memiris_setup.py`:
- Around line 107-110: The module-level call setting global retry params should
not run on import; remove or guard the direct call to
LlmConfigService.configure_retry_params in memiris_setup.py and instead invoke
it from an explicit Iris startup/init path or a one-time initializer that checks
settings.memiris (e.g., add a function like init_memiris_retry() called during
app startup). Ensure the initializer uses a thread-safe one-time guard so the
singleton is configured only once and only when settings.memiris is enabled,
referencing LlmConfigService.configure_retry_params and the settings.memiris
flag.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 51b19213-2027-46aa-9327-3aff9f3837fa

📥 Commits

Reviewing files that changed from the base of the PR and between cd14e15 and e0eddc3.

📒 Files selected for processing (1)
  • iris/src/iris/common/memiris_setup.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2 (1)

100-105: Scope the “don’t say last time” rule to Memiris-sourced memories only.

Line 102 and Line 104 currently ban temporal phrasing globally, which can clash with legitimate references to visible chat history in the same conversation.

Proposed prompt fix
-✗ DON'T say: "According to Memiris...", "My memory system shows...", "I accessed your memory...", "Last time we talked..."
+✗ DON'T say: "According to Memiris...", "My memory system shows...", "I accessed your memory..."
 
-Avoid time-based references like "last time" or "before" - memories aren't necessarily chronological. Just naturally know things about the student, as a human tutor would after working with someone for a while.
+Avoid time-based references like "last time" or "before" when drawing on Memiris-sourced memories, since they aren't necessarily chronological. For explicit chat-history context in this conversation, normal temporal phrasing is fine.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2` around
lines 100 - 105, The prompt currently bans time-based phrasing globally;
restrict that ban to Memiris-sourced memories only by changing the sentence
"Avoid time-based references like 'last time' or 'before' - memories aren't
necessarily chronological." so it reads something like "Avoid time-based
references like 'last time' or 'before' when referring to Memiris-sourced
memories — visible chat history may be referenced normally." Update the template
block in course_chat_system_prompt.j2 between the "When you do explicitly
reference what you know:" section and the "{% else %}" token, preserving the
examples and negative wording but scoping the temporal restriction to Memiris,
and keep the example forbidden phrases ("According to Memiris...", "My memory
system shows...") unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2`:
- Around line 83-90: The prompt currently forces an unconditional memory lookup
by the hard requirement "You must make use of Memiris!" in
course_chat_system_prompt.j2; change that sentence to an optional,
guidance-style instruction (e.g., "When relevant, check Memiris for memories
before responding") so Memiris is suggested rather than required, and ensure
wording aligns with the later "use tools if useful" policy (lines referenced
around the later tool guidance) to avoid mandatory lookups and reduce
latency/failure coupling.
- Around line 108-110: The prompt section titled "How to Handle Not Having
Memories" incorrectly states that no memory capabilities exist; update the
wording in the course_chat_system_prompt.j2 template (the "How to Handle Not
Having Memories" block) to clarify that only long-term memory is unavailable in
the no-Memiris path while in-chat conversation history (chat history provided
around lines ~135-140) should still be used for context; revise the sentence to
explicitly say "long-term memory is unavailable, but use the provided chat
history for in-conversation context" and ensure any subsequent guidance
references using the in-chat history rather than discarding all memory.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 69f25b7d-c595-4560-8c4c-9366cf99c2e1

📥 Commits

Reviewing files that changed from the base of the PR and between 68e91bd and a7fa41b.

📒 Files selected for processing (2)
  • iris/src/iris/pipeline/prompts/templates/course_chat_system_prompt.j2
  • iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2
🚧 Files skipped from review as they are similar to previous changes (1)
  • iris/src/iris/pipeline/prompts/templates/exercise_chat_system_prompt.j2

@bassner bassner added this to the 2.0.0 milestone Mar 13, 2026
@Hialus Hialus changed the title Memiris: Add retry mechanism and improve prompting Memiris: Harden memory management Mar 14, 2026
@github-actions

There hasn't been any activity on this pull request recently. Therefore, this pull request has been automatically marked as stale and will be closed if no further activity occurs within seven days. Thank you for your contributions.

@github-actions github-actions bot added the stale label Mar 24, 2026