finished the study coach agent #115
Conversation
**Entelligence AI Vulnerability Scanner**

Status: No security vulnerabilities found. Your code passed our comprehensive security analysis. Analyzed 3 files in total.

**Review Summary** — 🏷️ Draft Comments (7)
**Walkthrough**

This PR introduces a new AI Study Coach application built with Streamlit, LangGraph, and Memori v3 for long-term memory management. The application provides three core features: a study plan interface for capturing learner profiles, an interactive session tab that generates quizzes and evaluates understanding using LangGraph workflows, and a progress tracking chat interface for querying learning history. The implementation supports multiple database backends (SQLite, PostgreSQL, MySQL, MongoDB) with automatic fallback capabilities. The codebase includes comprehensive documentation, configuration files, and modular components for memory management and quiz generation workflows.

**Changes**
**Sequence Diagram**

This diagram shows the interactions between components:

```mermaid
sequenceDiagram
    actor User
    participant Streamlit
    participant MemoriManager
    participant StudyGraph
    participant OpenAI

    Note over Streamlit: App Initialization
    Streamlit->>MemoriManager: get_memori_manager(openai_key, db_url)
    activate MemoriManager
    MemoriManager-->>Streamlit: manager instance
    deactivate MemoriManager
    Streamlit->>MemoriManager: get_latest_learner_profile()
    activate MemoriManager
    MemoriManager-->>Streamlit: profile_dict or None
    deactivate MemoriManager
    alt Profile not found
        Streamlit->>OpenAI: Reconstruct profile from memory
        activate OpenAI
        OpenAI-->>Streamlit: LearnerProfile JSON
        deactivate OpenAI
    end

    Note over User,Streamlit: Tab 1: Study Plan
    User->>Streamlit: Fill profile form & submit
    Streamlit->>Streamlit: Create LearnerProfile object
    Streamlit->>MemoriManager: log_learner_profile(profile_dict)
    activate MemoriManager
    MemoriManager-->>Streamlit: Success
    deactivate MemoriManager

    Note over User,OpenAI: Tab 2: Today's Session
    User->>Streamlit: Enter study details & generate quiz
    Streamlit->>Streamlit: Create StudyLog object
    Streamlit->>StudyGraph: run_initial_verification(profile, log, llm_client)
    activate StudyGraph
    StudyGraph->>OpenAI: Generate quiz questions
    activate OpenAI
    OpenAI-->>StudyGraph: Quiz questions
    deactivate OpenAI
    StudyGraph-->>Streamlit: InitialVerification (quiz, explanation_prompt)
    deactivate StudyGraph
    Streamlit-->>User: Display quiz questions
    User->>Streamlit: Submit quiz answers & explanation
    Streamlit->>StudyGraph: run_full_evaluation(profile, log, answers, explanation)
    activate StudyGraph
    StudyGraph->>OpenAI: Evaluate understanding
    activate OpenAI
    OpenAI-->>StudyGraph: Score & feedback
    deactivate OpenAI
    StudyGraph-->>Streamlit: EvaluationResult (score, feedback, next_step)
    deactivate StudyGraph
    Streamlit->>MemoriManager: log_study_session(summary)
    activate MemoriManager
    MemoriManager-->>Streamlit: Success
    deactivate MemoriManager
    Streamlit-->>User: Display results & recommendations

    Note over User,OpenAI: Tab 3: Progress & Memory
    User->>Streamlit: Ask progress question
    Streamlit->>MemoriManager: summarize_progress(prompt)
    activate MemoriManager
    MemoriManager->>OpenAI: Query with memory context
    activate OpenAI
    OpenAI-->>MemoriManager: Answer with insights
    deactivate OpenAI
    MemoriManager-->>Streamlit: Progress summary
    deactivate MemoriManager
    Streamlit-->>User: Display insights
```
**Unsafe JSON Parsing** — Problem: multiple instances of fragile JSON extraction from model replies in the Python code. Fix: enforce JSON output.
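The suggested snippet for this draft comment is not shown; purely as an illustration, a defensive parser for model replies could look like the sketch below (the function name and fallback strategy are mine, not the reviewer's):

```python
import json


def parse_model_json(raw: str) -> dict:
    """Parse a model reply as JSON, tolerating code fences or surrounding prose (sketch)."""
    text = raw.strip()
    # Strip a Markdown code fence if the model wrapped its JSON in one.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    # Fall back to the outermost braces if extra prose surrounds the object.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        text = text[start:end + 1]
    return json.loads(text)


print(parse_model_json('Sure! {"score": 85} Hope that helps.'))  # → {'score': 85}
```

A stricter alternative is to constrain the model itself to emit JSON, so parsing never has to guess.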
Pull request overview
This PR introduces a comprehensive AI Study Coach application that leverages Memori v3 for persistent learning memory and LangGraph for intelligent quiz-based verification. The application provides a structured approach to tracking study sessions, assessing understanding, and monitoring progress over time through a multi-tab Streamlit interface.
Key Changes:
- Implements a three-tab Streamlit UI for study planning, session tracking with interactive quizzes, and progress analysis
- Integrates Memori v3 with database persistence for long-term storage of learner profiles and study sessions
- Creates a LangGraph workflow that generates custom quizzes and evaluates learner understanding with scoring and feedback
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 16 comments.
| File | Description |
|---|---|
| `app.py` | Main Streamlit application with sidebar configuration, profile management, quiz interaction, and Memori-powered chat interface |
| `study_graph.py` | LangGraph workflow implementation with Pydantic models for learner profiles, study logs, and quiz generation/evaluation nodes |
| `memory_utils.py` | `MemoriManager` class providing an abstraction over Memori, the OpenAI client, and database connectivity, with helper methods for logging and retrieval |
| `pyproject.toml` | Project configuration defining dependencies and Python version requirements |
| `README.md` | Comprehensive documentation with feature descriptions, installation instructions, and usage examples |
| `.streamlit/config.toml` | Streamlit theme configuration for the application UI |
| `assets/Memori_Logo.png` | Branding asset for the application header |
```python
raise RuntimeError(
    "MEMORI_DB_URL is not set – please provide a CockroachDB URL "
    "like postgresql+psycopg://user:password@host:26257/database"
)
```
The docstring mentions CockroachDB but the code actually supports multiple database backends (SQLite, PostgreSQL, MySQL, MongoDB) as mentioned in the PR description and class docstring. The initialization logic doesn't specifically require CockroachDB - it accepts any SQLAlchemy-compatible URL. The error message and parameter documentation should be more generic or explicitly state support for multiple backends.
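For context, SQLAlchemy-style URLs for the backends the PR description names look roughly like the placeholders below (hosts, ports, credentials, and database names are illustrative, not taken from the code):

```python
# Illustrative connection URLs; all credentials and hosts are placeholders.
SQLITE_URL = "sqlite:///memori.db"
POSTGRES_URL = "postgresql+psycopg://user:password@host:5432/memori"
COCKROACH_URL = "postgresql+psycopg://user:password@host:26257/memori"  # speaks the Postgres wire protocol
MYSQL_URL = "mysql+pymysql://user:password@host:3306/memori"
MONGO_URL = "mongodb://user:password@host:27017/memori"  # not SQLAlchemy-backed; needs separate handling
```

A generic error message could simply point at whichever of these formats applies rather than naming CockroachDB alone.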
```python
@st.cache_resource
def get_memori_manager(openai_key: str, db_url: str | None) -> MemoriManager:
    return MemoriManager(
        openai_api_key=openai_key,
        db_url=db_url,
    )
```
When using st.cache_resource, the cached function should be deterministic based on its parameters. The current implementation caches based on openai_key and db_url, but if these values change in the environment variables after the cache is created, the old cached instance will still be returned. This could lead to using stale credentials. Consider using cache_key or implementing proper cache invalidation when settings change.
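The staleness concern can be reproduced outside Streamlit using `functools.lru_cache` as a stand-in for `st.cache_resource` (the `DEMO_KEY` variable and `get_client` helper are purely illustrative):

```python
import os
from functools import lru_cache

os.environ["DEMO_KEY"] = "old-key"


@lru_cache(maxsize=None)
def get_client(tag: str) -> dict:
    # The env var is read only on the first call for a given argument;
    # later changes to the environment are invisible to the cached entry.
    return {"key": os.environ["DEMO_KEY"], "tag": tag}


first = get_client("app")
os.environ["DEMO_KEY"] = "new-key"
second = get_client("app")  # same argument -> the stale cached instance comes back
assert first is second and second["key"] == "old-key"
```

The same holds for `st.cache_resource`: as long as the arguments are unchanged, the old instance is served regardless of what the environment now contains.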
```python
if db_url_input:
    os.environ["MEMORI_DB_URL"] = db_url_input

if openai_api_key_input or memori_api_key_input or db_url_input:
```
The code directly modifies os.environ when the "Save Settings" button is clicked. This doesn't actually persist the values beyond the current session, and more importantly, it doesn't clear the Streamlit cache. After updating environment variables, you should use st.cache_resource.clear() to ensure the MemoriManager is reinitialized with the new values.
Suggested change:

```python
if openai_api_key_input or memori_api_key_input or db_url_input:
    # Clear cached resources so MemoriManager (and others) pick up new env vars
    st.cache_resource.clear()
```
```python
os.environ["MEMORI_DB_URL"] = db_url_input

if openai_api_key_input or memori_api_key_input or db_url_input:
    st.success("✅ Settings saved for this session")
```
The code stores environment variables directly in os.environ, but the success message says "Settings saved for this session". This is misleading because the settings are not actually persisted anywhere - they only exist in memory for the current Python process. When the Streamlit app restarts, these values will be lost unless they're in the .env file. Consider clarifying this in the message or implementing actual persistence.
Suggested change:

```python
st.success(
    "✅ Settings applied for the current run. To persist them across "
    "restarts, set these values in your environment or a .env file."
)
```
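If actual persistence is wanted, one naive approach is to write the value to the `.env` file as well as the process environment. A sketch (`persist_setting` is a hypothetical helper; a production version should deduplicate existing keys instead of blindly appending):

```python
import os


def persist_setting(key: str, value: str, env_path: str = ".env") -> None:
    """Apply a setting to this process and append it to a .env file (naive sketch)."""
    os.environ[key] = value
    with open(env_path, "a", encoding="utf-8") as fh:
        fh.write(f"{key}={value}\n")
```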
```python
class MemoriManager:
    """
    Thin wrapper around Memori + OpenAI client + CockroachDB (via SQLAlchemy).
    Uses a single Cockroach/Postgres-compatible URL for all persistence.
    """
```
The class docstring states it's a "thin wrapper around Memori + OpenAI client + CockroachDB (via SQLAlchemy)" and mentions "Uses a single Cockroach/Postgres-compatible URL for all persistence". However, the PR description claims support for SQLite, PostgreSQL, MySQL, and MongoDB backends. The documentation should clarify which databases are actually supported and update the docstring to reflect the accurate capabilities rather than only mentioning CockroachDB.
| {"role": "user", "content": question}, | ||
| ], | ||
| ) | ||
| return response.choices[0].message.content |
The method returns the message content directly without checking for None. While there's a default empty string in line 161, the return statement should handle the case where message.content is None to avoid potential type inconsistencies. Consider adding an explicit check or default value at the return statement.
Suggested change:

```python
return response.choices[0].message.content or ""
```
```python
# Strip leading numbering if present
if line[0].isdigit():
    # e.g. "1. Question"
    parts = line.split(".", 1)
    if len(parts) == 2:
        line = parts[1].strip()
```
If a line starts with a digit but contains no period (e.g. just "1"), the `len(parts) == 2` check correctly leaves it untouched. However, after `line = parts[1].strip()`, an empty `parts[1]` (e.g. a line of just "1.") results in an empty question string being added. Consider adding validation to ensure the extracted question text is not empty.
Suggested change:

```python
# Skip any unexpectedly empty lines to avoid indexing errors
if not line:
    continue
# Strip leading numbering if present
if line[0].isdigit():
    # e.g. "1. Question"
    parts = line.split(".", 1)
    if len(parts) == 2:
        candidate = parts[1].strip()
        # Only use the candidate if it contains actual question text
        if not candidate:
            # Skip lines like "1." that have no question content
            continue
        line = candidate
# Final safety check to avoid creating empty questions
if not line:
    continue
```
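The hardened parsing logic can be exercised standalone; this sketch (the helper name is mine) wraps it in a function and demonstrates the edge cases raised above:

```python
def clean_question_lines(lines: list[str]) -> list[str]:
    """Strip leading 'N.' numbering and drop lines that end up empty (sketch)."""
    questions = []
    for line in lines:
        line = line.strip()
        if not line:  # skip blank lines before indexing line[0]
            continue
        if line[0].isdigit():
            parts = line.split(".", 1)
            if len(parts) == 2:
                line = parts[1].strip()
        if line:  # drop numbering-only lines like "1."
            questions.append(line)
    return questions


print(clean_question_lines(["1. What is RAM?", "2.", "   ", "3. Define cache"]))
# → ['What is RAM?', 'Define cache']
```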
```python
mgr = memori_mgr
log: StudyLog = st.session_state.current_log
```
The variable log is retrieved from session state with type annotation StudyLog, but there's no guarantee that st.session_state.current_log exists or is of the correct type. If the user clicks "Evaluate my understanding" without first generating a quiz, this will raise a KeyError. Add proper validation to check if current_log exists in session state.
Suggested change:

```python
# Validate that a current study log exists and is of the expected type
if "current_log" not in st.session_state:
    st.error("No study session found. Please generate a quiz before evaluating your understanding.")
    return
current_log = st.session_state.current_log
if not isinstance(current_log, StudyLog):
    st.error("Internal error: current study session data is invalid. Please start a new study session.")
    return
mgr = memori_mgr
log: StudyLog = current_log
```
```markdown
- Generates 3–5 quiz questions.
- Prompts you to explain the topic “in your own words”.
- Evaluates understanding (0–100), surfaces feedback, and suggests a next step.
- Writes a summarised study session into **Memori** (topic, score, difficulty, mood, feedback, next step).
```
The word "summarised" uses British spelling. For consistency, especially in a technical document with an American English context (based on other language usage in the codebase), consider using "summarized" instead.
Suggested change:

```markdown
- Writes a summarized study session into **Memori** (topic, score, difficulty, mood, feedback, next step).
```
```python
    adapter = getattr(self.memori.config.storage, "adapter", None)
    if adapter is not None and hasattr(adapter, "commit"):
        adapter.commit()
except Exception:
```
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:

```python
except Exception:
    # Non-fatal; Memori should still persist in most configurations.
```
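An alternative to a silent `pass` is to keep the failure non-fatal but still record it. A sketch (the `commit_if_supported` name and log message are mine, not from the PR):

```python
import logging

logger = logging.getLogger(__name__)


def commit_if_supported(storage) -> None:
    """Commit via the storage adapter when one is exposed; never raise (sketch)."""
    try:
        adapter = getattr(storage, "adapter", None)
        if adapter is not None and hasattr(adapter, "commit"):
            adapter.commit()
    except Exception:
        # Non-fatal; Memori should still persist in most configurations.
        logger.debug("Adapter commit failed", exc_info=True)
```

Logging at debug level keeps normal output quiet while leaving a trail when persistence issues need diagnosing.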
🔗 Linked Issue

Closes #

✅ Type of Change

📝 Summary

📖 README Checklist

- [ ] I have created a `README.md` file for my project.
- [ ] My `README.md` follows the official `.github/README_TEMPLATE.md`.
- [ ] I have added my project's assets to the `assets` folder and included them in my `README.md`.

✔️ Contributor Checklist

- [ ] I have placed my project in the correct category folder (e.g. `advance_ai_agents`, `rag_apps`).
- [ ] I have added a `requirements.txt` or `pyproject.toml` for dependencies.
- [ ] I have added a `.env.example` file if environment variables are needed and ensured no secrets are committed.

💬 Additional Comments
EntelligenceAI PR Summary
This PR adds a complete AI Study Coach application with Memori v3 integration for persistent learning memory and LangGraph-powered quiz workflows.
`MemoriManager` class supporting SQLite, PostgreSQL, MySQL, and MongoDB backends with automatic fallback