An Obsidian plugin that uses AI-powered chunk extraction and review with the SM2 spaced repetition algorithm to help you memorize your notes.
- Intelligent & Template-Based Extraction: Automatically extracts knowledge chunks from your notes using large language models (LLMs)
- Create custom extraction templates with specific requirements to guide how chunks are extracted (e.g., focus on key concepts, prioritize actionable insights)
- Select templates before extraction - your selection is remembered for convenience
- LLM understands context and semantic meaning, not just text matching
- Incremental Updates: When you modify a note, the plugin intelligently updates existing chunks:
- Keep: Preserves chunks that remain unchanged
- Modify: Updates chunks with changed content (with minor/moderate/major update levels)
- Delete: Removes chunks for content that no longer exists
- Create: Adds new chunks for new content
- Markdown Support: Chunks support full markdown rendering including code blocks, math formulas, tables, and more
- SM2 Spaced Repetition: Implements the proven SuperMemo 2 algorithm for optimal review scheduling
- Adaptive Scheduling: Review intervals automatically adjust based on your performance
- Importance Multiplier: High-importance chunks are reviewed more frequently
- Familiarity Tracking: Weighted average of past grades (0.0-1.0) tracks your mastery level
- Intelligent Scoring: Chunks are scored based on:
- Importance level (low, medium, high)
- Familiarity score (how well you know it)
- Due date (past due chunks get priority)
- Automatic Scheduling: Top-scoring chunks are automatically scheduled for review
- Configurable Threshold: Set a minimum score threshold for push recommendations
- Auto Cleanup: Expired and completed pushes are automatically removed
- AI Conversation Evaluation:
- Engage in conversations with AI to evaluate your understanding
- Template-Based Conversations: Use custom review templates to guide AI tutor behavior
- Define how AI should ask questions (e.g., test deep understanding, provide detailed explanations)
- Select templates before starting conversation
- Templates are remembered for next conversation
- AI generates relevant questions based on chunk content and template requirements
- Adaptive responses and feedback
- End conversations early to get immediate AI evaluation
- Automatic grading (0-5 scale)
- Markdown Support: Conversation messages support full markdown rendering
- Manual Grading:
- Simple and intuitive 5-star rating system
- Quick evaluation without AI interaction
- Flexible Workflow: Choose between AI conversation or manual grading for each push
- Tabbed Interface: Switch between manual and AI evaluation modes
- Importance Control: Set importance level (1-3 stars) for each chunk
- Review Toggle: Enable/disable review for specific chunks
- Filter by Review Status: Toggle to hide chunks that don't need review
- Chunk Deletion: Remove unwanted chunks
- Automatic Cleanup: Chunks for deleted notes are automatically removed
- Detailed Metrics: View creation date, familiarity score, review interval, repetition count, and chunk score
- Template Selection: Choose extraction templates before extracting chunks
- Download the latest release from the Releases page
- Extract the plugin folder to your Obsidian vault's `.obsidian/plugins/` directory
- Open Obsidian and go to Settings → Community Plugins
- Enable the Memo AI plugin
- Clone or copy this repository to `.obsidian/plugins/ai_notebook_plugin/`
- Open a terminal in the plugin directory
- Run `npm install` to install dependencies
- Run `npm run build` to build the plugin
- Enable the plugin in Obsidian settings
- Configure LLM Settings: Go to Settings → Memo AI → LLM Settings and enter your API key
- Supports OpenAI API and compatible APIs (e.g., Alibaba Cloud DashScope)
- Configure API base URL, model name, and timeout
- Extract Chunks from a Note:
- Open a note you want to review
- Open "Note Chunks" view
- (Optional) Select an extraction template from the dropdown to guide chunk extraction
- Click "Extract Chunks" button
- The plugin will use LLM to intelligently extract knowledge chunks based on your template requirements
- Your template selection is remembered for next extraction
- Review Chunks:
- Open "Push Center" view (automatically opens on plugin load)
- Click "Refresh Pushes" to schedule new pushes
- Select a push to review
- For AI conversation: (Optional) Select a review template before starting
- Choose between AI conversation or manual grading
The Note Chunks view shows all chunks extracted from the current note:
- Template Selection: Choose an extraction template from the dropdown before extracting
- Templates define custom requirements for chunk extraction
- Your selection is remembered for next extraction
- Select "No requirements" for default extraction behavior
- Hide No-Review Toggle: Toggle switch to hide chunks that don't need review
- Extract Chunks: Extract or update chunks from the current note (purple button)
- Importance Rating: Click stars to set importance (1-3 stars)
- Review Toggle: Toggle whether a chunk needs review
- Delete: Remove a chunk
- View Details: See familiarity score, review interval, repetition count, and chunk score
- Markdown Rendering: Chunk content supports full markdown including code blocks, math formulas, tables, etc.
The Push Center is your main review interface:
- Push List: Sidebar showing all active pushes (collapsible)
- Push Details:
- Chunk content and metadata (with markdown rendering)
- Due time and creation time
- Open note or delete push buttons
- Template Selection: For AI conversation, select a review template before starting
- Templates guide how the AI tutor asks questions and evaluates responses
- Your selection is remembered for next conversation
- Template cannot be changed after conversation starts
- Evaluation Options:
- Manual Grading: 5-star rating system with submit button
- AI Conversation: Interactive Q&A with AI tutor (supports markdown in messages)
- Refresh Pushes: Automatically schedules new pushes based on chunk scores
- (Optional) Select a review template from the dropdown to guide AI behavior
- Click "Start" to begin the conversation
- AI generates a question based on the chunk and template requirements
- Type your answer in the input box (supports markdown)
- AI evaluates your response and provides feedback
- Continue the conversation or click "End Conversation" to get immediate evaluation
- SM2 algorithm updates based on your performance
- All conversation messages support markdown rendering
- Select a star rating (1-5 stars)
- Click "Submit Grade" to apply
- SM2 algorithm updates based on your grade
- LLM API Key: Your OpenAI API key or compatible API key
- LLM API Base URL: API endpoint (default: `https://api.openai.com/v1`)
  - For Alibaba Cloud DashScope: `https://dashscope.aliyuncs.com/compatible-mode/v1`
- LLM Model: Model name (e.g., `gpt-3.5-turbo`, `gpt-4`)
- LLM Request Timeout: Maximum time to wait for LLM responses (10-300 seconds, default: 60)
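As an illustration, a complete LLM configuration for an OpenAI-compatible endpoint might look like the object below. The field names here mirror the setting labels above, not necessarily the plugin's internal keys, and the key value is a placeholder:

```typescript
// Illustrative LLM settings sketch; field names are assumptions based on the
// setting labels above, not the plugin's actual storage format.
const llmSettings = {
  apiKey: "sk-YOUR-KEY",                  // your OpenAI or compatible API key
  baseUrl: "https://dashscope.aliyuncs.com/compatible-mode/v1", // or https://api.openai.com/v1
  model: "gpt-4",                         // any model your endpoint serves
  timeoutSeconds: 60,                     // allowed range 10-300, default 60
};
```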
Create custom templates to guide how chunks are extracted from your notes:
- Add Template: Click "+ Add Template" to create a new template
- Template Name: Give your template a descriptive name (e.g., "Focus on key concepts")
- Requirements: Describe your requirements for chunk extraction
- These requirements will be sent to the LLM when extracting chunks
- Example: "Focus on extracting key concepts and definitions. Prioritize actionable insights."
- Edit Template: Click "Edit" to modify an existing template
- Delete Template: Click "Delete" to remove a template
- Template Management: Templates are saved and persist across sessions
- Last Selected: Your last selected template is remembered for convenience
Create custom templates to guide AI conversation review behavior:
- Add Template: Click "+ Add Template" to create a new template
- Template Name: Give your template a descriptive name (e.g., "Deep understanding focus")
- Requirements: Describe how the AI tutor should ask questions and evaluate responses
- These requirements guide the AI's teaching style
- Example: "Ask questions that test deep understanding. Provide detailed explanations when the learner struggles."
- Edit Template: Click "Edit" to modify an existing template
- Delete Template: Click "Delete" to remove a template
- Template Management: Templates are saved and persist across sessions
- Last Selected: Your last selected template is remembered for convenience
- Max Active Pushes: Maximum number of pushes active at the same time (1-20, default: 5)
- Push Due Window: Duration of a push in hours before expiration (1-168 hours, default: 24)
- Push Score Threshold: Minimum chunk score required for push recommendation (2.0-6.0, default: 2.0)
- Extract chunks from current note: Extract or update chunks from the active note
- View chunks for current note: Open the Note Chunks view
- Open push center: Open the Push Center view
- Initial Extraction: When you first extract chunks, LLM analyzes the note and creates knowledge chunks
- Incremental Updates: When you modify a note and extract again:
- LLM compares new content with existing chunks
- Determines which chunks to keep, modify, or delete
- Creates new chunks for new content
- For modified chunks, determines update level (minor/moderate/major)
- Update Level Impact:
- Minor: Slight reduction in familiarity and EF
- Moderate: Moderate reduction, one repetition removed
- Major: Significant reduction, repetitions reset to 0
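The three update levels can be sketched as a function that penalizes a chunk's scheduling state. This is a minimal illustration of the behavior described above; the field names and the exact penalty amounts are assumptions, since the plugin's internal values are not documented:

```typescript
// Hypothetical sketch of update-level penalties; penalty magnitudes are
// illustrative, only the direction of each change follows the list above.
type UpdateLevel = "minor" | "moderate" | "major";

interface ChunkState {
  familiarity: number; // 0.0-1.0 weighted average of past grades
  ef: number;          // SM2 ease factor, kept within 1.3-2.5
  repetitions: number; // count of successful reviews
}

function applyUpdateLevel(s: ChunkState, level: UpdateLevel): ChunkState {
  switch (level) {
    case "minor": // slight reduction in familiarity and EF
      return {
        familiarity: Math.max(0, s.familiarity - 0.05),
        ef: Math.max(1.3, s.ef - 0.05),
        repetitions: s.repetitions,
      };
    case "moderate": // moderate reduction, one repetition removed
      return {
        familiarity: Math.max(0, s.familiarity - 0.15),
        ef: Math.max(1.3, s.ef - 0.1),
        repetitions: Math.max(0, s.repetitions - 1),
      };
    case "major": // significant reduction, repetitions reset to 0
      return {
        familiarity: Math.max(0, s.familiarity - 0.3),
        ef: Math.max(1.3, s.ef - 0.2),
        repetitions: 0,
      };
  }
}
```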
The plugin uses the SuperMemo 2 algorithm with the following parameters:
- E-Factor (EF): Ease factor (1.3-2.5) representing memory difficulty
- Repetitions: Number of successful reviews
- Interval: Days until next review
- Familiarity Score: Weighted average of past grades (0.0-1.0)
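One common way to maintain such a weighted average is exponential smoothing, where recent grades count more than older ones. The plugin's actual weighting is not documented, so the smoothing factor below is an assumption:

```typescript
// Illustrative sketch: fold a new 0-5 grade into a 0.0-1.0 familiarity score.
// alpha (weight of the newest grade) is an assumed value.
function updateFamiliarity(prev: number, grade: number, alpha = 0.3): number {
  const normalized = grade / 5;                  // map 0-5 grade onto 0.0-1.0
  return (1 - alpha) * prev + alpha * normalized; // recent grades weigh more
}
```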
Review Intervals:
- First review: 1 day
- Second review: 6 days
- Subsequent reviews: Previous interval × EF (adjusted by importance)
Grade Impact:
- Grade < 3: Reset to beginning (repetitions = 0, interval = 1 day)
- Grade ≥ 3: Increase repetitions, calculate new interval
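The interval and grade rules above can be sketched using the standard SM2 formulas. This is a simplified illustration: the importance multiplier is omitted, and clamping EF to the 1.3-2.5 range is inferred from the parameter list above:

```typescript
// Minimal SM2 sketch following the rules described above (importance
// adjustment omitted; EF clamp range assumed from the parameter list).
interface Sm2State {
  ef: number;          // ease factor
  repetitions: number; // successful reviews in a row
  interval: number;    // days until next review
}

function sm2Review(state: Sm2State, grade: number): Sm2State {
  // Standard SM2 ease-factor adjustment, clamped to the documented range.
  const rawEf = state.ef + (0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02));
  const ef = Math.min(2.5, Math.max(1.3, rawEf));

  if (grade < 3) {
    // Failed review: reset to the beginning.
    return { ef, repetitions: 0, interval: 1 };
  }
  const repetitions = state.repetitions + 1;
  const interval =
    repetitions === 1 ? 1 :                  // first review: 1 day
    repetitions === 2 ? 6 :                  // second review: 6 days
    Math.round(state.interval * ef);         // then previous interval × EF
  return { ef, repetitions, interval };
}
```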
Chunks are scored using a formula that considers:
- Importance Weight: Higher importance = higher base score
- Familiarity Boost: Lower familiarity = higher priority
- Due Date Boost: Past due chunks get exponential boost
- Future Penalty: Future due chunks get slight penalty
Only chunks with scores above the threshold are recommended for pushing.
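The exact scoring formula is internal to the plugin; the sketch below is one plausible combination of the four factors listed above, with all weights and caps chosen purely for illustration:

```typescript
// Hypothetical chunk-scoring sketch; weights, the 7-day cap, and the future
// penalty rate are assumptions, only the factor directions follow the list.
function chunkScore(
  importance: 1 | 2 | 3,   // low / medium / high
  familiarity: number,     // 0.0-1.0 mastery level
  daysOverdue: number,     // positive if past due, negative if due in future
): number {
  const importanceWeight = importance;       // higher importance, higher base score
  const familiarityBoost = 1 - familiarity;  // lower familiarity, higher priority
  const dueBoost =
    daysOverdue >= 0
      ? Math.exp(Math.min(daysOverdue, 7) / 7) // exponential boost when past due
      : 1 / (1 - daysOverdue * 0.1);           // slight penalty for future due dates
  return importanceWeight + familiarityBoost + dueBoost;
}
```

Only chunks whose score clears the configured threshold would then be queued as pushes.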
- Deleted Notes: When a note is deleted, all associated chunks and pushes are automatically removed
- Expired Pushes: Pushes past their due window are removed when refreshing
- Completed Pushes: Completed pushes are removed when refreshing
- Start Small: Begin with a few notes to understand the workflow
- Use Templates: Create templates for different types of content
- Create a "Key Concepts" template for extracting important definitions
- Create a "Deep Understanding" template for AI conversations that test comprehension
- Create a "Quick Review" template for faster, surface-level questions
- Set Importance: Mark important chunks with higher importance for more frequent review
- Regular Reviews: Use "Refresh Pushes" regularly to keep your review queue active
- AI vs Manual: Use AI conversation for complex topics, manual grading for quick reviews
- Template Selection: Select templates before extraction/conversation - your choice is remembered
- Update Threshold: Adjust push score threshold based on your review capacity
- LLM Model: Use GPT-4 for better chunk extraction quality (if available)
- Markdown Support: Use markdown in your notes - chunks will render code blocks, math formulas, and more beautifully
- Filter Chunks: Use the "Hide no-review" toggle to focus on chunks that need attention
- Check your LLM API key is correct
- Verify API base URL is correct for your provider
- Check network connection
- Increase timeout if using slower models
- Click "Refresh Pushes" to schedule new pushes
- Check push score threshold isn't too high
- Ensure chunks have `needsReview` enabled
- Verify chunks have valid due dates
- Check LLM settings are configured correctly
- Verify API key has sufficient credits/quota
- Check timeout setting is appropriate
```bash
# Install dependencies
npm install

# Build for development (with watch mode)
npm run dev

# Build for production
npm run build
```

License: MIT

Author: Jiawei Yang



