Releases: saylordotorg/moodle-local_ai_course_assistant

v1.9.1 — Faster Responses, Customizable Starters, Admin UX

10 Mar 00:06

What's New in v1.9.1

Performance Optimizations (v1.9.0-v1.9.1)

  • Default provider changed to OpenAI (gpt-4o-mini) for faster time-to-first-token
  • Default Claude model changed to claude-haiku-4-5 (was claude-sonnet-4-5)
  • New Max Response Length setting — default 1024 tokens (~1–2 paragraphs), configurable in admin. Set to 0 for no limit.
  • RAG enabled by default for new installations — sends only relevant content chunks instead of full course content, reducing prompt size and response latency

Customizable Conversation Starters (v1.8.0-v1.8.2)

  • Admin settings page for managing conversation starter chips — add, remove, reorder with drag-and-drop
  • Each starter has: name, description, AI prompt/instructions, icon (16 built-in), and type (prompt, quiz, voice, pronunciation)
  • Three-tier configuration: built-in defaults → global admin overrides → per-course enable/disable
  • {page} placeholder in prompts auto-replaced with current page title
  • Conditional visibility — voice/pronunciation starters only appear when API keys are configured
  • 5 default starters: Help With This Page, Quiz Me, Study Plan, Ask Anything, Review & Practice
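The three-tier resolution described above can be sketched as a small merge function — a minimal illustration only; the function and field names here are hypothetical, not the plugin's actual API:

```javascript
// Illustrative sketch of three-tier starter resolution:
// built-in defaults, overridden by global admin settings, then gated
// by the per-course enable/disable flag.
function resolveStarter(builtin, globalOverride, courseEnabled) {
  if (courseEnabled === false) return null;   // per-course disable wins outright
  // Global admin fields override the built-in defaults field by field.
  return { ...builtin, ...globalOverride };
}
```

For example, a global override that renames "Quiz Me" to "Test Yourself" keeps the built-in icon and prompt, while disabling the starter for a course hides it regardless of global settings.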

Voice Practice Improvements (v1.8.3)

  • Spoken intro messages when starting Practice Speaking or Pronunciation Practice
  • Practice Speaking: TTS greeting explains the exercise and prompts topic selection
  • ELL Pronunciation: AI greets the student and explains what to do before expecting input

Admin UX (v1.8.1-v1.8.2)

  • Reordered global settings — AI Provider & Conversation at top, then Starters, RAG, Token Analytics (most-used first)
  • Save buttons at top and bottom of course settings and starter settings pages
  • Token Analytics link added to global settings
  • Removed duplicate per-course voice toggles (now controlled via starter overrides)
  • Fixed starter settings page blank page bug (incorrect require path)

Conversation Starter Redesign (v1.7.0)

  • Streamlined from 6 to 5 focused starters with improved AI prompts
  • Updated 43-language translations for new starter labels

Bug Fixes (v1.6.1)

  • Fixed Practice Speaking mic activation
  • Fixed SOLA_NEXT suggestion tag leak in displayed text

Full Changelog: v1.1.0...v1.9.1

v1.1.0 — Token Usage & Cost Analytics

07 Mar 01:26

What's new

Token Usage & Cost Analytics (admin)

  • New admin page: Token Usage & Cost (link from Analytics dashboard)
  • Tracks prompt tokens, completion tokens, and model name per AI response
  • Cost estimation using rate cards for OpenAI, Anthropic Claude, and DeepSeek
  • Summary cards: total responses, tokens, and estimated cost
  • Cost by Model table — breakdown by provider and model variant
  • Cost per Student table — top 100 students by token usage
  • Missing data audit — flags responses recorded before v1.1.0 (no token data)
  • Rate card reference — shows all known model pricing in the admin UI

Under the hood

  • OpenAI providers: stream_options: {include_usage: true} captures usage from final SSE chunk
  • Claude provider: captures input_tokens from message_start, output_tokens from message_delta
  • New prompt_tokens, completion_tokens, model_name columns on messages table (nullable, backward-compatible)
  • New classes/token_cost_manager.php with prefix-matched rate card and adaptive cost formatting
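A prefix-matched rate-card lookup of the kind token_cost_manager.php performs might look like the sketch below. This is illustrative only — the rates shown are example numbers, and the real values and class structure live in classes/token_cost_manager.php:

```javascript
// Illustrative prefix-matched rate card. Rates are example figures,
// expressed as USD per 1M prompt / completion tokens.
const RATE_CARD = [
  ['gpt-4o-mini', 0.15, 0.60],
  ['claude-haiku', 0.80, 4.00],
];

function estimateCost(modelName, promptTokens, completionTokens) {
  // Longest matching prefix wins, so a dated variant such as
  // "gpt-4o-mini-2024-07-18" still matches the "gpt-4o-mini" entry.
  const match = RATE_CARD
    .filter(([prefix]) => modelName.startsWith(prefix))
    .sort((a, b) => b[0].length - a[0].length)[0];
  if (!match) return null; // unknown model: surfaces in the missing-data audit
  const [, inRate, outRate] = match;
  return (promptTokens * inRate + completionTokens * outRate) / 1e6;
}
```

Prefix matching keeps the rate card small: one entry covers every dated snapshot of a model family.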

Also in this release

  • v1.0.20: Welcome screen redesign — highlights SOLA's unique features (voice, quizzes, study plans) to increase student engagement; voice features shown conditionally based on course config
  • v1.0.21: Fix [SOLA_NEXT] tags appearing raw in Practice Speaking transcript (tag spans many SSE tokens; fixed with display buffer + indexOf approach)

v1.0.21 — Fix Practice Speaking transcript (SOLA_NEXT tags visible)

06 Mar 20:12

Bug fix

Practice Speaking: [SOLA_NEXT] tags no longer appear in voice transcript

The transcript display was showing raw [SOLA_NEXT]Ask about...|...[/SOLA_NEXT] text because the tag spans many streaming tokens and a single-token regex strip never matched.

Root cause: stripSolaTags(token) was called on each individual SSE token (e.g. [, SOLA, _NEXT, ]...) so the full [SOLA_NEXT]...[/SOLA_NEXT] pattern was never present in one call.

Fix: Tokens are now accumulated in a displayBuffer. Only text before the first [SOLA_NEXT] marker is forwarded to the transcript display, using a delta offset to avoid re-emitting already-displayed text. TTS was already handled correctly via the accumulated sentenceBuffer.
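The buffering approach can be sketched as follows — a simplified illustration with hypothetical names, not the plugin's actual code. Note it also holds back a partial marker at the buffer tail, since (as described above) the tag arrives spread across many SSE tokens:

```javascript
const MARKER = '[SOLA_NEXT]';

// How much of the buffer is safe to display: everything before a full
// marker, and nothing that could be the start of a marker still streaming in.
function safeLength(buf) {
  const at = buf.indexOf(MARKER);
  if (at !== -1) return at;
  for (let k = Math.min(MARKER.length - 1, buf.length); k > 0; k--) {
    if (buf.endsWith(MARKER.slice(0, k))) return buf.length - k;
  }
  return buf.length;
}

function makeTranscriptFilter() {
  let displayBuffer = '';
  let emitted = 0; // delta offset: how much has already been displayed
  return function onToken(token) {
    displayBuffer += token;
    const end = safeLength(displayBuffer);
    const delta = end > emitted ? displayBuffer.slice(emitted, end) : '';
    if (end > emitted) emitted = end;
    return delta; // append only this to the transcript display
  };
}
```

With this in place, feeding the filter the fragments `[SOLA` then `_NEXT]...` emits nothing, while ordinary text streams through unchanged.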

v1.0.20 — Updated welcome screen with SOLA's full feature set

06 Mar 20:09

What's new

Welcome screen redesign

First-time users now see a clearer picture of what makes SOLA different from a generic chatbot:

  • Subtitle: "Your personal study coach — not just another chatbot."
  • Focused study sessions — pick a topic & time limit, SOLA guides step by step
  • Adaptive quizzes — tracks scores and focuses practice where students need it most
  • Practice Speaking (shown only when TTS is enabled) — real voice conversations
  • Pronunciation Practice (shown only when enabled for the course) — instant coaching
  • Personalised study plan — maps a realistic path based on the student's schedule
  • Ask me anything — explanations, examples, and guidance 24/7
  • CTA updated: "Let's get started →"

Voice feature items are shown conditionally based on what's enabled for the course.

v1.0.19 — Fix numbered lists, voice overlay prompts, rename Pronunciation Practice

06 Mar 17:22

What's new

Bug fix: Numbered lists no longer reset to 1

AI responses with blank lines between numbered list items (common in GPT output) now render correctly — the markdown parser uses peek-ahead to keep the list open across blank lines rather than closing and reopening <ol> on each item.
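The peek-ahead idea reduces to a small check — sketched below with hypothetical names, not the plugin's actual parser: on a blank line inside a numbered list, scan forward, and keep the list open only if the next non-blank line is another numbered item.

```javascript
// Peek-ahead for blank lines inside a numbered list: return true if the
// <ol> should stay open (the next non-blank line is another numbered item).
function blankLineContinuesList(lines, i) {
  let j = i + 1;
  while (j < lines.length && lines[j].trim() === '') j++;
  return j < lines.length && /^\d+[.)]\s/.test(lines[j]);
}
```

Without this check, the parser closes the `<ol>` at the blank line and opens a fresh one for the next item, which is why every item rendered as "1.".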

Voice overlay: written instructions replace spoken greetings

Both voice modes now show a written prompt in the overlay instead of playing a spoken greeting (which caused 1–4 s of latency before the mic was active):

  • Practice Speaking: "Start speaking — SOLA will listen and respond."
  • Pronunciation Practice: "Speak a word or sentence to practice your pronunciation. SOLA will give you feedback."

Chip renamed: ELL Pronunciation → Pronunciation Practice

More intuitive label that better describes what the tool does.

Voice chip order: Practice Speaking → Pronunciation Practice → Study Plan

Consistent ordering for voice-focused chips.

v1.0.18 — Reduced ELL & Practice Speaking latency

06 Mar 16:55

What's new in v1.0.18

Latency improvements

ELL Pronunciation

  • Removed the upfront AI-generated greeting — mic is now hot as soon as the WebSocket session is ready (~500–1000ms sooner). Just start speaking.

Practice Speaking

  • Removed the spoken greeting ("What would you like to talk about?") — mic starts immediately instead of waiting for TTS fetch + ~3s of audio

Both features

  • VAD silence detection: 1500ms → 750ms (response triggers faster after you stop speaking)
  • Recognition restart gap after TTS: 300ms → 50ms (much snappier turn-around between AI and user)
  • Token fetch timeout: 15s → 5s (faster failure on connectivity errors)

No schema changes.

v1.0.17 — Enhanced Quick Study + Personalised Welcome Back

06 Mar 16:07

What's new in v1.0.17

Quick Study

  • Learning objectives and module topics sub-selectors (matching Quiz Me)
  • Last 3 sessions shown at top with Resume buttons

Personalised Welcome Back

  • Proper assistant message on re-open (name, last topic, days ago, study stats, quiz score) instead of a bare chip
  • Action chips: Continue / Show my progress / Start something new

Progress Tracking

  • Show my progress chip opens a dedicated panel: study sessions + quiz history + clear button
  • Settings panel now includes a My Progress section (last 5 sessions + quizzes)
  • Continue/Show my progress/Start something new chips are handled as special actions (not sent as messages)

No schema changes.

v1.0.16 — TTS fix, typewriter animation, Quick Study chip

06 Mar 15:44

What's new in v1.0.16

Bug Fix: TTS Audio on iOS Chrome

  • Fixed: speaker button produced no audio on iOS Chrome
  • Root cause: audio.play() after fetch() loses user-gesture context on iOS
  • Fix: unlock a shared AudioContext synchronously on the button click, then decode and play via decodeAudioData, which plays reliably on iOS

Typewriter Animation

  • Chat responses now appear character-by-character as if typed out
  • Smooth scroll keeps the latest revealed text visible without snapping to bottom
  • Works on top of SSE streaming

Avatar Glow (replaces bounce)

  • Replaced vertical bounce animation with a soft pulse/glow ring on the toggle button while TTS is playing

Mobile Swipe Handle

  • Added a dedicated drag-handle bar above the header (mobile only)
  • Swipe-to-close now only triggers from that handle — no more accidental dismissal when scrolling messages

Quick Study Chip

  • New starter chip: Quick Study — now the first chip (top-left)
  • Time picker: 5 / 10 / 15 / 30 min
  • Topic selector: AI-guided (default), current page, course topics, or custom
  • Sends a focused timed study prompt to SOLA
  • Tracks study sessions in localStorage

Personalization

  • Records the current page/topic when you send a message
  • Shows a 'Continue where you left off' suggestion chip on re-open (within 7 days)
  • Surfaces your last quiz score for the same topic in the welcome-back context
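The 7-day window check is straightforward; a minimal sketch, assuming the last-activity timestamp is stored in milliseconds (names are illustrative, not the plugin's actual code):

```javascript
const WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // 7-day window described above

// Decide whether to show the "Continue where you left off" chip.
function shouldShowContinueChip(lastActivityMs, nowMs) {
  if (!lastActivityMs) return false; // no recorded session yet
  const age = nowMs - lastActivityMs;
  return age >= 0 && age <= WINDOW_MS;
}
```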

v1.0.15 — iOS drag fix, full 25-language translations, HTTPS voice guard

06 Mar 07:41

What's new

v1.0.15 — HTTPS guard for voice features

  • ELL Pronunciation and Practice Speaking now show a clear "HTTPS required" message on iOS Chrome (and any HTTP origin) instead of the cryptic "getUserMedia not supported" error. Voice features work normally on HTTPS.

v1.0.14 — Fix avatar drag on iOS mobile Chrome

  • The floating SOLA avatar button can now be dragged to reposition it on iOS Chrome.
  • Two bugs were fixed: the drag listener was never attached on mobile-width viewports, and iOS Chrome requires setPointerCapture() for reliable pointer event delivery during touch drags.

v1.0.13 — Fix accidental avatar movement on iOS

  • Pressing and holding the blue header bar no longer causes the round avatar button to float away from the drawer on iOS Chrome. The root cause was that on mobile the drawer is position:fixed independently of the root element, so drag events from the header only moved the toggle button.

v1.0.12 — Full 25-language translation pass

  • Added ~57 missing student-facing strings to all 25 non-English language files: quiz UI, conversation starters (Quiz Me, Explain This, Key Concepts, Study Plan, AI Help, Practice Speaking, ELL Pronunciation, AI Coach), topic picker, voice mode status messages, and settings panel labels.
  • Updated chat:greeting in all languages to use {$a} (student first name) and SOLA branding.
  • Corrected chat:title and chat:assistant to 'SOLA' in all languages (was "Tuteur IA", "AI Tutor", etc.).
  • Updated STARTER_LABELS in speech.js with ellPronunciation and aiCoach entries for all 43 supported languages.

v1.0.11 (previous release)

  • Language chip in hint bar for in-widget language switching
  • ELL connection error fixes and ~300ms response speed improvement

v1.0.9 — Mobile voice support, click/drag fixes, ELL intro

05 Mar 21:05

What's new

Bug fixes

  • Practice Speaking chip closes chat — fixed root click handler using composedPath() so chips removed from the DOM during event propagation no longer trigger the drawer to close
  • Avatar widget snaps back after drag — clicking the toggle no longer resets the widget position; inline styles are only cleared when no saved drag position exists
  • Avatar widget not opening — excluded the toggle button from the outside-click handler so clicking it reliably opens/closes the drawer

Mobile support

  • Practice Speaking on iOS/Chrome — added MediaRecorder + OpenAI Whisper fallback (transcribe.php) when the Web Speech API is unavailable; works on iOS Chrome, Firefox, and other browsers without SpeechRecognition
  • ELL Pronunciation on iOS — microphone is now requested inside the user-gesture click handler and passed directly to the WebSocket session, satisfying iOS/WKWebView's requirement that getUserMedia be initiated synchronously during a user gesture

ELL Pronunciation improvement

  • Spoken greeting — SOLA now introduces herself at the start of every ELL session: "I can help you with your pronunciation. Speak a word or sentence and I'll help!"

Installation

Upload ai_course_assistant.zip via Site admin → Plugins → Install plugins.