Onboarding banner changes #5
- Add PrefilterResult and QueryDecision enums with QueryRouteResult class
- Implement IntentPrefilter with two-stage keyword/regex classification
- Emergency patterns always evaluated first (cardiac, respiratory,
neurological, self-harm, toxicological, allergic, severe bleeding)
- Separate personal lab references from bare biomarker mentions so
educational queries ("what is cholesterol?") are not blocked for
users without imported reports
- Short/ambiguous lab terms (alt, ast, hb, iron, etc.) use word-boundary
regex to prevent substring false positives
- Medical unit pattern restricted to compound units (mg/dl, mmol/l, etc.)
to avoid false positives from "100% sure", "5g network", "500ml water"
- Hinglish possessives (mera, meri, mere, apna, apni) and question
patterns (kitna hai, kya hai mera) supported for Indian user base
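The two regex strategies above can be sketched as follows. This is a minimal Python illustration; the pattern names and term lists are stand-ins, not the app's actual Dart constants:

```python
import re

# Short/ambiguous lab terms require word boundaries so "alt" does not
# match inside "although" (term list is illustrative).
SHORT_LAB_TERM = re.compile(r"\b(alt|ast|hb|iron|tsh)\b", re.IGNORECASE)

# Medical units must be compound (number + slash unit) so "100% sure",
# "5g network", or "500ml water" do not trigger the lab filter.
MEDICAL_UNIT = re.compile(
    r"\b\d+(?:\.\d+)?\s*(?:mg/dl|mmol/l|g/dl|iu/l)\b", re.IGNORECASE
)

print(bool(SHORT_LAB_TERM.search("my alt is high")))        # True
print(bool(SHORT_LAB_TERM.search("although I feel fine")))  # False
print(bool(MEDICAL_UNIT.search("ldl 130 mg/dl")))           # True
print(bool(MEDICAL_UNIT.search("I am 100% sure")))          # False
```

The word boundary (`\b`) is what prevents substring false positives; the compound-unit requirement is what filters out bare quantities like "5g".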
- Add ResponseTemplates with deterministic copy for emergency escalation, off-topic refusal, and missing-lab-data guidance — these bypass the LLM entirely so response is instant and guaranteed safe
- Add QueryRouter combining prefilter result with lab-data availability to produce typed QueryRouteResult; class (not static) so Stage 2 can inject embedding classifier as constructor dependency
- Split AiPrompts into labAnalysisPrompt (with citation and lab-data instructions) and generalHealthPrompt (educational, no assumed values); systemPrompt kept as backward-compatible alias
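The routing matrix these bullets describe can be sketched as a small decision table. This is a hypothetical Python sketch, not the app's Dart API; the enum members and the `route()` return shape `(decision, requires_llm)` are assumptions:

```python
from enum import Enum, auto

class Prefilter(Enum):
    EMERGENCY = auto()
    OFF_TOPIC = auto()
    LAB = auto()
    HEALTH = auto()

class Decision(Enum):
    ESCALATE_EMERGENCY = auto()
    REFUSE_OFF_TOPIC = auto()
    ANSWER_WITH_LAB_CONTEXT = auto()
    EXPLAIN_MISSING_LAB = auto()
    ANSWER_GENERAL_HEALTH = auto()

def route(prefilter, has_lab_data):
    """Combine prefilter result with lab-data availability."""
    # Deterministic decisions bypass the LLM entirely (requires_llm=False).
    if prefilter is Prefilter.EMERGENCY:
        return Decision.ESCALATE_EMERGENCY, False
    if prefilter is Prefilter.OFF_TOPIC:
        return Decision.REFUSE_OFF_TOPIC, False
    if prefilter is Prefilter.LAB:
        if has_lab_data:
            return Decision.ANSWER_WITH_LAB_CONTEXT, True
        # Lab question but no imported reports: deterministic guidance.
        return Decision.EXPLAIN_MISSING_LAB, False
    return Decision.ANSWER_GENERAL_HEALTH, True
```

The second element mirrors the `requiresLlm` flag in the sequence diagram below: only the two bottom-right cells of the matrix ever reach the model.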
- Insert QueryRouter call in _sendMessage before context building; deterministic decisions (emergency, off-topic, missing-lab) return instantly without calling the LLM or building context
- Add _addDeterministicResponse helper for non-streaming assistant messages with mounted guard and session persistence
- Skip context building entirely for answerGeneralHealth decisions to avoid contradictory prompt/context signals and save semantic search
- Pass systemPromptOverride to generateResponse so lab-context queries use labAnalysisPrompt and general health queries use generalHealthPrompt
- Add systemPromptOverride parameter to llm_service generateResponse and _formatPrompt; null falls through to default systemPrompt unchanged
- Wrap routing block in try/catch so ObjectBox errors fall back to answerGeneralHealth instead of crashing _sendMessage
- Move all user-facing error messages, generation state labels, prompt formatting labels, and persistence warnings from llm_service and chat_screen into lib/constants/llm_strings.dart
- No hardcoded copy remains in service or screen files — all text that could appear in the UI lives in one reviewable, translatable location
- add StrictnessMode enum with build-time kPolicyMode constant
- add TokenBudgets with explicit ceilings for all context slots
- add IntentExamples with 45 canonical queries for embedding classifier
- add ValidationStrings with fallback messages for output validation
- add IntentClassifier for Stage 2 embedding-based classification using cosine similarity against pre-computed category centroids
- add OutputValidator with 5 post-generation quality checks: emptiness, hallucination, repetition, prohibited content, length
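Centroid-based classification with cosine similarity can be sketched as below. A minimal Python illustration with toy 3-d embeddings; the category names, vectors, and the 0.7 threshold are illustrative assumptions, not the classifier's tuned values:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pre-computed centroids: mean embedding of each category's canonical examples.
CENTROIDS = {
    "lab_interpretation": [0.9, 0.1, 0.0],
    "general_health": [0.1, 0.9, 0.0],
    "off_topic": [0.0, 0.1, 0.9],
}

def classify(embedding, threshold=0.7):
    best, score = max(
        ((label, cosine(embedding, c)) for label, c in CENTROIDS.items()),
        key=lambda pair: pair[1],
    )
    # Below the threshold the router treats the intent as ambiguous
    # and can ask a clarifying question instead of guessing.
    return (best, score) if score >= threshold else ("ambiguous", score)

label, confidence = classify([0.8, 0.2, 0.1])
print(label)  # lab_interpretation
```

In the real pipeline the query embedding comes from the embedding model, and the centroids are averaged from the 45 canonical IntentExamples queries, then cached to disk.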
- make QueryRouter async, accept IntentClassifier + strictness mode
- add confidence field to QueryRouteResult
- add conversation history (2-turn) to LLM prompt via ChatHistoryTurn
- integrate OutputValidator in chat screen onDone callback
- derive ChatContextBuilder budget from TokenBudgets
- add chest pressure/tightness to emergency patterns
- add history role labels and routing log string to LlmStrings
- 34 tests for IntentPrefilter: emergency, lab, off-topic, health, ambiguous, Hinglish, and false positive prevention
- 18 tests for QueryRouter: full routing matrix including spec eval cases
- 21 tests for OutputValidator: emptiness, hallucination, repetition, prohibited content, length, and fallback application
…ts cleanup
- add garbled ValidationResult with three heuristics: non-ASCII ratio, avg word length, vowel-word ratio
- add centroid disk caching to IntentClassifier with version-keyed invalidation via intent_centroids.json
- extract magic numbers from OutputValidator to TokenBudgets
- extract debug strings from IntentClassifier to LlmStrings
- add garbledFallback to ValidationStrings
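The three garbled-output heuristics can be sketched like this. Thresholds here are illustrative guesses, not the validator's tuned values:

```python
VOWELS = set("aeiouAEIOU")

def looks_garbled(text,
                  max_non_ascii_ratio=0.3,
                  max_avg_word_len=14,
                  min_vowel_word_ratio=0.5):
    """True if the model output looks like gibberish rather than prose."""
    if not text.strip():
        return True
    # Heuristic 1: too many non-ASCII characters suggests mojibake.
    non_ascii = sum(1 for ch in text if ord(ch) > 127)
    if non_ascii / len(text) > max_non_ascii_ratio:
        return True
    words = text.split()
    # Heuristic 2: real prose rarely averages very long "words".
    if sum(len(w) for w in words) / len(words) > max_avg_word_len:
        return True
    # Heuristic 3: real words almost always contain a vowel;
    # consonant runs like "bcdfg hjklm" usually don't.
    vowel_words = sum(1 for w in words if any(c in VOWELS for c in w))
    if vowel_words / len(words) < min_vowel_word_ratio:
        return True
    return False

print(looks_garbled("Your LDL level is within the normal range."))  # False
print(looks_garbled("xkcdq zzrtp bcdfg hjklm"))                     # True
```

A real validator would tune these thresholds against known-good outputs, since legitimate non-English text can trip the non-ASCII check.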
- accept conversationHistory in QueryRouter.route() for context-aware intent resolution
- ambiguous follow-ups after lab queries now promote to lab intent (e.g. "is that normal?" after "show my creatinine")
- likelyHealthQuery with hasLabData=true now routes to answerWithLabContext for richer answers with actual values
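History-aware promotion can be sketched as follows. Everything here is a hypothetical stand-in — the keyword list, the `is_ambiguous` heuristic, and the history shape are illustrative, not the router's actual logic:

```python
LAB_KEYWORDS = ("creatinine", "cholesterol", "hemoglobin", "lab")

def is_ambiguous(message):
    # Toy heuristic: short questions are treated as ambiguous follow-ups.
    return len(message.split()) <= 4 and "?" in message

def resolve_intent(message, history):
    """Promote an ambiguous follow-up to lab intent if the previous
    user turn was a lab query."""
    if is_ambiguous(message) and history:
        last_user = next((t for t in reversed(history) if t["is_user"]), None)
        if last_user and any(k in last_user["text"].lower() for k in LAB_KEYWORDS):
            return "lab"
    return "ambiguous" if is_ambiguous(message) else "health"

history = [
    {"is_user": True, "text": "show my creatinine"},
    {"is_user": False, "text": "Your creatinine is 1.1 mg/dL."},
]
print(resolve_intent("is that normal?", history))  # lab
```

Without history the same follow-up stays ambiguous, which is what would trigger a clarifying question instead of a guessed answer.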
- extract _startGeneration() with retry support: retries once on empty, garbled, or error before showing fallback
- pass conversation history to router for context-aware routing
- add garbled detection tests (non-ASCII, consonant gibberish, long words) and fallback test
- add history-aware routing tests (lab follow-up, no-history, non-lab history)
- add health+lab enrichment test
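The retry-once-then-fallback flow can be sketched generically. The function names (`generate`, `validate`, `fallback`) are stand-ins for the real service calls:

```python
def start_generation(generate, validate, fallback, max_attempts=2):
    """Run generation, retrying once on an invalid result before
    applying the fallback text."""
    output = ""
    for _ in range(max_attempts):
        output = generate()
        if validate(output):
            return output
    # All attempts produced empty/garbled/error output.
    return fallback(output)

attempts = []
def flaky():
    # First call fails (empty output), second succeeds.
    attempts.append(1)
    return "" if len(attempts) == 1 else "Creatinine 1.1 mg/dL is normal."

print(start_generation(flaky, lambda o: bool(o.strip()), lambda o: "fallback"))
# Creatinine 1.1 mg/dL is normal.
```

Keeping the loop generic means the same wrapper covers all three failure modes listed above — the validator decides what counts as invalid.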
…nloads
- add HfTokenService for persisting HF access tokens via SharedPreferences
- add authToken parameter to ModelDownloader.download() with Bearer header
- wire token injection into LlmService and LlmEmbeddingService download flows
- add HF token section in Settings with input field, save/clear, and info banner explaining common access issues (license acceptance, read permissions)
- add all UI strings to LlmStrings (no hardcoded copy)
- extract SettingsSectionHeader, SettingsRow, ModelPickerSection, EmbeddingModelTile, HfTokenTile, and CustomModelDialog into individual files under lib/widgets/settings/
- add barrel export via settings_widgets.dart
- settings_screen.dart reduced from ~1100 to ~300 lines (composition only)
- Separate ObjectBox lookup from query routing so storage errors no longer bypass emergency/off-topic detection (safety-critical)
- Guard Bearer token to trusted Hugging Face hosts only, preventing token leakage to arbitrary download URLs
- Tighten personal-lab regex patterns to require medical terms after possessive pronouns, fixing false positives like "is my code correct"
- Make QueryRouter resolve the intent classifier lazily via getter so Stage 2 embedding classification activates once loaded, not only if ready at screen init
- Wrap HNSW query in try/finally to prevent resource leak on error
- Fix emergency number to list both 911/112 for multi-region support
- Correct "HuggingFace" brand spelling to "Hugging Face"
- Remove unused routingFailedLog constant
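The Bearer-token guard can be sketched like this. The suffix-matching approach (also suggested later in the review) is one way to cover regional CDN hosts; the exact host set is an assumption:

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"huggingface.co", "hf.co"}

def auth_headers(url, token):
    """Attach the HF token only when the URL resolves to a trusted host,
    so the token is never leaked to arbitrary download URLs."""
    host = (urlparse(url).hostname or "").lower()
    trusted = (
        host in TRUSTED_HOSTS
        or host.endswith(".huggingface.co")  # e.g. cdn-lfs-eu-1.huggingface.co
        or host.endswith(".hf.co")           # e.g. cdn-lfs-us-1.hf.co
    )
    return {"Authorization": f"Bearer {token}"} if trusted else {}

print(auth_headers("https://huggingface.co/model/q.gguf", "hf_x"))
print(auth_headers("https://evil.example.com/q.gguf", "hf_x"))  # {}
```

Note `endswith(".hf.co")` cannot be spoofed by a host like `hf.co.attacker.com`, since the suffix check anchors at the end of the hostname.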
Stage 2 classifier results for emergencyDetected were grouped with ambiguous/health intents and routed to answerGeneralHealth. If Stage 1 regex missed an emergency but Stage 2 embeddings caught it, the user would get a general health response instead of the safety escalation message with emergency contact numbers.
The info banner used secondaryContainer which rendered as a blue box that clashed with the app's neutral grey theme. Switch to surfaceContainerLow background with onSurfaceVariant text and icon.
Replace plain icon-in-circle banners with rich composed illustrations:
- Welcome: phone mockup with shield, floating "DATA PRIVACY" and "OFFLINE MODE" badges, "PRIVACY REIMAGINED" pill
- How it Works: document card with PDF header, magnifying glass overlay, feature list with icon rows (parsing, extraction, offline)
- Privacy: card with verified shield, lock indicators on data rows, floating "LOCAL STORAGE" and "ENCRYPTED" badges

Update copy to match Stitch mockup text, add contextual CTA labels per page, and bottom badge chips for trust signals.
📝 Walkthrough

This PR introduces a multi-stage query routing and intent classification system for an LLM-based health assistant. It adds deterministic prefiltering, optional embedding-based classification, structured routing decisions, comprehensive output validation with token budgeting, HF token management, and integrations throughout the chat and settings UI layers.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant ChatScreen
    participant QueryRouter
    participant IntentPrefilter
    participant IntentClassifier
    participant LlmService
    participant OutputValidator
    participant ResponseDB
    User->>ChatScreen: Send query
    ChatScreen->>ChatScreen: Build conversation history
    ChatScreen->>ResponseDB: Check for lab data
    ChatScreen->>QueryRouter: route(message, hasLabData, history)
    QueryRouter->>IntentPrefilter: classify(message)
    IntentPrefilter-->>QueryRouter: PrefilterResult
    alt Emergency or Off-Topic
        QueryRouter-->>ChatScreen: QueryRouteResult (deterministic response)
        ChatScreen->>ChatScreen: Show response, requiresLlm=false
    else Ambiguous Intent
        QueryRouter->>IntentClassifier: classify(message)
        IntentClassifier-->>QueryRouter: ClassificationResult (confidence)
        alt High Confidence
            QueryRouter-->>ChatScreen: QueryRouteResult (decision, requiresLlm)
        else Low Confidence
            QueryRouter-->>ChatScreen: QueryRouteResult (askClarifyingQuestion)
            ChatScreen->>ChatScreen: Show clarification, requiresLlm=false
        end
    else Lab/Health Query
        QueryRouter-->>ChatScreen: QueryRouteResult (decision + system prompt override)
    end
    alt requiresLlm=true
        ChatScreen->>LlmService: generateResponse(message, context, history, systemPromptOverride)
        LlmService-->>ChatScreen: LLM output
        ChatScreen->>OutputValidator: validate(output, labContext)
        OutputValidator-->>ChatScreen: ValidationResult
        alt Validation Failed & Retries Remaining
            ChatScreen->>LlmService: generateResponse(...) [retry]
        else Validation Failed & Max Retries Exceeded
            ChatScreen->>OutputValidator: applyFallback(result, output)
            OutputValidator-->>ChatScreen: Fallback text
        else Validation Passed
            ChatScreen->>ChatScreen: Append citations if docs exist
        end
    end
    ChatScreen->>User: Display response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🧹 Nitpick comments (10)
lib/screens/onboarding_screen.dart (1)
111-115: Consider coupling button labels with page data.

The `buttonLabels` array must stay in sync with `_pages.length`. If a page is added or removed, forgetting to update this array will cause an index-out-of-bounds crash at runtime.

♻️ Suggested refactor: move label into _OnboardingPageData
```diff
 class _OnboardingPageData {
   final Widget illustration;
   final String title;
   final String subtitle;
   final List<String>? badges;
   final List<_FeatureItem>? featureItems;
+  final String buttonLabel;

   const _OnboardingPageData({
     required this.illustration,
     required this.title,
     required this.subtitle,
+    required this.buttonLabel,
     this.badges,
     this.featureItems,
   });
 }
```

Then in the build method:
```diff
-  final buttonLabels = [
-    'See How it Works',
-    'Next: Privacy First',
-    'Start My Journey',
-  ];
   ...
   Text(
-    buttonLabels[_currentPage],
+    _pages[_currentPage].buttonLabel,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/screens/onboarding_screen.dart` around lines 111 - 115, buttonLabels is a separate array that must match _pages.length, which causes a runtime index error if pages change; move the label into the page model and stop using the separate buttonLabels list by adding a String label field to _OnboardingPageData (e.g., label), populate each _pages entry with its label, and update the build method to read the button text from _pages[currentIndex].label (or similar) instead of buttonLabels to guarantee labels stay coupled to page data.

lib/widgets/settings/embedding_model_tile.dart (1)
104-131: Consider adding spacing between action buttons.

When multiple buttons could theoretically be visible (though states are mutually exclusive), they would render adjacent without spacing. Consider wrapping with spacing or using `Wrap` with `spacing` for cleaner layout if state transitions could briefly show multiple elements.

💡 Optional: Add spacing between buttons
```diff
 Row(
   mainAxisAlignment: MainAxisAlignment.end,
+  spacing: KoshikaSpacing.sm,
   children: [
     if (modelInfo.canDownload)
```

Note: `Row.spacing` requires Flutter 3.27+. Alternatively, wrap buttons in `Padding` or use `Wrap(spacing: ...)`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/widgets/settings/embedding_model_tile.dart` around lines 104 - 131, The Row containing the action buttons (the block using Row with FilledButton.icon, FilledButton.tonal and OutlinedButton) can render adjacent buttons during quick state transitions; add spacing between the buttons by replacing or wrapping the children with a spacing solution (e.g., use Wrap(spacing: <n>, runSpacing: <n>) around the buttons or insert SizedBox(width: <n>) paddings between the FilledButton.icon, FilledButton.tonal and OutlinedButton) so the buttons for modelInfo.canDownload / modelInfo.canLoad / modelInfo.isUsable never appear glued together; keep the conditional checks (modelInfo.status...) for the CircularProgressIndicator unchanged.

lib/services/llm_embedding_service.dart (1)
61-62: Update stale comment to match current behavior.

Line 61 still says no HF token is needed, but Line 84 and Line 88 now fetch and pass `authToken`. Please sync the section comment to avoid future maintenance confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/llm_embedding_service.dart` around lines 61 - 62, The section comment stating "DOWNLOAD — no HF token needed" is now inaccurate because the code fetches and passes authToken (variable authToken) later; update that comment to reflect that an auth token may be used (e.g., "DOWNLOAD — may use HF authToken when available") so it matches the behavior where authToken is fetched/passed in the download flow.

lib/services/llm_service.dart (1)
360-370: Consider enforcing the max history length mentioned in the docs.

The doc comment on lines 272-273 states "up to 2 prior turns (1 user + 1 assistant)", but `_formatPrompt` injects all turns from `conversationHistory` without enforcement. If a caller passes more turns, prompt length could grow unbounded, potentially exceeding context limits or degrading model performance.

♻️ Suggested fix to enforce history limit
```diff
 // Prior conversation turns (max 2: 1 user + 1 assistant)
 if (conversationHistory != null) {
-  for (final turn in conversationHistory) {
+  // Take only the last 2 turns to stay within documented limit
+  final recentHistory = conversationHistory.length > 2
+      ? conversationHistory.sublist(conversationHistory.length - 2)
+      : conversationHistory;
+  for (final turn in recentHistory) {
     final role = turn.isUser
         ? LlmStrings.historyUserRole
         : LlmStrings.historyAssistantRole;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/llm_service.dart` around lines 360 - 370, The _formatPrompt function currently iterates over all entries in conversationHistory; change it to only include up to the last two turns (one user + one assistant) to enforce the doc comment. When building the history block in _formatPrompt, take the tail of conversationHistory (e.g., last up to 2 items) rather than the entire list, preserve the original chronological order when writing each turn, and then write roles using LlmStrings.historyUserRole / LlmStrings.historyAssistantRole as before; this ensures the prompt's history section is bounded and prevents unbounded prompt growth.

lib/widgets/settings/settings_row.dart (1)
60-66: Consider using theme typography for the subtitle.

The subtitle uses a hardcoded `fontSize: 12` while the title relies on `TextStyle` properties. For consistency with the design system, consider using theme typography.

♻️ Optional: Use theme typography
```diff
 if (subtitle != null)
   Text(
     subtitle!,
-    style: const TextStyle(
-      fontSize: 12,
-      color: AppColors.textSecondary,
-    ),
+    style: Theme.of(context).textTheme.bodySmall?.copyWith(
+      color: AppColors.textSecondary,
+    ),
   ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/widgets/settings/settings_row.dart` around lines 60 - 66, The subtitle Text uses a hardcoded TextStyle (fontSize: 12) — replace it with the app theme typography (e.g., Theme.of(context).textTheme.bodySmall or subtitle2) to ensure consistency; update the Text call in settings_row.dart (the Text(subtitle!, style: const TextStyle(...)) line) to use a non-const style pulled from Theme.of(context).textTheme and, if needed, merge in the existing color (AppColors.textSecondary) via .copyWith(color: ...) so you preserve color while adopting theme typography.

lib/services/model_downloader.dart (1)
193-204: Add missing regional CDN hosts and use suffix-based matching for HF CDN domains.

The current allowlist is incomplete. Based on Hugging Face infrastructure, missing hosts include:

- `cdn-lfs-eu-1.huggingface.co` (EU region)
- All `.hf.co` CDN variants: `cdn-lfs.hf.co`, `cdn-lfs-us-1.hf.co`, `cdn-lfs-eu-1.hf.co`

This causes auth failures for users in unsupported regions. Switch to suffix-based matching instead of hardcoding each variant:
♻️ Suggested fix using suffix matching
```diff
 static bool _isTrustedHfHost(String host) {
   final normalized = host.toLowerCase();
   const trustedHosts = {
     'huggingface.co',
     'hf.co',
   };
-  return trustedHosts.contains(normalized);
+  if (trustedHosts.contains(normalized)) return true;
+  // Match any cdn-lfs*.huggingface.co or cdn-lfs*.hf.co subdomain
+  if ((normalized.endsWith('.huggingface.co') ||
+          normalized.endsWith('.hf.co')) &&
+      normalized.startsWith('cdn-lfs')) {
+    return true;
+  }
+  return false;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/model_downloader.dart` around lines 193 - 204, The _isTrustedHfHost function currently hardcodes a small set of HF hosts; update it to accept regional CDN variants by switching to suffix-based matching: replace the trustedHosts set with checks that return true if the normalized host equals "huggingface.co" or "hf.co" or endsWith(".huggingface.co") or endsWith(".hf.co") (this will cover cdn-lfs, cdn-lfs-us-1, cdn-lfs-eu-1 and any other regional cdn-lfs.* variants); modify the logic in _isTrustedHfHost to use these equality/endsWith checks against the host variable so all regional CDN hosts are allowed.

lib/widgets/settings/model_picker_section.dart (1)
60-71: Consider making the custom model config a const.

The `LlmModelConfig` for the custom model option is recreated on every build. Since all values are compile-time constants, this can be a `const`:

Extract to const
```diff
+static const _customModelConfig = LlmModelConfig(
+  id: 'custom',
+  name: 'Custom model',
+  downloadUrl: '',
+  estimatedSizeMB: 0,
+  description: 'Use any GGUF URL',
+  isCustom: true,
+);

 // In build method:
 _ModelOptionCard(
-  config: LlmModelConfig(
-    id: 'custom',
-    name: 'Custom model',
-    downloadUrl: '',
-    estimatedSizeMB: 0,
-    description: 'Use any GGUF URL',
-    isCustom: true,
-  ),
+  config: _customModelConfig,
   isSelected: currentConfig.isCustom,
   onTap: onCustomModelTap,
 ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/widgets/settings/model_picker_section.dart` around lines 60 - 71, The LlmModelConfig instance passed to _ModelOptionCard is recreated on every build even though all fields are compile-time constants; make that instance a const by changing the LlmModelConfig(...) to const LlmModelConfig(...) so the widget tree can benefit from const construction (refer to _ModelOptionCard, LlmModelConfig, currentConfig.isCustom, and onCustomModelTap to locate the usage).

lib/services/output_validator.dart (2)
262-268: Minor: Redundant lowercase conversion.

The output is lowercased at line 263, but the regex patterns already use `caseSensitive: false`. The lowercase conversion is harmless but unnecessary.

Optional simplification
```diff
 static bool _hasProhibitedContent(String output) {
-  final lower = output.toLowerCase();
   for (final pattern in _prohibitedPatterns) {
-    if (pattern.hasMatch(lower)) return true;
+    if (pattern.hasMatch(output)) return true;
   }
   return false;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/output_validator.dart` around lines 262 - 268, The helper _hasProhibitedContent currently lowercases the input into the local variable lower even though the regexes in _prohibitedPatterns are created with caseSensitive: false; remove the redundant lowercase conversion and use the original output string (pass output directly to pattern.hasMatch) and eliminate the now-unused local variable lower to simplify the function.
128-146: Consider normalizing numeric strings for comparison.

The string-based number comparison may cause false positives when the same numeric value is formatted differently (e.g., `"1.8"` vs `"1.80"`, or `"0.7"` vs `".7"`). Consider parsing to `double` for comparison:

Proposed approach
```diff
 for (final value in outputValues) {
-  // Check if this value appears in context (exact match on the number)
-  final number = value.number;
-  if (!contextNumbers.contains(number) &&
-      !contextValues.any((cv) => cv.number == number)) {
+  // Check if this value appears in context (numeric comparison)
+  final parsedNumber = double.tryParse(value.number);
+  if (parsedNumber == null) continue;
+
+  final inContext = contextNumbers.any((cn) =>
+          double.tryParse(cn) == parsedNumber) ||
+      contextValues.any((cv) =>
+          double.tryParse(cv.number) == parsedNumber);
+  if (!inContext) {
     return true;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/output_validator.dart` around lines 128 - 146, The comparison in _hasHallucinatedValues uses string equality which fails for equivalent numeric formats; update it to parse numeric strings to doubles (using double.tryParse) and compare numeric equality with a small epsilon (e.g., 1e-6). Specifically, convert contextNumbers and contextValues.map((cv)=>cv.number) into a Set<double> of parsed values (skip or fall back to original string if parsing fails), then for each output value (value.number) parse to double and check membership against that Set using fabs(diff) < epsilon; if parsing fails, fall back to the existing string comparisons to avoid dropping non-numeric tokens. Ensure the changes are applied inside _hasHallucinatedValues and reference helpers _extractMedicalValues and _extractBareNumbers as the sources of the raw strings.

lib/services/intent_classifier.dart (1)
254-259: Cache versioning only detects count changes, not content changes.

The version is computed from example counts (`length` sum). If example text is modified without changing the count, the cache won't be invalidated and stale centroids will be used. Consider hashing the example content:

Content-based versioning
```diff
+import 'dart:convert' show utf8;
+import 'package:crypto/crypto.dart' show sha256;

 /// Cache version derived from the hash of all canonical examples.
-static int get _cacheVersion =>
-    IntentExamples.labInterpretation.length +
-    IntentExamples.generalHealthEducation.length +
-    IntentExamples.offTopic.length;
+static String get _cacheVersion {
+  final allExamples = [
+    ...IntentExamples.labInterpretation,
+    ...IntentExamples.generalHealthEducation,
+    ...IntentExamples.offTopic,
+  ].join('|');
+  return sha256.convert(utf8.encode(allExamples)).toString().substring(0, 16);
+}
```

Note: This would require updating the version field type from `int` to `String` in the JSON schema.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/intent_classifier.dart` around lines 254 - 259, The current _cacheVersion getter only sums list lengths so content edits don't invalidate caches; replace it with a content-based version by hashing the example texts from IntentExamples.labInterpretation, IntentExamples.generalHealthEducation, and IntentExamples.offTopic (e.g., concatenate canonical example strings in a stable order and compute a SHA-256 or similar hex digest) and return that digest as the version; update any code expecting an int version and change the persisted JSON schema field type from int to String to store the hash version. Ensure the getter name _cacheVersion and all places that read/write the version are updated to accept the new String and that the concatenation is deterministic (stable ordering) to avoid spurious changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/models/strictness_mode.dart`:
- Around line 5-17: The enum StrictnessMode defines healthPlusWellness but
_applyStrictnessMode in query_router.dart only checks for
StrictnessMode.labOnly, so healthPlusWellness currently behaves like
strictHealthOnly; update _applyStrictnessMode to explicitly handle
StrictnessMode.healthPlusWellness (either implement the intended broader
wellness-allowing filters/conditions or, if not yet implemented, add a clear
TODO comment and a dedicated branch so the intent is explicit). Touch the
StrictnessMode enum reference and the _applyStrictnessMode method to add the new
branch, document the behavior in a comment, and ensure unit tests or guards
reflect the new case.
In `@lib/screens/onboarding_screen.dart`:
- Around line 206-216: The TextButton with label 'View Data Policy' currently
has an empty onPressed and must be wired up; update the onPressed in the widget
(the TextButton inside the isLast branch) to perform a real action such as
navigating to your data policy UI (e.g., Navigator.push to a DataPolicyScreen or
pushing a route named '/data-policy') or opening an external URL using a
launcher (e.g., launchUrl with the policy link). Ensure you handle errors/absent
URL with a fallback (snackbar or dialog) so the button always gives feedback.
In `@lib/widgets/settings/hf_token_tile.dart`:
- Around line 28-62: Wrap the storage calls in _loadToken, _saveToken, and
_clearToken (where HfTokenService.getToken and HfTokenService.setToken are
awaited) in try/catch/finally so exceptions cannot leave the widget in a broken
state; on error catch the exception, ensure you still update UI state (e.g., set
_loading = false in _loadToken and reset/local state in _saveToken/_clearToken)
inside mounted checks, and surface a user-visible message via
ScaffoldMessenger.of(context).showSnackBar with a brief error text (include
exception.message if available) so failures are reported instead of silently
breaking the tile.
---
Nitpick comments:
In `@lib/screens/onboarding_screen.dart`:
- Around line 111-115: buttonLabels is a separate array that must match
_pages.length, which causes a runtime index error if pages change; move the
label into the page model and stop using the separate buttonLabels list by
adding a String label field to _OnboardingPageData (e.g., label), populate each
_pages entry with its label, and update the build method to read the button text
from _pages[currentIndex].label (or similar) instead of buttonLabels to
guarantee labels stay coupled to page data.
In `@lib/services/intent_classifier.dart`:
- Around line 254-259: The current _cacheVersion getter only sums list lengths
so content edits don't invalidate caches; replace it with a content-based
version by hashing the example texts from IntentExamples.labInterpretation,
IntentExamples.generalHealthEducation, and IntentExamples.offTopic (e.g.,
concatenate canonical example strings in a stable order and compute a SHA-256 or
similar hex digest) and return that digest as the version; update any code
expecting an int version and change the persisted JSON schema field type from
int to String to store the hash version. Ensure the getter name _cacheVersion
and all places that read/write the version are updated to accept the new String
and that the concatenation is deterministic (stable ordering) to avoid spurious
changes.
In `@lib/services/llm_embedding_service.dart`:
- Around line 61-62: The section comment stating "DOWNLOAD — no HF token needed"
is now inaccurate because the code fetches and passes authToken (variable
authToken) later; update that comment to reflect that an auth token may be used
(e.g., "DOWNLOAD — may use HF authToken when available") so it matches the
behavior where authToken is fetched/passed in the download flow.
In `@lib/services/llm_service.dart`:
- Around line 360-370: The _formatPrompt function currently iterates over all
entries in conversationHistory; change it to only include up to the last two
turns (one user + one assistant) to enforce the doc comment. When building the
history block in _formatPrompt, take the tail of conversationHistory (e.g., last
up to 2 items) rather than the entire list, preserve the original chronological
order when writing each turn, and then write roles using
LlmStrings.historyUserRole / LlmStrings.historyAssistantRole as before; this
ensures the prompt’s history section is bounded and prevents unbounded prompt
growth.
In `@lib/services/model_downloader.dart`:
- Around line 193-204: The _isTrustedHfHost function currently hardcodes a small
set of HF hosts; update it to accept regional CDN variants by switching to
suffix-based matching: replace the trustedHosts set with checks that return true
if the normalized host equals "huggingface.co" or "hf.co" or
endsWith(".huggingface.co") or endsWith(".hf.co") (this will cover cdn-lfs,
cdn-lfs-us-1, cdn-lfs-eu-1 and any other regional cdn-lfs.* variants); modify
the logic in _isTrustedHfHost to use these equality/endsWith checks against the
host variable so all regional CDN hosts are allowed.
In `@lib/services/output_validator.dart`:
- Around line 262-268: The helper _hasProhibitedContent currently lowercases the
input into the local variable lower even though the regexes in
_prohibitedPatterns are created with caseSensitive: false; remove the redundant
lowercase conversion and use the original output string (pass output directly to
pattern.hasMatch) and eliminate the now-unused local variable lower to simplify
the function.
- Around line 128-146: The comparison in _hasHallucinatedValues uses string
equality which fails for equivalent numeric formats; update it to parse numeric
strings to doubles (using double.tryParse) and compare numeric equality with a
small epsilon (e.g., 1e-6). Specifically, convert contextNumbers and
contextValues.map((cv)=>cv.number) into a Set<double> of parsed values (skip or
fall back to original string if parsing fails), then for each output value
(value.number) parse to double and check membership against that Set using
fabs(diff) < epsilon; if parsing fails, fall back to the existing string
comparisons to avoid dropping non-numeric tokens. Ensure the changes are applied
inside _hasHallucinatedValues and reference helpers _extractMedicalValues and
_extractBareNumbers as the sources of the raw strings.
In `@lib/widgets/settings/embedding_model_tile.dart`:
- Around line 104-131: The Row containing the action buttons (the block using
Row with FilledButton.icon, FilledButton.tonal and OutlinedButton) can render
adjacent buttons during quick state transitions; add spacing between the buttons
by replacing or wrapping the children with a spacing solution (e.g., use
Wrap(spacing: <n>, runSpacing: <n>) around the buttons or insert SizedBox(width:
<n>) paddings between the FilledButton.icon, FilledButton.tonal and
OutlinedButton) so the buttons for modelInfo.canDownload / modelInfo.canLoad /
modelInfo.isUsable never appear glued together; keep the conditional checks
(modelInfo.status...) for the CircularProgressIndicator unchanged.
In `@lib/widgets/settings/model_picker_section.dart`:
- Around line 60-71: The LlmModelConfig instance passed to _ModelOptionCard is
recreated on every build even though all fields are compile-time constants; make
that instance a const by changing the LlmModelConfig(...) to const
LlmModelConfig(...) so the widget tree can benefit from const construction
(refer to _ModelOptionCard, LlmModelConfig, currentConfig.isCustom, and
onCustomModelTap to locate the usage).
In `@lib/widgets/settings/settings_row.dart`:
- Around line 60-66: The subtitle Text uses a hardcoded TextStyle (fontSize: 12)
— replace it with the app theme typography (e.g.,
Theme.of(context).textTheme.bodySmall or subtitle2) to ensure consistency;
update the Text call in settings_row.dart (the Text(subtitle!, style: const
TextStyle(...)) line) to use a non-const style pulled from
Theme.of(context).textTheme and, if needed, merge in the existing color
(AppColors.textSecondary) via .copyWith(color: ...) so you preserve color while
adopting theme typography.
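A sketch of the theme-based replacement (bodySmall is the Material 3 style closest to a 12pt caption; Colors.grey stands in for AppColors.textSecondary, which is app-specific):

```dart
import 'package:flutter/material.dart';

/// Hypothetical sketch: derive the subtitle style from the theme's typography
/// and merge the existing secondary color back in via copyWith.
class SubtitleText extends StatelessWidget {
  const SubtitleText(this.subtitle, {super.key});

  final String subtitle;

  @override
  Widget build(BuildContext context) {
    // bodySmall replaces the hardcoded TextStyle(fontSize: 12); copyWith
    // preserves the secondary color while adopting theme typography.
    return Text(
      subtitle,
      style: Theme.of(context)
          .textTheme
          .bodySmall
          ?.copyWith(color: Colors.grey), // stand-in for AppColors.textSecondary
    );
  }
}
```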
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e17eafd2-db73-44f6-b3fa-dd076cbc56f3
📒 Files selected for processing (31)
- lib/constants/ai_prompts.dart
- lib/constants/intent_examples.dart
- lib/constants/llm_strings.dart
- lib/constants/response_templates.dart
- lib/constants/token_budgets.dart
- lib/constants/validation_strings.dart
- lib/models/query_decision.dart
- lib/models/strictness_mode.dart
- lib/screens/chat_screen.dart
- lib/screens/onboarding_screen.dart
- lib/screens/settings_screen.dart
- lib/services/chat_context_builder.dart
- lib/services/hf_token_service.dart
- lib/services/intent_classifier.dart
- lib/services/intent_prefilter.dart
- lib/services/llm_embedding_service.dart
- lib/services/llm_service.dart
- lib/services/model_downloader.dart
- lib/services/output_validator.dart
- lib/services/query_router.dart
- lib/services/vector_store_service.dart
- lib/widgets/settings/custom_model_dialog.dart
- lib/widgets/settings/embedding_model_tile.dart
- lib/widgets/settings/hf_token_tile.dart
- lib/widgets/settings/model_picker_section.dart
- lib/widgets/settings/section_header.dart
- lib/widgets/settings/settings_row.dart
- lib/widgets/settings/settings_widgets.dart
- test/services/intent_prefilter_test.dart
- test/services/output_validator_test.dart
- test/services/query_router_test.dart
enum StrictnessMode {
  /// Lab interpretation + general health education. Blocks everything else.
  /// This is the default launch mode.
  strictHealthOnly,

  /// Only questions directly tied to the user's lab data.
  /// Blocks even general health education.
  labOnly,

  /// Broader lifestyle and wellness guidance on top of [strictHealthOnly].
  /// Still blocks unrelated general knowledge.
  healthPlusWellness,
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check how StrictnessMode values are used across the codebase
# Search for all usages of StrictnessMode enum values
rg -n 'StrictnessMode\.' --type=dart -C2
# Search for switch/if statements involving kPolicyMode
rg -n 'kPolicyMode' --type=dart -C3
Repository: priyavratuniyal/koshika
Length of output: 2585
Implement distinct handling for healthPlusWellness mode or mark it as future work.
The healthPlusWellness enum value is defined with documentation describing "broader lifestyle and wellness guidance," but the _applyStrictnessMode method in query_router.dart only checks for StrictnessMode.labOnly. This means healthPlusWellness currently behaves identically to strictHealthOnly (queries pass through unchanged).
Either implement the distinct logic for healthPlusWellness (e.g., allowing additional wellness query types) or add a // TODO comment if this mode is reserved for future use.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/models/strictness_mode.dart` around lines 5-17: the enum StrictnessMode
defines healthPlusWellness but _applyStrictnessMode in query_router.dart only
checks for StrictnessMode.labOnly, so healthPlusWellness currently behaves like
strictHealthOnly; update _applyStrictnessMode to explicitly handle
StrictnessMode.healthPlusWellness (either implement the intended broader
wellness-allowing filters/conditions or, if not yet implemented, add a clear
TODO comment and a dedicated branch so the intent is explicit). Touch the
StrictnessMode enum reference and the _applyStrictnessMode method to add the new
branch, document the behavior in a comment, and ensure unit tests or guards
reflect the new case.
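To make the missing branch explicit as suggested, _applyStrictnessMode could be sketched as an exhaustive switch (the Decision enum and the isGeneralHealth/isWellness flags are illustrative stand-ins for the router's real types, and the healthPlusWellness branch is the TODO placeholder the review asks for):

```dart
// Hypothetical sketch: an exhaustive switch makes every mode's behavior
// explicit, so a new enum value cannot silently fall through to the default.
enum StrictnessMode { strictHealthOnly, labOnly, healthPlusWellness }

enum Decision { allow, block }

Decision applyStrictnessMode(
  StrictnessMode mode, {
  required bool isGeneralHealth,
  required bool isWellness,
}) {
  switch (mode) {
    case StrictnessMode.labOnly:
      // Block even general health education.
      return (isGeneralHealth || isWellness) ? Decision.block : Decision.allow;
    case StrictnessMode.strictHealthOnly:
      // Lab interpretation + general health education only.
      return isWellness ? Decision.block : Decision.allow;
    case StrictnessMode.healthPlusWellness:
      // TODO: broaden the allowed set once wellness intents are classified.
      // A dedicated branch documents the intent instead of silently
      // behaving like strictHealthOnly.
      return Decision.allow;
  }
}

void main() {
  assert(applyStrictnessMode(StrictnessMode.labOnly,
          isGeneralHealth: true, isWellness: false) ==
      Decision.block);
  assert(applyStrictnessMode(StrictnessMode.healthPlusWellness,
          isGeneralHealth: false, isWellness: true) ==
      Decision.allow);
}
```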
if (isLast)
  Padding(
    padding: const EdgeInsets.only(bottom: KoshikaSpacing.lg),
    child: TextButton(
      onPressed: () {},
      child: Text(
        'View Data Policy',
        style: TextStyle(color: AppColors.textSecondary),
      ),
    ),
  ),
"View Data Policy" button has no implementation.
The onPressed: () {} callback does nothing, leaving users with a non-functional button. This should either navigate to a data policy screen, open a URL, or at minimum show a placeholder message.
🛠️ Minimal placeholder implementation
child: TextButton(
- onPressed: () {},
+ onPressed: () {
+ ScaffoldMessenger.of(context).showSnackBar(
+ const SnackBar(content: Text('Data Policy coming soon')),
+ );
+ },
  child: Text(
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (isLast)
  Padding(
    padding: const EdgeInsets.only(bottom: KoshikaSpacing.lg),
    child: TextButton(
      onPressed: () {
        ScaffoldMessenger.of(context).showSnackBar(
          const SnackBar(content: Text('Data Policy coming soon')),
        );
      },
      child: Text(
        'View Data Policy',
        style: TextStyle(color: AppColors.textSecondary),
      ),
    ),
  ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/screens/onboarding_screen.dart` around lines 206-216: the TextButton
with label 'View Data Policy' currently has an empty onPressed and must be wired
up; update the onPressed in the widget (the TextButton inside the isLast branch)
to perform a real action such as navigating to your data policy UI (e.g.,
Navigator.push to a DataPolicyScreen or pushing a route named '/data-policy') or
opening an external URL using a launcher (e.g., launchUrl with the policy link).
Ensure you handle errors/absent URL with a fallback (snackbar or dialog) so the
button always gives feedback.
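One possible wiring, as a sketch (the '/data-policy' route name and the snackbar fallback text are illustrative choices, not part of the app; the TextButton's onPressed would then call () => openDataPolicy(context)):

```dart
import 'package:flutter/material.dart';

/// Hypothetical sketch: navigate to an in-app data policy route, and fall
/// back to a snackbar so the button always gives feedback.
Future<void> openDataPolicy(BuildContext context) async {
  try {
    // Assumes a '/data-policy' named route is registered in MaterialApp.routes.
    await Navigator.of(context).pushNamed('/data-policy');
  } catch (_) {
    if (!context.mounted) return;
    ScaffoldMessenger.of(context).showSnackBar(
      const SnackBar(content: Text('Data Policy is not available yet')),
    );
  }
}
```

An external policy URL could be opened the same way via the url_launcher package's launchUrl, with the same snackbar fallback when launching fails.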
Future<void> _loadToken() async {
  final token = await HfTokenService.getToken();
  if (mounted) {
    setState(() {
      _hasToken = token != null;
      if (token != null) _controller.text = token;
      _loading = false;
    });
  }
}

Future<void> _saveToken() async {
  final token = _controller.text.trim();
  if (token.isEmpty) return;
  await HfTokenService.setToken(token);
  if (mounted) {
    setState(() => _hasToken = true);
    ScaffoldMessenger.of(
      context,
    ).showSnackBar(const SnackBar(content: Text(LlmStrings.hfTokenSaved)));
  }
}

Future<void> _clearToken() async {
  await HfTokenService.setToken(null);
  if (mounted) {
    setState(() {
      _hasToken = false;
      _controller.clear();
    });
    ScaffoldMessenger.of(
      context,
    ).showSnackBar(const SnackBar(content: Text(LlmStrings.hfTokenCleared)));
  }
}
Handle token storage exceptions to avoid broken UI state.
At Line 29, Line 42, and Line 52, storage operations are awaited without try/catch. A thrown error can leave the tile in a bad state (e.g., perpetual loading or no feedback after save/clear failure).
💡 Proposed fix
Future<void> _loadToken() async {
- final token = await HfTokenService.getToken();
- if (mounted) {
- setState(() {
- _hasToken = token != null;
- if (token != null) _controller.text = token;
- _loading = false;
- });
- }
+ try {
+ final token = await HfTokenService.getToken();
+ if (!mounted) return;
+ setState(() {
+ _hasToken = token != null;
+ if (token != null) _controller.text = token;
+ _loading = false;
+ });
+ } catch (_) {
+ if (!mounted) return;
+ setState(() => _loading = false);
+ }
}
Future<void> _saveToken() async {
final token = _controller.text.trim();
if (token.isEmpty) return;
- await HfTokenService.setToken(token);
- if (mounted) {
- setState(() => _hasToken = true);
- ScaffoldMessenger.of(
- context,
- ).showSnackBar(const SnackBar(content: Text(LlmStrings.hfTokenSaved)));
- }
+ try {
+ await HfTokenService.setToken(token);
+ if (!mounted) return;
+ setState(() => _hasToken = true);
+ ScaffoldMessenger.of(context).showSnackBar(
+ const SnackBar(content: Text(LlmStrings.hfTokenSaved)),
+ );
+ } catch (_) {
+ if (!mounted) return;
+ // Prefer a centralized error string in LlmStrings.
+ ScaffoldMessenger.of(context).showSnackBar(
+ const SnackBar(content: Text('Could not save token. Please retry.')),
+ );
+ }
}
Future<void> _clearToken() async {
- await HfTokenService.setToken(null);
- if (mounted) {
- setState(() {
- _hasToken = false;
- _controller.clear();
- });
- ScaffoldMessenger.of(
- context,
- ).showSnackBar(const SnackBar(content: Text(LlmStrings.hfTokenCleared)));
- }
+ try {
+ await HfTokenService.setToken(null);
+ if (!mounted) return;
+ setState(() {
+ _hasToken = false;
+ _controller.clear();
+ });
+ ScaffoldMessenger.of(context).showSnackBar(
+ const SnackBar(content: Text(LlmStrings.hfTokenCleared)),
+ );
+ } catch (_) {
+ if (!mounted) return;
+ ScaffoldMessenger.of(context).showSnackBar(
+ const SnackBar(content: Text('Could not clear token. Please retry.')),
+ );
+ }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/widgets/settings/hf_token_tile.dart` around lines 28-62: wrap the
storage calls in _loadToken, _saveToken, and _clearToken (where
HfTokenService.getToken and HfTokenService.setToken are awaited) in
try/catch/finally so exceptions cannot leave the widget in a broken state; on
error catch the exception, ensure you still update UI state (e.g., set _loading
= false in _loadToken and reset/local state in _saveToken/_clearToken) inside
mounted checks, and surface a user-visible message via
ScaffoldMessenger.of(context).showSnackBar with a brief error text (include
exception.message if available) so failures are reported instead of silently
breaking the tile.
Changes to the onboarding banner of the app.