33 changes: 33 additions & 0 deletions .gitignore
@@ -1 +1,34 @@
 .qodo
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+venv/
+env/
+.env
+
+# Models (Extremely Important - Do not push 500MB+ files)
+**/models/*.gguf
+**/models/*.safetensors
+**/models/*.bin
+**/models/config.json
+**/models/adapter_config.json
+
+# Node / React Native
+node_modules/
+.expo/
+dist/
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+
+# Mac / OS specific
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+
+# Debugging
+ocr_debug.txt
+*.logBackend/db/chromadb/chroma.sqlite3
⚠️ Potential issue | 🟡 Minor

Fix the concatenated ignore pattern.

Line 34 appears to merge two patterns into one, which won’t match either .log files or the sqlite path. Split into two lines.

🛠️ Proposed fix
-*.logBackend/db/chromadb/chroma.sqlite3
+*.log
+Backend/db/chromadb/chroma.sqlite3
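
Once split, running `git check-ignore -v Backend/db/chromadb/chroma.sqlite3` (and likewise for a sample .log file) should attribute each path to its new rule, confirming the fix.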
🤖 Prompt for AI Agents
In @.gitignore at line 34, the .gitignore line currently contains a concatenated
pattern "*.logBackend/db/chromadb/chroma.sqlite3" which won't match either
target; split this into two separate ignore entries by replacing the single
concatenated token with two lines: "*.log" and
"Backend/db/chromadb/chroma.sqlite3" so that .log files and the chroma.sqlite3
path are each ignored properly.

50 changes: 17 additions & 33 deletions Backend/agent/handlers/appointment.py
@@ -81,44 +81,28 @@ def parse_appointment_command(query: str):
 }
 
 def parse_date(date_str):
-    """Parse date string to ISO format."""
-    if not date_str:
-        return None
+    if not date_str: return None
     today = datetime.now()
-    date_str_lower = date_str.lower()
+    date_str_lower = date_str.lower().strip()
 
-    if date_str_lower == 'today':
-        return today.strftime('%Y-%m-%d')
-    elif date_str_lower == 'tomorrow':
-        return (today + timedelta(days=1)).strftime('%Y-%m-%d')
-    elif date_str_lower == 'next week':
-        return (today + timedelta(days=7)).strftime('%Y-%m-%d')
-
-    # Try to parse as MM/DD or MM/DD/YYYY
-    try:
-        if '/' in date_str:
-            parts = date_str.split('/')
-            if len(parts) == 2:
-                month, day = int(parts[0]), int(parts[1])
-                year = today.year
-                if month < today.month or (month == today.month and day < today.day):
-                    year += 1
-                return f"{year}-{month:02d}-{day:02d}"
-            elif len(parts) == 3:
-                month, day, year = int(parts[0]), int(parts[1]), int(parts[2])
-                return f"{year}-{month:02d}-{day:02d}"
-    except:
-        pass
+    if date_str_lower == 'next month':
+        next_month = (today.month % 12) + 1
+        year = today.year + (1 if today.month == 12 else 0)
+        return f"{year}-{next_month:02d}-01"
 
-    # Try to parse as YYYY-MM-DD
-    try:
-        datetime.strptime(date_str, '%Y-%m-%d')
-        return date_str
-    except:
-        pass
+    day_mapping = {
+        'monday': 0, 'mon': 0, 'tuesday': 1, 'tue': 1,
+        'wednesday': 2, 'wed': 2, 'thursday': 3, 'thu': 3, 'thurs': 3,
+        'friday': 4, 'fri': 4, 'saturday': 5, 'sat': 5, 'sunday': 6, 'sun': 6,
+    }
+
+    if date_str_lower in day_mapping:
+        days_ahead = day_mapping[date_str_lower] - today.weekday()
+        if days_ahead <= 0: days_ahead += 7
+        return (today + timedelta(days=days_ahead)).strftime('%Y-%m-%d')
 
-    return None
+    return date_str
Comment on lines 83 to +105
⚠️ Potential issue | 🟠 Major

Missing handling for "today" and "tomorrow" keywords.

The date_patterns in parse_appointment_command (lines 39-41) extract "today" and "tomorrow", but parse_date does not handle these cases. When a user says "schedule appointment for today", the function returns the literal string "today" instead of a valid YYYY-MM-DD date, which will break the frontend calendar (CalendarScreen.jsx expects YYYY-MM-DD).

Proposed fix to add missing date handling
 def parse_date(date_str):
-    if not date_str: return None
+    if not date_str:
+        return None
     today = datetime.now()
     date_str_lower = date_str.lower().strip()
     
+    if date_str_lower == 'today':
+        return today.strftime('%Y-%m-%d')
+    
+    if date_str_lower == 'tomorrow':
+        return (today + timedelta(days=1)).strftime('%Y-%m-%d')
+    
+    if date_str_lower == 'next week':
+        return (today + timedelta(weeks=1)).strftime('%Y-%m-%d')
     
     if date_str_lower == 'next month':
         next_month = (today.month % 12) + 1
         year = today.year + (1 if today.month == 12 else 0)
         return f"{year}-{next_month:02d}-01"
     
     day_mapping = {
         'monday': 0, 'mon': 0, 'tuesday': 1, 'tue': 1,
         'wednesday': 2, 'wed': 2, 'thursday': 3, 'thu': 3, 'thurs': 3,
         'friday': 4, 'fri': 4, 'saturday': 5, 'sat': 5, 'sunday': 6, 'sun': 6,
     }
     
     if date_str_lower in day_mapping:
         days_ahead = day_mapping[date_str_lower] - today.weekday()
-        if days_ahead <= 0: days_ahead += 7
+        if days_ahead <= 0:
+            days_ahead += 7
         return (today + timedelta(days=days_ahead)).strftime('%Y-%m-%d')
     
     return date_str
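
A quick sanity check of the fixed behavior (hypothetical snippet; it assumes the diff above is applied and uses an illustrative import path):

    from datetime import datetime, timedelta
    from Backend.agent.handlers.appointment import parse_date  # hypothetical path

    today = datetime.now()
    assert parse_date('today') == today.strftime('%Y-%m-%d')
    assert parse_date(' Tomorrow ') == (today + timedelta(days=1)).strftime('%Y-%m-%d')
    assert parse_date('2026-03-14') == '2026-03-14'  # unrecognized strings pass through
    assert parse_date(None) is None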

Based on learnings: "Use the fine-tuned SLM model as the primary date extraction method. Implement a robust regex-based fallback to cover cases the model may miss."
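
The regex-based fallback mentioned there is not part of this diff; a minimal sketch of what it could look like (the function and pattern names are hypothetical):

    import re
    from datetime import datetime

    _ISO_DATE = re.compile(r'\b(\d{4})-(\d{2})-(\d{2})\b')
    _US_DATE = re.compile(r'\b(\d{1,2})/(\d{1,2})(?:/(\d{4}))?\b')

    def regex_date_fallback(text: str):
        """Return the first date found in text as YYYY-MM-DD, else None."""
        m = _ISO_DATE.search(text)
        if m:
            return m.group(0)
        m = _US_DATE.search(text)
        if m:
            month, day = int(m.group(1)), int(m.group(2))
            # assume the current year when none is given
            year = int(m.group(3)) if m.group(3) else datetime.now().year
            return f"{year}-{month:02d}-{day:02d}"
        return None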

🧰 Tools
🪛 Ruff (0.14.14)

84-84: Multiple statements on one line (colon) (E701)
102-102: Multiple statements on one line (colon) (E701)

🤖 Prompt for AI Agents
In `@Backend/agent/handlers/appointment.py` around lines 83 - 105, parse_date
currently doesn't handle "today" and "tomorrow", so when
parse_appointment_command extracts those tokens (via date_patterns) the function
returns the raw string and breaks the frontend; update parse_date to detect
"today" and "tomorrow" (case-insensitive) and return ISO dates (YYYY-MM-DD)
computed from datetime.now() / now + 1 day; keep existing handling for weekdays
and "next month". Also verify parse_appointment_command still uses its
date_patterns as primary extractor and that parse_date is the robust fallback
for these keywords.


 def parse_time(time_str):
     """Parse time string to HH:MM format."""
75 changes: 25 additions & 50 deletions Backend/agent/llm.py
@@ -1,55 +1,30 @@
 from llama_cpp import Llama
+
+
+llm = Llama(
+    model_path="./models/qwen2-0_5b-instruct-q4_k_m.gguf",
+    lora_path="./models/adapter_model.bin",
+    n_ctx=512,
+    n_gpu_layers=-1
+)
Comment on lines +4 to +9
⚠️ Potential issue | 🟠 Major

Module-level model initialization blocks imports and uses inconsistent paths.

1. Initializing the LLM at module load blocks all imports of this module until the model loads, and it raises an unrecoverable exception if the model file is missing.
2. The adapter path (adapter_model.bin) differs from app.py, which uses medical_adapter.gguf.

Consider lazy initialization (as done in app.py with get_llm()) and aligning the adapter filename.

Proposed lazy initialization pattern
 from llama_cpp import Llama
+import os

-llm = Llama(
-    model_path="./models/qwen2-0_5b-instruct-q4_k_m.gguf", 
-    lora_path="./models/adapter_model.bin",               
-    n_ctx=512,                                            
-    n_gpu_layers=-1                                       
-)
+_llm = None
+
+def _get_llm():
+    global _llm
+    if _llm is None:
+        base_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "models")
+        _llm = Llama(
+            model_path=os.path.join(base_dir, "qwen2-0_5b-instruct-q4_k_m.gguf"),
+            lora_path=os.path.join(base_dir, "medical_adapter.gguf"),  # Align with app.py
+            n_ctx=512,
+            n_gpu_layers=-1
+        )
+    return _llm

 def run_llm(prompt: str) -> str:
     """Actual inference logic for medical extraction."""
+    llm = _get_llm()
     output = llm(
🤖 Prompt for AI Agents
In `@Backend/agent/llm.py` around lines 4 - 9, the module currently instantiates
llm = Llama(...) at import time using model_path and lora_path, which blocks
imports and fails fatally if files are missing; change this to a
lazy-initializer function (similar to app.py's get_llm()) that returns a cached
Llama instance on first call and defers loading until needed, and update the
lora_path value to match app.py's adapter filename ("medical_adapter.gguf") so
paths are consistent; ensure the initializer handles missing files by raising a
clear, catchable exception or returning None instead of crashing during module
import, and reference the Llama class and the module-level llm symbol when
making the replacement.


 def run_llm(prompt: str) -> str:
-    """
-    Run LLM inference.
-
-    For the offline BabyNest app:
-    - This will be called from the frontend using Llama.rn
-    - The frontend will handle the actual LLM inference
-    - This function prepares the prompt for frontend processing
-
-    Args:
-        prompt: The formatted prompt with user context and query
-
-    Returns:
-        str: LLM response (will be replaced by frontend Llama.rn call)
-    """
-    # TODO: Replace with frontend Llama.rn integration
-    # For now, return a structured response based on the prompt content
-
-    if "weight" in prompt.lower():
-        return """Based on your weight tracking data, you're showing a healthy pattern.
-        Your weight gain is within normal ranges for pregnancy. Continue monitoring weekly
-        and consult your healthcare provider if you notice any sudden changes."""
-
-    elif "appointment" in prompt.lower():
-        return """I can help you manage your appointments. Based on your current week,
-        you should focus on regular prenatal checkups. Would you like me to suggest
-        optimal scheduling times or help reschedule any missed appointments?"""
-
-    elif "symptoms" in prompt.lower():
-        return """I see you're tracking various symptoms. This is normal during pregnancy.
-        Continue monitoring and report any concerning symptoms to your healthcare provider.
-        Your tracking data helps identify patterns that may need attention."""
-
-    else:
-        return """I'm here to support your pregnancy journey! Based on your current week
-        and tracking data, you're doing well. Remember to stay hydrated, get adequate rest,
-        and maintain regular prenatal care. Is there anything specific you'd like to know
-        about your current pregnancy week?"""
+    """Actual inference logic for medical extraction."""
+    output = llm(
+        prompt,
+        max_tokens=256,
+        stop=["}"],
+        temperature=0
+    )
+    response = output['choices'][0]['text'].strip()
+    # Ensuring valid JSON structure
+    return response + "}" if not response.endswith("}") else response
Comment on lines +19 to +21
⚠️ Potential issue | 🟡 Minor

Unsafe access to LLM output structure.

Accessing output['choices'][0]['text'] without validation could raise KeyError or IndexError if the model returns an unexpected response.

Defensive access pattern
-    response = output['choices'][0]['text'].strip()
+    choices = output.get('choices', [])
+    if not choices:
+        return "{}"
+    response = choices[0].get('text', '').strip()
🤖 Prompt for AI Agents
In `@Backend/agent/llm.py` around lines 19 - 21, the code directly indexes
output['choices'][0]['text'], which can raise KeyError/IndexError; update the
access in the block that computes response (the use of variable output and the
expression output['choices'][0]['text']) to defensively validate the structure:
check that output is a dict, that output.get('choices') is a non-empty list, and
that the first choice is a dict containing a 'text' key (or a
'message'->'content' fallback) before reading it; if any check fails, use a safe
fallback string or raise a clear exception, then apply the existing "ensure
trailing }" logic to that sanitized response.
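
Putting this together with the lazy-initialization suggestion above, the hardened function could look like the following sketch (it assumes the hypothetical _get_llm() helper from the earlier comment):

    def run_llm(prompt: str) -> str:
        """Inference for medical extraction, with defensive output handling."""
        llm = _get_llm()  # lazy, cached loader (see module-init comment above)
        output = llm(
            prompt,
            max_tokens=256,
            stop=["}"],
            temperature=0
        )
        # llama-cpp-python normally returns {'choices': [{'text': ...}]};
        # guard against unexpected shapes instead of risking KeyError/IndexError.
        choices = output.get('choices', []) if isinstance(output, dict) else []
        if not choices:
            return "{}"
        response = choices[0].get('text', '').strip()
        # stop=["}"] trims the closing brace, so restore it for JSON-shaped output.
        return response if response.endswith("}") else response + "}"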


 def prepare_prompt_for_frontend(prompt: str) -> dict:
-    """
-    Prepare prompt for frontend Llama.rn processing.
-
-    Args:
-        prompt: The formatted prompt
-
-    Returns:
-        dict: Structured data for frontend LLM processing
-    """
+    """Prepare prompt for future frontend Llama.rn processing."""
     return {
         "prompt": prompt,
-        "max_tokens": 500,
-        "temperature": 0.7,
-        "system_message": "You are BabyNest, an empathetic pregnancy companion providing personalized, evidence-based guidance."
-    }
+        "max_tokens": 150,
+        "temperature": 0.1,
+        "system_message": "You are BabyNest, an empathetic pregnancy companion. Extract medical data into JSON."
+    }