feat(backend): implement centralized database service layer for health routes#184
feat(backend): implement centralized database service layer for health routes#184pranjal9091 wants to merge 4 commits intoAOSSIE-Org:mainfrom
Conversation
📝 WalkthroughWalkthroughAdds OCR-driven medical report scanning: frontend image picker and ScanReportScreen; backend OCR endpoint, Llama-based extraction, and persistence endpoints; introduces a centralized db_service with broad CRUD support; registers new route blueprints; updates iOS build config, dependencies, and .gitignore. Changes
Sequence Diagram(s)sequenceDiagram
actor User
participant Frontend as "Frontend\nScanReportScreen"
participant Backend as "Backend\n/app.py"
participant OCR as "Tesseract\nOCR"
participant LLM as "Llama\nLLM"
participant DB as "SQLite\n(db_service)"
User->>Frontend: Open ScanReportScreen & select image
Frontend->>Backend: POST /api/ocr-scan (base64 image)
Backend->>OCR: preprocess image & run OCR
OCR-->>Backend: raw text
Backend->>LLM: run_llm(OCR text) → request structured JSON
LLM-->>Backend: structured JSON (weight, bp, appointment)
Backend-->>Frontend: OCR+LLM results
Frontend->>Backend: POST /api/weight and/or /api/blood-pressure
Backend->>DB: db_service.save_* (persist) + cache sync
DB-->>Backend: success
Backend-->>Frontend: confirmation
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Backend/routes/weight.py (1)
37-49: Route collision: both endpoints match the same URL pattern.Lines 37-41 (
/weight/<int:week>) and lines 44-49 (/weight/<int:id>) both register GET handlers for/weight/<int:...>. Flask will only match the first registered route, makingget_weight(by ID) unreachable.Proposed fix: differentiate the route paths
# Read by week -@weight_bp.route('/weight/<int:week>', methods=['GET']) +@weight_bp.route('/weight/week/<int:week>', methods=['GET']) def get_week_weight(week): db = open_db() weights = db.execute('SELECT * FROM weekly_weight WHERE week_number = ?', (week,)).fetchall() return jsonify([dict(row) for row in weights]), 200 # Read by ID -@weight_bp.route('/weight/<int:id>', methods=['GET']) +@weight_bp.route('/weight/id/<int:id>', methods=['GET']) def get_weight(id): weight=db_service.get_weight_by_id(id)
🤖 Fix all issues with AI agents
In @.gitignore:
- Line 34: The .gitignore line currently contains a concatenated pattern
"*.logBackend/db/chromadb/chroma.sqlite3" which won't match either target; split
this into two separate ignore entries by replacing the single concatenated token
with two lines: "*.log" and "Backend/db/chromadb/chroma.sqlite3" so that .log
files and the chroma.sqlite3 path are each ignored properly.
In `@Backend/agent/handlers/appointment.py`:
- Around line 83-105: parse_date currently doesn't handle "today" and
"tomorrow", so when parse_appointment_command extracts those tokens (via
date_patterns) the function returns the raw string and breaks the frontend;
update parse_date to detect "today" and "tomorrow" (case-insensitive) and return
ISO dates (YYYY-MM-DD) computed from datetime.now() / now + 1 day; keep existing
handling for weekdays and "next month". Also verify parse_appointment_command
still uses its date_patterns as primary extractor and that parse_date is the
robust fallback for these keywords.
In `@Backend/agent/llm.py`:
- Around line 19-21: The code directly indexes output['choices'][0]['text'],
which can raise KeyError/IndexError; update the access in the block that
computes response (the use of variable output and the expression
output['choices'][0]['text']) to defensively validate the structure: check that
output is a dict, that output.get('choices') is a non-empty list, and that the
first choice is a dict containing a 'text' key (or a 'message'->'content'
fallback) before reading it; if any check fails, use a safe fallback string or
raise a clear exception, then apply the existing "ensure trailing }" logic to
that sanitized response.
- Around line 4-9: The module currently instantiates llm = Llama(...) at import
time using model_path and lora_path, which blocks imports and fails fatally if
files are missing; change this to a lazy-initializer function (similar to
app.py's get_llm()) that returns a cached Llama instance on first call and
defers loading until needed, and update the lora_path value to match app.py's
adapter filename ("medical_adapter.gguf") so paths are consistent; ensure the
initializer handles missing files by raising a clear, catchable exception or
returning None instead of crashing during module import, and reference the Llama
class and the module-level llm symbol when making the replacement.
In `@Backend/app.py`:
- Around line 139-148: The save_weight Flask endpoint is inserting (user_id,
weight, date) which doesn't match the weekly_weight schema (week_number, weight,
note) and it bypasses the service layer; replace the direct DB logic in the
save_weight function (and remove open_db()/cursor usage) with a call to
db_service.save_weight(...) using the correct parameters (compute or accept
week_number, pass weight and optional note) and then call the AI cache update
entrypoint used elsewhere (e.g., ai_cache.refresh_weight or the service's
post-save hook) so the service layer handles persistence and cache invalidation
consistently with the rest of the app.
- Around line 150-159: The save_bp endpoint currently accepts and persists any
bp value and has no DB error handling; update the save_bp function to validate
that bp (bp_val) is present and matches the expected "systolic/diastolic"
pattern (e.g. regex like r'^\d{1,3}/\d{1,3}$'), return a 400 JSON error if
missing/invalid, and wrap the DB insert/commit in a try/except to catch and log
insertion errors and return a 500 JSON error on failure; keep using the existing
parameterized INSERT into blood_pressure_logs and ensure you reference save_bp,
bp_val, open_db(), and the INSERT statement when making these changes.
- Around line 34-35: Remove the duplicate import statements for os and Llama:
delete the repeated lines "import os" and "from llama_cpp import Llama" so only
the original single imports remain; ensure any uses of os and Llama still
reference the single retained imports and run a quick lint/flake8 to confirm no
remaining duplicate-import warnings.
- Line 7: Replace the hardcoded macOS path assigned to
pytesseract.pytesseract.tesseract_cmd by reading configuration from the
environment and falling back to a detected/system path: check an environment
variable like TESSERACT_CMD first, then try shutil.which("tesseract") to
discover the binary, and finally fall back to a sensible default (e.g.
"/usr/bin/tesseract") or raise a clear error; update the assignment where
pytesseract.pytesseract.tesseract_cmd is set so it uses this
environment/discovery logic instead of the fixed '/opt/homebrew/bin/tesseract'.
- Around line 243-244: The current __main__ block calls app.run(host='0.0.0.0',
port=5050, debug=True) which is unsafe for production; change it to read host
and debug from environment variables (e.g., APP_HOST defaulting to '127.0.0.1'
and APP_DEBUG defaulting to false) and pass those values into app.run(host=host,
port=port, debug=debug) so debug is disabled by default and binding is not
0.0.0.0 unless explicitly configured; update the __main__ guard that invokes
app.run accordingly and ensure APP_DEBUG is converted to a boolean before
passing to app.run.
In `@Backend/services/db_service.py`:
- Around line 56-65: In delete_weight_entry, fix the two typos: change the SQL
statement in db.execute from 'DELTE FROM weekly_weight WHERE id=?' to 'DELETE
FROM weekly_weight WHERE id=?' and update the agent.update_cache call to use
operation="delete" (instead of "delte"); keep the same parameters (entry_id) and
behavior otherwise so the function and AI sync work correctly.
- Around line 26-30: get_all_weight_entries currently queries the wrong table
name "weekly_weights" (plural) which is inconsistent with other code using
"weekly_weight" (singular); update the SQL in get_all_weight_entries from SELECT
* FROM weekly_weights to SELECT * FROM weekly_weight so it matches the rest of
the codebase and returns rows correctly using dict(row) in this function.
- Around line 32-34: The get_weight_by_id function has three bugs: rename the
local variable from df to db (or update the subsequent reference to use df) so
the executed query uses the correct connection returned by open_db(), call
fetchone() with parentheses to get the row (not the method), and use the
consistent table name weekly_weight (singular) to match other functions; update
the function body in get_weight_by_id accordingly so it executes
db.execute('SELECT * FROM weekly_weight WHERE id = ?', (entry_id,)).fetchone().
In `@Frontend/ios/BabyNest/PrivacyInfo.xcprivacy`:
- Around line 8-13: Remove the unused SystemBootTime privacy declaration block:
delete the NSPrivacyAccessedAPIType entry whose value is
NSPrivacyAccessedAPICategorySystemBootTime and its associated
NSPrivacyAccessedAPITypeReasons array (the reason code "35F9.1"); ensure no
other references to NSPrivacyAccessedAPICategorySystemBootTime remain in
PrivacyInfo.xcprivacy so the manifest only lists APIs actually used by the
codebase.
In `@Frontend/ios/Podfile`:
- Line 7: The Podfile's platform declaration platform :ios, '15.5' conflicts
with the Xcode project deployment target IPHONEOS_DEPLOYMENT_TARGET = 15.1 found
in project.pbxproj (both project- and target-level, Debug/Release); fix by
making them match: either lower the Podfile platform to '15.1' (update the
platform :ios line) or raise all IPHONEOS_DEPLOYMENT_TARGET entries in
project.pbxproj (and/or update the target deployment in Xcode) to '15.5' so pod
builds and app build settings are consistent.
In `@Frontend/package.json`:
- Line 47: Remove the unused tesseract.js dependency from package.json by
deleting the "tesseract.js": "^7.0.0" entry and then update the lockfile and
node_modules (run npm install or yarn) so the dependency is actually removed;
verify no imports reference tesseract.js in the codebase (e.g.,
ScanReportScreen.jsx) and run the build to confirm there are no leftover
references or bundling issues.
In `@Frontend/src/Screens/ScanReportScreen.jsx`:
- Around line 11-19: The image picker is launched twice: pickImage() already
gets imageUri and calls uploadAndScan(imageUri), but uploadAndScan(...) (the
function named uploadAndScan) ignores that parameter and re-opens the picker;
remove the redundant ImagePicker.launchImageLibrary call inside uploadAndScan
and make uploadAndScan use the passed-in uri (e.g., imageUri) to continue the
flow (set loading/state and upload/scan). Update uploadAndScan's signature and
internal logic to accept and operate on the uri parameter and delete the
secondary picker invocation so a single selection completes the scan.
- Around line 27-41: The POST requests that send weight and blood-pressure (the
fetch calls in ScanReportScreen.jsx) do not validate HTTP responses; update the
logic in the function containing those fetches (the weight and bp POST blocks)
to capture each fetch's Response, check response.ok, and handle non-ok statuses
by reading response.text/json and throwing or returning an error to surface to
the caller; on success continue as before. Ensure both the weight POST and
blood-pressure POST (the fetch for `${BASE_URL}/api/weight` and
`${BASE_URL}/api/blood-pressure`) have identical response.ok checks and proper
error handling (e.g., show alert or reject) so 4xx/5xx responses are not treated
as successful.
- Around line 5-6: The file ScanReportScreen.jsx currently defines a hardcoded
const BASE_URL which will break other developers and production; remove that
hardcoded declaration and instead import the environment value used elsewhere
(e.g., add import { BASE_URL } from '@env'; at the top of ScanReportScreen.jsx)
and keep all existing uses of BASE_URL unchanged; ensure the project .env
entries are present or documented so the env-based BASE_URL is available at
runtime.
- Around line 69-77: The OCR fetch call using BASE_URL to POST to
"/api/ocr-scan" parses res.json() without validating the HTTP status; update the
logic around the fetch (the code that assigns const res = await fetch(...) and
const data = await res.json()) to check res.ok first and handle non-2xx
responses by throwing or returning an error (include status and response
text/json), so downstream code doesn't assume a successful response; ensure you
still parse the error body when available and surface a helpful message or set
an error state instead of blindly using data.
🧹 Nitpick comments (1)
Backend/routes/weight.py (1)
3-4: Unused import.
osis imported but never used in this file.Remove unused import
from flask import Blueprint, request, jsonify from db.db import open_db -import os from services import db_service
|
|
||
| # Debugging | ||
| ocr_debug.txt | ||
| *.logBackend/db/chromadb/chroma.sqlite3 |
There was a problem hiding this comment.
Fix the concatenated ignore pattern.
Line 34 appears to merge two patterns into one, which won’t match either .log files or the sqlite path. Split into two lines.
🛠️ Proposed fix
-*.logBackend/db/chromadb/chroma.sqlite3
+*.log
+Backend/db/chromadb/chroma.sqlite3📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| *.logBackend/db/chromadb/chroma.sqlite3 | |
| *.log | |
| Backend/db/chromadb/chroma.sqlite3 |
🤖 Prompt for AI Agents
In @.gitignore at line 34, The .gitignore line currently contains a concatenated
pattern "*.logBackend/db/chromadb/chroma.sqlite3" which won't match either
target; split this into two separate ignore entries by replacing the single
concatenated token with two lines: "*.log" and
"Backend/db/chromadb/chroma.sqlite3" so that .log files and the chroma.sqlite3
path are each ignored properly.
| def parse_date(date_str): | ||
| """Parse date string to ISO format.""" | ||
| if not date_str: | ||
| return None | ||
|
|
||
| if not date_str: return None | ||
| today = datetime.now() | ||
| date_str_lower = date_str.lower() | ||
| date_str_lower = date_str.lower().strip() | ||
|
|
||
| if date_str_lower == 'today': | ||
| return today.strftime('%Y-%m-%d') | ||
| elif date_str_lower == 'tomorrow': | ||
| return (today + timedelta(days=1)).strftime('%Y-%m-%d') | ||
| elif date_str_lower == 'next week': | ||
| return (today + timedelta(days=7)).strftime('%Y-%m-%d') | ||
|
|
||
| # Try to parse as MM/DD or MM/DD/YYYY | ||
| try: | ||
| if '/' in date_str: | ||
| parts = date_str.split('/') | ||
| if len(parts) == 2: | ||
| month, day = int(parts[0]), int(parts[1]) | ||
| year = today.year | ||
| if month < today.month or (month == today.month and day < today.day): | ||
| year += 1 | ||
| return f"{year}-{month:02d}-{day:02d}" | ||
| elif len(parts) == 3: | ||
| month, day, year = int(parts[0]), int(parts[1]), int(parts[2]) | ||
| return f"{year}-{month:02d}-{day:02d}" | ||
| except: | ||
| pass | ||
| if date_str_lower == 'next month': | ||
| next_month = (today.month % 12) + 1 | ||
| year = today.year + (1 if today.month == 12 else 0) | ||
| return f"{year}-{next_month:02d}-01" | ||
|
|
||
| # Try to parse as YYYY-MM-DD | ||
| try: | ||
| datetime.strptime(date_str, '%Y-%m-%d') | ||
| return date_str | ||
| except: | ||
| pass | ||
| day_mapping = { | ||
| 'monday': 0, 'mon': 0, 'tuesday': 1, 'tue': 1, | ||
| 'wednesday': 2, 'wed': 2, 'thursday': 3, 'thu': 3, 'thurs': 3, | ||
| 'friday': 4, 'fri': 4, 'saturday': 5, 'sat': 5, 'sunday': 6, 'sun': 6, | ||
| } | ||
|
|
||
| if date_str_lower in day_mapping: | ||
| days_ahead = day_mapping[date_str_lower] - today.weekday() | ||
| if days_ahead <= 0: days_ahead += 7 | ||
| return (today + timedelta(days=days_ahead)).strftime('%Y-%m-%d') | ||
|
|
||
| return None | ||
| return date_str |
There was a problem hiding this comment.
Missing handling for "today" and "tomorrow" keywords.
The date_patterns in parse_appointment_command (lines 39-41) extract "today" and "tomorrow", but parse_date does not handle these cases. When a user says "schedule appointment for today", the function returns the literal string "today" instead of a valid YYYY-MM-DD date, which will break the frontend calendar (expects YYYY-MM-DD per CalendarScreen.jsx).
Proposed fix to add missing date handling
def parse_date(date_str):
- if not date_str: return None
+ if not date_str:
+ return None
today = datetime.now()
date_str_lower = date_str.lower().strip()
+ if date_str_lower == 'today':
+ return today.strftime('%Y-%m-%d')
+
+ if date_str_lower == 'tomorrow':
+ return (today + timedelta(days=1)).strftime('%Y-%m-%d')
+
+ if date_str_lower == 'next week':
+ return (today + timedelta(weeks=1)).strftime('%Y-%m-%d')
if date_str_lower == 'next month':
next_month = (today.month % 12) + 1
year = today.year + (1 if today.month == 12 else 0)
return f"{year}-{next_month:02d}-01"
day_mapping = {
'monday': 0, 'mon': 0, 'tuesday': 1, 'tue': 1,
'wednesday': 2, 'wed': 2, 'thursday': 3, 'thu': 3, 'thurs': 3,
'friday': 4, 'fri': 4, 'saturday': 5, 'sat': 5, 'sunday': 6, 'sun': 6,
}
if date_str_lower in day_mapping:
days_ahead = day_mapping[date_str_lower] - today.weekday()
- if days_ahead <= 0: days_ahead += 7
+ if days_ahead <= 0:
+ days_ahead += 7
return (today + timedelta(days=days_ahead)).strftime('%Y-%m-%d')
return date_strBased on learnings: "Use the fine-tuned SLM model as the primary date extraction method. Implement a robust regex-based fallback to cover cases the model may miss."
🧰 Tools
🪛 Ruff (0.14.14)
84-84: Multiple statements on one line (colon)
(E701)
102-102: Multiple statements on one line (colon)
(E701)
🤖 Prompt for AI Agents
In `@Backend/agent/handlers/appointment.py` around lines 83 - 105, parse_date
currently doesn't handle "today" and "tomorrow", so when
parse_appointment_command extracts those tokens (via date_patterns) the function
returns the raw string and breaks the frontend; update parse_date to detect
"today" and "tomorrow" (case-insensitive) and return ISO dates (YYYY-MM-DD)
computed from datetime.now() / now + 1 day; keep existing handling for weekdays
and "next month". Also verify parse_appointment_command still uses its
date_patterns as primary extractor and that parse_date is the robust fallback
for these keywords.
| llm = Llama( | ||
| model_path="./models/qwen2-0_5b-instruct-q4_k_m.gguf", | ||
| lora_path="./models/adapter_model.bin", | ||
| n_ctx=512, | ||
| n_gpu_layers=-1 | ||
| ) |
There was a problem hiding this comment.
Module-level model initialization blocks imports and uses inconsistent paths.
- Initializing the LLM at module load blocks all imports of this module until the model loads, and raises an unrecoverable exception if the file is missing.
- The adapter path (
adapter_model.bin) differs fromapp.pywhich usesmedical_adapter.gguf.
Consider lazy initialization (as done in app.py with get_llm()) and aligning the adapter filename.
Proposed lazy initialization pattern
from llama_cpp import Llama
+import os
-llm = Llama(
- model_path="./models/qwen2-0_5b-instruct-q4_k_m.gguf",
- lora_path="./models/adapter_model.bin",
- n_ctx=512,
- n_gpu_layers=-1
-)
+_llm = None
+
+def _get_llm():
+ global _llm
+ if _llm is None:
+ base_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "models")
+ _llm = Llama(
+ model_path=os.path.join(base_dir, "qwen2-0_5b-instruct-q4_k_m.gguf"),
+ lora_path=os.path.join(base_dir, "medical_adapter.gguf"), # Align with app.py
+ n_ctx=512,
+ n_gpu_layers=-1
+ )
+ return _llm
def run_llm(prompt: str) -> str:
"""Actual inference logic for medical extraction."""
+ llm = _get_llm()
output = llm(🤖 Prompt for AI Agents
In `@Backend/agent/llm.py` around lines 4 - 9, The module currently instantiates
llm = Llama(...) at import time using model_path and lora_path, which blocks
imports and fails fatally if files are missing; change this to a
lazy-initializer function (similar to app.py's get_llm()) that returns a cached
Llama instance on first call and defers loading until needed, and update the
lora_path value to match app.py's adapter filename ("medical_adapter.gguf") so
paths are consistent; ensure the initializer handles missing files by raising a
clear, catchable exception or returning None instead of crashing during module
import, and reference the Llama class and the module-level llm symbol when
making the replacement.
| response = output['choices'][0]['text'].strip() | ||
| # Ensuring valid JSON structure | ||
| return response + "}" if not response.endswith("}") else response |
There was a problem hiding this comment.
Unsafe access to LLM output structure.
Accessing output['choices'][0]['text'] without validation could raise KeyError or IndexError if the model returns an unexpected response.
Defensive access pattern
- response = output['choices'][0]['text'].strip()
+ choices = output.get('choices', [])
+ if not choices:
+ return "{}"
+ response = choices[0].get('text', '').strip()🤖 Prompt for AI Agents
In `@Backend/agent/llm.py` around lines 19 - 21, The code directly indexes
output['choices'][0]['text'], which can raise KeyError/IndexError; update the
access in the block that computes response (the use of variable output and the
expression output['choices'][0]['text']) to defensively validate the structure:
check that output is a dict, that output.get('choices') is a non-empty list, and
that the first choice is a dict containing a 'text' key (or a
'message'->'content' fallback) before reading it; if any check fails, use a safe
fallback string or raise a clear exception, then apply the existing "ensure
trailing }" logic to that sanitized response.
| import json | ||
| import re | ||
| import pytesseract | ||
| pytesseract.pytesseract.tesseract_cmd = r'/opt/homebrew/bin/tesseract' |
There was a problem hiding this comment.
Hardcoded Tesseract path is macOS-specific.
The path /opt/homebrew/bin/tesseract only works on macOS with Homebrew. This will fail on Linux (typically /usr/bin/tesseract) and Windows deployments.
Proposed environment-based configuration
import pytesseract
-pytesseract.pytesseract.tesseract_cmd = r'/opt/homebrew/bin/tesseract'
+pytesseract.pytesseract.tesseract_cmd = os.environ.get('TESSERACT_CMD', 'tesseract')🤖 Prompt for AI Agents
In `@Backend/app.py` at line 7, Replace the hardcoded macOS path assigned to
pytesseract.pytesseract.tesseract_cmd by reading configuration from the
environment and falling back to a detected/system path: check an environment
variable like TESSERACT_CMD first, then try shutil.which("tesseract") to
discover the binary, and finally fall back to a sensible default (e.g.
"/usr/bin/tesseract") or raise a clear error; update the assignment where
pytesseract.pytesseract.tesseract_cmd is set so it uses this
environment/discovery logic instead of the fixed '/opt/homebrew/bin/tesseract'.
| "react-navigation": "^4.4.4", | ||
| "react-navigation-bottom-tabs": "^1.0.2" | ||
| "react-navigation-bottom-tabs": "^1.0.2", | ||
| "tesseract.js": "^7.0.0" |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for tesseract imports in Frontend codebase
rg -i "tesseract" Frontend/ --type=js --type=jsx --type=ts --type=tsxRepository: AOSSIE-Org/BabyNest
Length of output: 90
🏁 Script executed:
#!/bin/bash
# Also check if it's imported in package.json or tsconfig
find Frontend/ -name "*.json" -o -name "*.config.*" | xargs grep -l "tesseract" 2>/dev/nullRepository: AOSSIE-Org/BabyNest
Length of output: 111
🏁 Script executed:
#!/bin/bash
# Check ScanReportScreen.jsx for OCR implementation
find Frontend/ -name "ScanReportScreen.jsx" -o -name "ScanReportScreen.ts*" -o -name "*ScanReport*" | head -5Repository: AOSSIE-Org/BabyNest
Length of output: 104
🏁 Script executed:
#!/bin/bash
# Search for tesseract in Frontend codebase (without type filters)
rg -i "tesseract" Frontend/Repository: AOSSIE-Org/BabyNest
Length of output: 699
🏁 Script executed:
#!/bin/bash
# Also check package.json and package-lock.json
grep -n "tesseract" Frontend/package.json Frontend/package-lock.jsonRepository: AOSSIE-Org/BabyNest
Length of output: 741
🏁 Script executed:
#!/bin/bash
# Read ScanReportScreen.jsx to see the OCR implementation
cat -n Frontend/src/Screens/ScanReportScreen.jsx | head -100Repository: AOSSIE-Org/BabyNest
Length of output: 3823
Remove unused tesseract.js dependency.
tesseract.js is a web-based OCR library that requires Web Workers and WebAssembly and is incompatible with React Native. It is not imported or used anywhere in the codebase—OCR processing is handled server-side via the /api/ocr-scan endpoint in ScanReportScreen.jsx. This unused dependency should be removed to reduce bundle size and avoid potential build issues.
🤖 Prompt for AI Agents
In `@Frontend/package.json` at line 47, Remove the unused tesseract.js dependency
from package.json by deleting the "tesseract.js": "^7.0.0" entry and then update
the lockfile and node_modules (run npm install or yarn) so the dependency is
actually removed; verify no imports reference tesseract.js in the codebase
(e.g., ScanReportScreen.jsx) and run the build to confirm there are no leftover
references or bundling issues.
| // Using your actual machine IP for simulator-to-backend connectivity | ||
| const BASE_URL = 'http://10.72.82.230:5050'; |
There was a problem hiding this comment.
Hardcoded IP address will break for all other developers and production.
The BASE_URL is hardcoded with a specific machine IP. Other files in this codebase use import { BASE_URL } from '@env'; to get the environment-configured URL.
🔧 Proposed fix
-// Using your actual machine IP for simulator-to-backend connectivity
-const BASE_URL = 'http://10.72.82.230:5050';
+import { BASE_URL } from '@env';🤖 Prompt for AI Agents
In `@Frontend/src/Screens/ScanReportScreen.jsx` around lines 5 - 6, The file
ScanReportScreen.jsx currently defines a hardcoded const BASE_URL which will
break other developers and production; remove that hardcoded declaration and
instead import the environment value used elsewhere (e.g., add import { BASE_URL
} from '@env'; at the top of ScanReportScreen.jsx) and keep all existing uses of
BASE_URL unchanged; ensure the project .env entries are present or documented so
the env-based BASE_URL is available at runtime.
| const pickImage = () => { | ||
| ImagePicker.launchImageLibrary({ mediaType: 'photo', quality: 0.5 }, async (response) => { | ||
| if (response.assets) { | ||
| setLoading(true); | ||
| const imageUri = response.assets[0].uri; | ||
| uploadAndScan(imageUri); | ||
| } | ||
| }); | ||
| }; |
There was a problem hiding this comment.
Double image picker launch causes user to select image twice.
pickImage() launches the image library and calls uploadAndScan(uri), but uploadAndScan() ignores the uri parameter and launches the image picker again on line 61. The user must select an image twice for a single scan operation.
Remove the redundant picker launch and use the already-selected image.
🐛 Proposed fix - refactor to single image selection
const pickImage = () => {
- ImagePicker.launchImageLibrary({ mediaType: 'photo', quality: 0.5 }, async (response) => {
- if (response.assets) {
- setLoading(true);
- const imageUri = response.assets[0].uri;
- uploadAndScan(imageUri);
- }
- });
-};
-
-const uploadAndScan = async (uri) => {
-setLoading(true);
-try {
- // Base64 conversion using react-native-image-picker capability
const options = {
mediaType: 'photo',
- quality: 0.8,
+ quality: 0.8,
includeBase64: true,
};
ImagePicker.launchImageLibrary(options, async (response) => {
if (response.didCancel || response.errorCode) {
- setLoading(false);
return;
}
-
+
+ setLoading(true);
const base64Image = response.assets[0].base64;
-
- const res = await fetch(`${BASE_URL}/api/ocr-scan`, {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
- },
- body: JSON.stringify({ image: base64Image }),
- });
-
- const data = await res.json();
- setLoading(false);
-
- if (data.status === "success") {
- const { weight, bp, appointment } = data.extracted_values;
- Alert.alert(
- "Verify Extracted Data",
- `Weight: ${weight || "N/A"} kg\nBP: ${bp || "N/A"}\nNext Appt: ${appointment || "N/A"}`,
- [
- { text: "Discard", style: "destructive" },
- { text: "Confirm & Save", onPress: () => saveToDatabase({ weight, bp, appointment }) }
- ]
- );
- } else {
- Alert.alert("Error", data.message || "Extraction failed");
+
+ try {
+ const res = await fetch(`${BASE_URL}/api/ocr-scan`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ image: base64Image }),
+ });
+
+ const data = await res.json();
+ setLoading(false);
+
+ if (data.status === "success") {
+ const { weight, bp, appointment } = data.extracted_values;
+ Alert.alert(
+ "Verify Extracted Data",
+ `Weight: ${weight || "N/A"} kg\nBP: ${bp || "N/A"}\nNext Appt: ${appointment || "N/A"}`,
+ [
+ { text: "Discard", style: "destructive" },
+ { text: "Confirm & Save", onPress: () => saveToDatabase({ weight, bp, appointment }) }
+ ]
+ );
+ } else {
+ Alert.alert("Error", data.message || "Extraction failed");
+ }
+ } catch (err) {
+ setLoading(false);
+ Alert.alert("Network Error", "Verify backend is running and reachable.");
}
});
-} catch (err) {
- setLoading(false);
- Alert.alert("Network Error", "Verify backend is running and reachable.");
-}
-};
+ };Also applies to: 51-98
🤖 Prompt for AI Agents
In `@Frontend/src/Screens/ScanReportScreen.jsx` around lines 11 - 19, The image
picker is launched twice: pickImage() already gets imageUri and calls
uploadAndScan(imageUri), but uploadAndScan(...) (the function named
uploadAndScan) ignores that parameter and re-opens the picker; remove the
redundant ImagePicker.launchImageLibrary call inside uploadAndScan and make
uploadAndScan use the passed-in uri (e.g., imageUri) to continue the flow (set
loading/state and upload/scan). Update uploadAndScan's signature and internal
logic to accept and operate on the uri parameter and delete the secondary picker
invocation so a single selection completes the scan.
| if (weight && weight !== "N/A") { | ||
| await fetch(`${BASE_URL}/api/weight`, { | ||
| method: 'POST', | ||
| headers: { 'Content-Type': 'application/json' }, | ||
| body: JSON.stringify({ weight: weight, user_id: 'default' }), | ||
| }); | ||
| } | ||
|
|
||
| if (bp && bp !== "N/A") { | ||
| await fetch(`${BASE_URL}/api/blood-pressure`, { | ||
| method: 'POST', | ||
| headers: { 'Content-Type': 'application/json' }, | ||
| body: JSON.stringify({ bp: bp, user_id: 'default' }), | ||
| }); | ||
| } |
There was a problem hiding this comment.
Missing HTTP response status validation.
The fetch calls don't check response.ok before proceeding. If the server returns 4xx/5xx errors, the code will still show "Success" alert or fail silently when parsing the response.
🛡️ Proposed fix
if (weight && weight !== "N/A") {
- await fetch(`${BASE_URL}/api/weight`, {
+ const weightRes = await fetch(`${BASE_URL}/api/weight`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ weight: weight, user_id: 'default' }),
});
+ if (!weightRes.ok) throw new Error('Failed to save weight');
}
if (bp && bp !== "N/A") {
- await fetch(`${BASE_URL}/api/blood-pressure`, {
+ const bpRes = await fetch(`${BASE_URL}/api/blood-pressure`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ bp: bp, user_id: 'default' }),
});
+ if (!bpRes.ok) throw new Error('Failed to save blood pressure');
}🤖 Prompt for AI Agents
In `@Frontend/src/Screens/ScanReportScreen.jsx` around lines 27 - 41, The POST
requests that send weight and blood-pressure (the fetch calls in
ScanReportScreen.jsx) do not validate HTTP responses; update the logic in the
function containing those fetches (the weight and bp POST blocks) to capture
each fetch's Response, check response.ok, and handle non-ok statuses by reading
response.text/json and throwing or returning an error to surface to the caller;
on success continue as before. Ensure both the weight POST and blood-pressure
POST (the fetch for `${BASE_URL}/api/weight` and
`${BASE_URL}/api/blood-pressure`) have identical response.ok checks and proper
error handling (e.g., show alert or reject) so 4xx/5xx responses are not treated
as successful.
| const res = await fetch(`${BASE_URL}/api/ocr-scan`, { | ||
| method: 'POST', | ||
| headers: { | ||
| 'Content-Type': 'application/json', | ||
| }, | ||
| body: JSON.stringify({ image: base64Image }), | ||
| }); | ||
|
|
||
| const data = await res.json(); |
There was a problem hiding this comment.
Missing HTTP response status validation for OCR endpoint.
Same issue as above—the OCR fetch doesn't validate res.ok before parsing JSON.
🛡️ Proposed fix
const res = await fetch(`${BASE_URL}/api/ocr-scan`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ image: base64Image }),
});
+ if (!res.ok) {
+ throw new Error(`Server error: ${res.status}`);
+ }
+
const data = await res.json();🤖 Prompt for AI Agents
In `@Frontend/src/Screens/ScanReportScreen.jsx` around lines 69 - 77, The OCR
fetch call using BASE_URL to POST to "/api/ocr-scan" parses res.json() without
validating the HTTP status; update the logic around the fetch (the code that
assigns const res = await fetch(...) and const data = await res.json()) to check
res.ok first and handle non-2xx responses by throwing or returning an error
(include status and response text/json), so downstream code doesn't assume a
successful response; ensure you still parse the error body when available and
surface a helpful message or set an error state instead of blindly using data.
…alth and task modules
There was a problem hiding this comment.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@Backend/routes/blood_pressure.py`:
- Around line 47-49: The update handler currently rejects numeric strings for
week_number; change the validation so when 'week_number' exists, attempt to cast
data['week_number'] to int (e.g., int(data['week_number'])) catching
ValueError/TypeError, then ensure the integer is >= 1; if casting succeeds
replace data['week_number'] with the int so downstream code sees a normalized
integer, otherwise return the same 400 jsonify error. Target the validation
block that checks 'week_number' in data in blood_pressure.py and update that
branch to perform safe casting and replacement.
In `@Backend/routes/profile.py`:
- Around line 52-57: The update_user_profile flow is storing lmp directly into
dueDate; update this so dueDate is computed from the provided lmp rather than
saved as-is. Modify db_service.update_user_profile (or, if you prefer, compute
before calling it in the route) to mirror the logic in
db_service.set_user_profile: parse data.get('lmp'), calculate the estimated due
date (e.g., add the standard pregnancy offset used in set_user_profile), and
write that computed dueDate to the DB instead of the raw lmp value while still
storing lmp in its proper field.
In `@Backend/routes/symptoms.py`:
- Around line 16-20: The route handler calling db_service.save_symptom_entry in
Backend/routes/symptoms.py should stop catching broad Exception and returning
str(e); instead catch specific DB/validation exceptions (e.g., sqlite3.Error,
ValueError) around db_service.save_symptom_entry, log the full exception
server-side (using the existing logger) and return a generic JSON error like
{"error":"Internal Server Error"} with a 500 status; update the except block in
the save_symptom_entry handler (and mirror this change in the other route
handlers that catch Exception such as in appointments.py, tasks.py, profile.py,
medicine.py, discharge.py, blood_pressure.py) so sensitive details are never
sent to clients but are preserved in server logs for debugging.
In `@Backend/routes/tasks.py`:
- Around line 24-30: Validate the incoming JSON in add_task before calling
db_service.save_task: ensure request.json exists and contains the required keys
"title", "content", "starting_week", and "ending_week" (and validate basic
types/format if needed), and if any are missing or invalid return a 400 JSON
response with a clear message listing the missing/invalid fields; only call
db_service.save_task(request.json) after validation to avoid KeyError in the
db_service layer.
In `@Backend/services/db_service.py`:
- Around line 317-323: The calculate_due_date function currently does
int(cycle_length) unsafely and treats falsy values (including 0 or empty string)
wrong; update calculate_due_date to safely parse cycle_length: if cycle_length
is None, empty, non-numeric, or <= 0, use 28 as the default cycle length;
otherwise convert to int within a try/except to catch ValueError and fall back
to 28; compute adjustment = parsed_cycle - 28 and then continue computing
due_date from lmp_str as before (use the existing lmp_str parsing and timedelta
logic in calculate_due_date).
- Around line 359-371: The update_user_profile function is writing
data.get('lmp') directly to the dueDate column instead of computing it; change
update_user_profile to compute dueDate via calculate_due_date(lmp, cycleLength)
(mirroring set_user_profile) — use data.get('lmp') and data.get('cycleLength')
(or fetch existing cycleLength from the profile if missing) to produce the due
date, then persist that computed value to the dueDate column; keep the rest of
the flow (db.execute, db.commit, get_agent(DB_PATH) and agent.update_cache(...))
unchanged.
- Around line 197-247: The appointments CRUD functions are missing AI cache sync
calls; after committing in save_appointment, update_appointment_entry, and
delete_appointment_entry, call agent.update_cache() (matching how other modules
do) to refresh the AI cache for the "appointments" entity; ensure agent is
imported at the top of Backend/services/db_service.py if not already and place
the agent.update_cache() call immediately after db.commit() in each of the three
functions (save_appointment, update_appointment_entry,
delete_appointment_entry).
🧹 Nitpick comments (5)
Backend/routes/appointments.py (1)
6-66: Consider centralizing error handling for these endpoints.The service-layer wiring and validations are solid; consolidating try/except logic (e.g., via a shared helper or error handler) will reduce repetition and make it easier to apply the safer error-response pattern noted earlier.
Backend/routes/profile.py (1)
36-37: Magic index access is fragile; use column names instead.Accessing profile data by numeric indices (
profile[7],profile[6]) is error-prone and will break silently if the schema changes. Sincedb_service.get_profile_data()returns asqlite3.Row, you can access columns by name directly.♻️ Proposed fix
return jsonify({ - "due_date": profile[7], # dueDate index - "location": profile[6] # user_location index + "due_date": profile['dueDate'], + "location": profile['user_location'] }), 200Backend/routes/tasks.py (1)
44-50:delete_taskreturns success even if task doesn't exist.Unlike
update_taskandmove_to_appointmentwhich verify the task exists before operating,delete_taskunconditionally returns 200. This is inconsistent with the API behavior elsewhere and may mask client errors.♻️ Proposed fix for consistent 404 handling
`@tasks_bp.route`('/delete_task/<int:task_id>', methods=['DELETE']) def delete_task(task_id): try: + existing = db_service.get_task_by_id(task_id) + if not existing: + return jsonify({"error": "Task not found"}), 404 + db_service.delete_task_entry(task_id) return jsonify({"status": "success", "message": "Task deleted"}), 200 except Exception as e: return jsonify({"error": "Database Error", "details": str(e)}), 500Backend/services/db_service.py (2)
313-313: Move import to top of file.The
datetimeimport on line 313 should be at the top of the file with other imports per PEP 8 conventions.♻️ Proposed fix
import os import sqlite3 +from datetime import datetime, timedelta from db.db import open_db from agent.agent import get_agentThen remove line 313.
447-459:save_taskaccesses required keys without validation.Lines 454-456 access
data['title'],data['content'],data['starting_week'], anddata['ending_week']directly, which raisesKeyErrorif any are missing. The service layer should either validate inputs or document that callers must provide these fields.♻️ Proposed fix with validation
def save_task(data): """Inserts a new task into the database.""" + required = ['title', 'content', 'starting_week', 'ending_week'] + missing = [f for f in required if f not in data] + if missing: + raise ValueError(f"Missing required fields: {', '.join(missing)}") + db = open_db() db.execute(
| # Optional Validation | ||
| if 'week_number' in data and (not isinstance(data['week_number'], int) or data['week_number'] < 1): | ||
| return jsonify({"error": "Invalid week_number"}), 400 |
There was a problem hiding this comment.
Normalize week_number validation to accept numeric strings.
The update path currently rejects "week_number": "5" even though create accepts it. Casting and validating avoids breaking clients that send numeric strings.
✅ Proposed fix
- if 'week_number' in data and (not isinstance(data['week_number'], int) or data['week_number'] < 1):
- return jsonify({"error": "Invalid week_number"}), 400
+ if 'week_number' in data:
+ try:
+ week = int(data['week_number'])
+ except (ValueError, TypeError):
+ return jsonify({"error": "Invalid week_number"}), 400
+ if week < 1:
+ return jsonify({"error": "Invalid week_number"}), 400
+ data['week_number'] = week🤖 Prompt for AI Agents
In `@Backend/routes/blood_pressure.py` around lines 47 - 49, The update handler
currently rejects numeric strings for week_number; change the validation so when
'week_number' exists, attempt to cast data['week_number'] to int (e.g.,
int(data['week_number'])) catching ValueError/TypeError, then ensure the integer
is >= 1; if casting succeeds replace data['week_number'] with the int so
downstream code sees a normalized integer, otherwise return the same 400 jsonify
error. Target the validation block that checks 'week_number' in data in
blood_pressure.py and update that branch to perform safe casting and
replacement.
| data = request.json | ||
| if not data.get('lmp') or not data.get('location'): | ||
| return jsonify({"error": "LMP and location are required"}), 400 | ||
|
|
||
| try: | ||
| db.execute('SELECT * FROM profile') | ||
| data = request.json | ||
| lmp = data.get('lmp') | ||
| cycle_length = data.get('cycle_length') | ||
| period_length = data.get('period_length') | ||
| age = data.get('age') | ||
| weight = data.get('weight') | ||
| location = data.get('location') | ||
|
|
||
| if not lmp or not location: | ||
| return jsonify({"error": "Last menstrual period and location are required"}), 400 | ||
|
|
||
| db.execute( | ||
| 'UPDATE profile SET due_date = ?, user_location = ?', | ||
| (lmp, cycle_length, period_length, age, weight, location) | ||
| ) | ||
| db.commit() | ||
|
|
||
| # Update cache after database update | ||
| db_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db") | ||
| agent = get_agent(db_path) | ||
| agent.update_cache(data_type="profile", operation="update") | ||
|
|
||
| db_service.update_user_profile(data) |
There was a problem hiding this comment.
Semantic mismatch: lmp is being stored as dueDate.
The route validates that lmp and location are required, but db_service.update_user_profile() (lines 363-366 in db_service.py) stores data.get('lmp') directly into the dueDate column. This appears incorrect—the due date should be calculated from LMP, not be the LMP value itself.
Either recalculate the due date here before calling the service, or fix the service layer to compute the due date from LMP similar to set_user_profile.
🤖 Prompt for AI Agents
In `@Backend/routes/profile.py` around lines 52 - 57, The update_user_profile flow
is storing lmp directly into dueDate; update this so dueDate is computed from
the provided lmp rather than saved as-is. Modify db_service.update_user_profile
(or, if you prefer, compute before calling it in the route) to mirror the logic
in db_service.set_user_profile: parse data.get('lmp'), calculate the estimated
due date (e.g., add the standard pregnancy offset used in set_user_profile), and
write that computed dueDate to the DB instead of the raw lmp value while still
storing lmp in its proper field.
| try: | ||
| db_service.save_symptom_entry(week, symptom, note) | ||
| return jsonify({"status": "success", "message": "Symptom added"}), 201 | ||
| except Exception as e: | ||
| return jsonify({"error": "Internal Server Error", "details": str(e)}), 500 |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and read the symptoms.py file
find . -name "symptoms.py" -type fRepository: AOSSIE-Org/BabyNest
Length of output: 128
🏁 Script executed:
# List the Backend/routes directory to understand the structure
ls -la Backend/routes/ 2>/dev/null || echo "Path not found, searching for routes directory"
fd -t f "\.py$" Backend/routes/ 2>/dev/null | head -20Repository: AOSSIE-Org/BabyNest
Length of output: 888
🏁 Script executed:
# Search for the symptom-related files
fd -t f "symptoms" . --type fRepository: AOSSIE-Org/BabyNest
Length of output: 170
🏁 Script executed:
# Read the symptoms.py file to verify lines 16-20
cat -n Backend/routes/symptoms.pyRepository: AOSSIE-Org/BabyNest
Length of output: 2384
🏁 Script executed:
# Search for all Exception catches in Backend/routes to find similar patterns
rg -n "except Exception" Backend/routes/Repository: AOSSIE-Org/BabyNest
Length of output: 1154
🏁 Script executed:
# Check what the other route handlers return with their Exception catches
rg -A 1 "except Exception" Backend/routes/ | head -60Repository: AOSSIE-Org/BabyNest
Length of output: 3008
🏁 Script executed:
# Get specific exception handling patterns in other files
rg -B 1 -A 2 "except Exception" Backend/routes/appointments.pyRepository: AOSSIE-Org/BabyNest
Length of output: 979
🏁 Script executed:
# Count total exceptions exposed across routes to quantify scope
rg "except Exception" Backend/routes/ | wc -lRepository: AOSSIE-Org/BabyNest
Length of output: 65
Avoid returning raw exception details from API responses across all route handlers.
Catching Exception and returning str(e) exposes internal database details, stack traces, and system information—creating a security and information disclosure risk. This pattern is systemic across 19 exception handlers in Backend/routes (appointments.py, tasks.py, profile.py, medicine.py, discharge.py, blood_pressure.py, symptoms.py). Catch specific exceptions (e.g., sqlite3.Error), log server-side for debugging, and return generic client-facing error messages.
🔧 Proposed fix (example)
-from flask import Blueprint, request, jsonify
+from flask import Blueprint, request, jsonify, current_app
+import sqlite3
@@
- except Exception as e:
- return jsonify({"error": "Internal Server Error", "details": str(e)}), 500
+ except sqlite3.Error:
+ current_app.logger.exception("Failed to save symptom entry")
+ return jsonify({"error": "Database Error"}), 500🧰 Tools
🪛 Ruff (0.14.14)
19-19: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@Backend/routes/symptoms.py` around lines 16 - 20, The route handler calling
db_service.save_symptom_entry in Backend/routes/symptoms.py should stop catching
broad Exception and returning str(e); instead catch specific DB/validation
exceptions (e.g., sqlite3.Error, ValueError) around
db_service.save_symptom_entry, log the full exception server-side (using the
existing logger) and return a generic JSON error like {"error":"Internal Server
Error"} with a 500 status; update the except block in the save_symptom_entry
handler (and mirror this change in the other route handlers that catch Exception
such as in appointments.py, tasks.py, profile.py, medicine.py, discharge.py,
blood_pressure.py) so sensitive details are never sent to clients but are
preserved in server logs for debugging.
| @tasks_bp.route('/add_task', methods=['POST']) | ||
| def add_task(): | ||
| db = open_db() | ||
| try: | ||
| data = request.json | ||
|
|
||
| db.execute( | ||
| 'INSERT INTO tasks (title, content, starting_week, ending_week, task_status, task_priority, isOptional, isAppointmentMade) VALUES (?, ?, ?, ?, ?, ?, ?, ?)', | ||
| (data['title'], data['content'], data['starting_week'], data['ending_week'], | ||
| data.get('task_status', 'pending'), data.get('task_priority', 'low'), int(data.get('isOptional', False)), int(data.get('isAppointmentMade', False))) | ||
| ) | ||
| db.commit() | ||
| return jsonify({"status": "success", "message": "Task added"}), 200 | ||
|
|
||
| except sqlite3.OperationalError as e: | ||
| print(f"Database Error: {e}") # Print the exact error | ||
| db_service.save_task(request.json) | ||
| return jsonify({"status": "success", "message": "Task added"}), 201 | ||
| except Exception as e: | ||
| return jsonify({"error": "Database Error", "details": str(e)}), 500 |
There was a problem hiding this comment.
Missing input validation for required task fields.
add_task passes request.json directly to db_service.save_task() without validating required fields. Looking at the service layer (lines 447-459 in db_service.py), title, content, starting_week, and ending_week are accessed directly via data['key'], which will raise KeyError if missing. This results in a generic 500 error instead of a descriptive 400.
🛡️ Proposed fix
`@tasks_bp.route`('/add_task', methods=['POST'])
def add_task():
try:
+ data = request.json
+ required = ['title', 'content', 'starting_week', 'ending_week']
+ if not all(data.get(f) for f in required):
+ return jsonify({"error": "Missing required fields: title, content, starting_week, ending_week"}), 400
+
- db_service.save_task(request.json)
+ db_service.save_task(data)
return jsonify({"status": "success", "message": "Task added"}), 201
except Exception as e:
return jsonify({"error": "Database Error", "details": str(e)}), 500🧰 Tools
🪛 Ruff (0.14.14)
29-29: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@Backend/routes/tasks.py` around lines 24 - 30, Validate the incoming JSON in
add_task before calling db_service.save_task: ensure request.json exists and
contains the required keys "title", "content", "starting_week", and
"ending_week" (and validate basic types/format if needed), and if any are
missing or invalid return a 400 JSON response with a clear message listing the
missing/invalid fields; only call db_service.save_task(request.json) after
validation to avoid KeyError in the db_service layer.
| def get_all_appointments(): | ||
| """Fetches all appointments from the database.""" | ||
| db = open_db() | ||
| rows = db.execute('SELECT * FROM appointments').fetchall() | ||
| return [dict(row) for row in rows] | ||
|
|
||
| def get_appointment_by_id(appt_id): | ||
| """Retrieves a specific appointment by its ID.""" | ||
| db = open_db() | ||
| return db.execute('SELECT * FROM appointments WHERE id = ?', (appt_id,)).fetchone() | ||
|
|
||
| def save_appointment(title, content, date, time, location): | ||
| """Inserts a new appointment with a default 'pending' status.""" | ||
| db = open_db() | ||
| db.execute( | ||
| '''INSERT INTO appointments | ||
| (title, content, appointment_date, appointment_time, appointment_location, appointment_status) | ||
| VALUES (?, ?, ?, ?, ?, ?)''', | ||
| (title, content, date, time, location, 'pending') | ||
| ) | ||
| db.commit() | ||
| return True | ||
|
|
||
| def update_appointment_entry(appt_id, data, existing): | ||
| """Updates an existing appointment using provided or existing data.""" | ||
| db = open_db() | ||
|
|
||
| # Extract values with fallbacks to maintain data integrity | ||
| title = data.get('title', existing['title']) | ||
| content = data.get('content', existing['content']) | ||
| date = data.get('appointment_date', existing['appointment_date']) | ||
| time = data.get('appointment_time', existing['appointment_time']) | ||
| loc = data.get('appointment_location', existing['appointment_location']) | ||
| status = data.get('appointment_status', existing['appointment_status']) | ||
|
|
||
| db.execute( | ||
| '''UPDATE appointments SET | ||
| title = ?, content = ?, appointment_date = ?, appointment_time = ?, | ||
| appointment_location = ?, appointment_status = ? | ||
| WHERE id = ?''', | ||
| (title, content, date, time, loc, status, appt_id) | ||
| ) | ||
| db.commit() | ||
| return True | ||
|
|
||
| def delete_appointment_entry(appt_id): | ||
| """Removes an appointment record from the database.""" | ||
| db = open_db() | ||
| db.execute('DELETE FROM appointments WHERE id = ?', (appt_id,)) | ||
| db.commit() | ||
| return True |
There was a problem hiding this comment.
Appointments CRUD functions lack AI cache synchronization.
All other entity modules (weight, blood_pressure, medicine, discharge, profile, symptoms) call agent.update_cache() after create/update/delete operations. The appointments functions (save_appointment, update_appointment_entry, delete_appointment_entry) do not, which creates inconsistency in AI cache state.
🛡️ Proposed fix for save_appointment
def save_appointment(title, content, date, time, location):
"""Inserts a new appointment with a default 'pending' status."""
db = open_db()
db.execute(
'''INSERT INTO appointments
(title, content, appointment_date, appointment_time, appointment_location, appointment_status)
VALUES (?, ?, ?, ?, ?, ?)''',
(title, content, date, time, location, 'pending')
)
db.commit()
+
+ agent = get_agent(DB_PATH)
+ agent.update_cache(data_type="appointments", operation="create")
return TrueApply similar changes to update_appointment_entry and delete_appointment_entry.
🤖 Prompt for AI Agents
In `@Backend/services/db_service.py` around lines 197 - 247, The appointments CRUD
functions are missing AI cache sync calls; after committing in save_appointment,
update_appointment_entry, and delete_appointment_entry, call
agent.update_cache() (matching how other modules do) to refresh the AI cache for
the "appointments" entity; ensure agent is imported at the top of
Backend/services/db_service.py if not already and place the agent.update_cache()
call immediately after db.commit() in each of the three functions
(save_appointment, update_appointment_entry, delete_appointment_entry).
| def calculate_due_date(lmp_str, cycle_length): | ||
| """Calculates pregnancy due date based on LMP and cycle length.""" | ||
| lmp_date = datetime.strptime(lmp_str, "%Y-%m-%d") | ||
| # Standard: LMP + 280 days for 28-day cycle. Adjust if cycle differs | ||
| adjustment = int(cycle_length) - 28 if cycle_length else 0 | ||
| due_date = lmp_date + timedelta(days=280 + adjustment) | ||
| return due_date.strftime("%Y-%m-%d") |
There was a problem hiding this comment.
calculate_due_date will crash if cycle_length is None or non-numeric.
Line 321 calls int(cycle_length) when cycle_length is truthy, but if the caller passes an empty string or non-numeric value, this raises ValueError. The ternary also doesn't handle cycle_length=0 correctly (would use 0 instead of default 28).
🛡️ Proposed fix with safer parsing
def calculate_due_date(lmp_str, cycle_length):
"""Calculates pregnancy due date based on LMP and cycle length."""
lmp_date = datetime.strptime(lmp_str, "%Y-%m-%d")
# Standard: LMP + 280 days for 28-day cycle. Adjust if cycle differs
- adjustment = int(cycle_length) - 28 if cycle_length else 0
+ try:
+ cycle = int(cycle_length) if cycle_length else 28
+ except (ValueError, TypeError):
+ cycle = 28
+ adjustment = cycle - 28
due_date = lmp_date + timedelta(days=280 + adjustment)
return due_date.strftime("%Y-%m-%d")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def calculate_due_date(lmp_str, cycle_length): | |
| """Calculates pregnancy due date based on LMP and cycle length.""" | |
| lmp_date = datetime.strptime(lmp_str, "%Y-%m-%d") | |
| # Standard: LMP + 280 days for 28-day cycle. Adjust if cycle differs | |
| adjustment = int(cycle_length) - 28 if cycle_length else 0 | |
| due_date = lmp_date + timedelta(days=280 + adjustment) | |
| return due_date.strftime("%Y-%m-%d") | |
| def calculate_due_date(lmp_str, cycle_length): | |
| """Calculates pregnancy due date based on LMP and cycle length.""" | |
| lmp_date = datetime.strptime(lmp_str, "%Y-%m-%d") | |
| # Standard: LMP + 280 days for 28-day cycle. Adjust if cycle differs | |
| try: | |
| cycle = int(cycle_length) if cycle_length else 28 | |
| except (ValueError, TypeError): | |
| cycle = 28 | |
| adjustment = cycle - 28 | |
| due_date = lmp_date + timedelta(days=280 + adjustment) | |
| return due_date.strftime("%Y-%m-%d") |
🤖 Prompt for AI Agents
In `@Backend/services/db_service.py` around lines 317 - 323, The
calculate_due_date function currently does int(cycle_length) unsafely and treats
falsy values (including 0 or empty string) wrong; update calculate_due_date to
safely parse cycle_length: if cycle_length is None, empty, non-numeric, or <= 0,
use 28 as the default cycle length; otherwise convert to int within a try/except
to catch ValueError and fall back to 28; compute adjustment = parsed_cycle - 28
and then continue computing due_date from lmp_str as before (use the existing
lmp_str parsing and timedelta logic in calculate_due_date).
| def update_user_profile(data): | ||
| """Updates the profile fields and synchronizes AI cache.""" | ||
| db = open_db() | ||
| # Note: Using the provided location and due date (LMP mapping in current route) | ||
| db.execute( | ||
| 'UPDATE profile SET dueDate = ?, user_location = ?', | ||
| (data.get('lmp'), data.get('location')) | ||
| ) | ||
| db.commit() | ||
|
|
||
| agent = get_agent(DB_PATH) | ||
| agent.update_cache(data_type="profile", operation="update") | ||
| return True |
There was a problem hiding this comment.
update_user_profile stores lmp as dueDate instead of computing it.
The function stores data.get('lmp') directly into the dueDate column (line 365), but dueDate should be calculated from LMP using calculate_due_date(), similar to set_user_profile. This will cause incorrect due dates to be stored.
🐛 Proposed fix
def update_user_profile(data):
"""Updates the profile fields and synchronizes AI cache."""
db = open_db()
- # Note: Using the provided location and due date (LMP mapping in current route)
+ # Recalculate due date from LMP
+ due_date = calculate_due_date(data.get('lmp'), data.get('cycleLength'))
db.execute(
'UPDATE profile SET dueDate = ?, user_location = ?',
- (data.get('lmp'), data.get('location'))
+ (due_date, data.get('location'))
)
db.commit()Note: This fix assumes cycleLength is passed in the data; you may need to fetch the existing profile's cycle length if not provided.
🤖 Prompt for AI Agents
In `@Backend/services/db_service.py` around lines 359 - 371, The
update_user_profile function is writing data.get('lmp') directly to the dueDate
column instead of computing it; change update_user_profile to compute dueDate
via calculate_due_date(lmp, cycleLength) (mirroring set_user_profile) — use
data.get('lmp') and data.get('cycleLength') (or fetch existing cycleLength from
the profile if missing) to produce the due date, then persist that computed
value to the dueDate column; keep the rest of the flow (db.execute, db.commit,
get_agent(DB_PATH) and agent.update_cache(...)) unchanged.
|
@pranjal9091 |
@pranjal9091
Could you explain what you change made in it.
|

📝 Overview
This PR introduces a comprehensive Service Layer Architecture across the entire backend. By decoupling business logic (Routes) from data access logic (SQL/AI Sync), the codebase is now significantly more modular, testable, and maintainable.
🎯 Key Achievements
Full Backend Refactoring: Standardized the following modules to use the new service layer:
Health Tracking: Weight, Blood Pressure, Symptoms, and Discharge Logs.
Clinical & Daily Management: Medicine, Appointments, and Tasks.
User Data: Profile management, including pregnancy due date calculation.
Logic Centralization:
Moved complex logic like calculate_due_date from routes to db_service for better reusability.
Integrated atomic operations for specialized tasks, such as moving a task to an appointment.
Code Quality & Maintenance:
Removed redundant sqlite3, get_agent, and os/sys imports from all route files.
Eliminated repetitive database connection and error handling boilerplate.
🚀 Scalability
The backend is now prepared for future database migrations (e.g., SQLite to PostgreSQL) as all SQL logic is isolated within db_service.py. This pattern is now the gold standard for adding any new features to BabyNest.
🧪 Testing Performed
Verified CRUD operations for all refactored modules via API testing.
Confirmed that AI Agent cache updates correctly trigger after every database write operation.