Production repository for LabAssist
LabAssist is an AI-powered laboratory assistant designed to help students perform chemistry experiments more accurately while reducing teacher workload. The system leverages cutting-edge computer vision techniques to detect and provide real-time feedback on common mistakes during laboratory procedures, with an initial focus on titration experiments.
In school laboratories, teachers face significant challenges:
- Overwhelming Class Sizes: Monitoring over 30 students simultaneously during experiments is demanding.
- Complexity of Procedures: Each experiment requires attention to unique steps, making it hard to track errors across multiple activities.
- Subtle Procedural Mistakes: Errors like neglecting to place a white tile under a conical flask often go unnoticed.
- Safety Compliance: Ensuring all students adhere to safety protocols while providing individual attention is challenging.
LabAssist utilises advanced AI to identify and categorise mistakes during laboratory experiments. Key features:
- Object Detection: Powered by YOLOv10m, recognises laboratory equipment and safety gear.
- Action Detection: Employs X3D_M to analyse procedural execution (e.g., swirling technique).
- Timeline View: Chronologically tracks errors.
- Summary Dashboard: Offers a performance overview.
- Error Navigation: One-click access to specific error instances in videos.
Object Detection Model:
- Built on the YOLOv10m architecture (see the inference sketch below).
- Trained on a dataset augmented to 22,500 images.
- Detects 9 key objects: beaker, burette, pipette, conical flask, volumetric flask, funnel, white tile, face, and lab goggles.
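The repository's own inference code is not reproduced here, but a minimal detection pass with the ultralytics Python API might look like the sketch below; the checkpoint filename, confidence threshold, and input frame are illustrative assumptions rather than the project's actual configuration.

```python
# Minimal sketch, assuming the ultralytics package and a fine-tuned
# YOLOv10m checkpoint (the filename below is hypothetical).
from ultralytics import YOLO

model = YOLO("labassist_yolov10m.pt")  # assumed 9-class checkpoint

# Run detection on a single extracted video frame.
results = model("frame_0001.jpg", conf=0.25)

# Report each detected apparatus/safety item with its confidence and box.
for box in results[0].boxes:
    name = results[0].names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{name}: {float(box.conf):.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```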
Action Detection Model:
- Based on PyTorchVideo’s X3D_M (see the clip-classification sketch below).
- Processes temporal data for sequential action detection.
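A hedged sketch of how an X3D_M model can be loaded from PyTorchVideo and applied to a clip is shown below; the three-class head swap and the clip dimensions are assumptions about how the swirling classifier might be set up, not the repository's actual training code.

```python
# Hypothetical sketch: load X3D_M from PyTorchVideo and classify a short
# clip as correct / incorrect / no swirling. The 3-class head swap and the
# clip preprocessing values are illustrative assumptions.
import torch
import torch.nn as nn

# Load the X3D_M backbone (Kinetics-400 pretrained) from torch.hub.
model = torch.hub.load("facebookresearch/pytorchvideo", "x3d_m", pretrained=True)

# Replace the final projection layer with a 3-class head for swirling.
model.blocks[-1].proj = nn.Linear(model.blocks[-1].proj.in_features, 3)
model.eval()

# X3D_M expects clips shaped (batch, channels, frames, height, width);
# 16 frames at 224x224 is a typical configuration for this variant.
clip = torch.randn(1, 3, 16, 224, 224)

with torch.no_grad():
    logits = model(clip)

labels = ["correct swirling", "incorrect swirling", "no swirling"]
print(labels[int(logits.argmax(dim=1))])
```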
Frontend:
- Built with React.
- Features an interactive timeline and summary dashboard.
- Optimised for user-friendly video playback and analysis.
Model Performance:
- Object Detection: Achieved >90% mAP50 across all classes, with standout accuracies of 99% for conical flasks and 95% for burettes.
- Action Detection: Averaged 95% accuracy across swirling techniques (correct, incorrect, none).
Key Improvements:
- Improved reliability by expanding object classes from 4 to 9.
- Reduced false positives and negatives, especially for visually similar apparatus.
- A boosting-style technique enhanced accuracy by chaining object detection outputs into the action detection pipeline as a preprocessing step (sketched below).
- Reduced processing time while maintaining high prediction reliability.
- Implemented multiprocessing for concurrent video loading and inference (sketched below).
- Achieved a 7.7x improvement in processing speed.
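The chaining step mentioned above is not spelled out in this README; one plausible sketch is to crop each frame to the detected conical-flask region before the clip reaches the action model, so the swirling classifier only sees the relevant area. The helper below is illustrative: the class-name string, padding, and checkpoint are assumptions.

```python
# Illustrative sketch of using object-detection output as preprocessing
# for action detection; names and thresholds are assumptions.
import numpy as np
from ultralytics import YOLO

detector = YOLO("labassist_yolov10m.pt")  # hypothetical checkpoint

def crop_to_flask(frames: list[np.ndarray], pad: int = 20) -> list[np.ndarray]:
    """Crop every frame to the conical-flask box found in the first frame."""
    result = detector(frames[0])[0]
    for box in result.boxes:
        if result.names[int(box.cls)] == "conical flask":  # assumed label
            x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
            h, w = frames[0].shape[:2]
            x1, y1 = max(x1 - pad, 0), max(y1 - pad, 0)
            x2, y2 = min(x2 + pad, w), min(y2 + pad, h)
            return [frame[y1:y2, x1:x2] for frame in frames]
    return frames  # fall back to full frames if no flask is detected

# The cropped frames would then be resized and stacked into the
# (1, 3, T, H, W) clip tensor expected by the action model above.
```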
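The multiprocessing approach is likewise only summarised above; a rough sketch, assuming OpenCV for decoding and keeping GPU inference in the main process, could look like this (file names and the frame stride are placeholders):

```python
# Rough sketch: decode videos in worker processes while inference stays in
# the main process, so the GPU is not left waiting on disk I/O.
from multiprocessing import Pool

import cv2

def load_frames(video_path, stride=5):
    """Decode every `stride`-th frame of one video in a worker process."""
    capture = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return video_path, frames

if __name__ == "__main__":
    videos = ["student_01.mp4", "student_02.mp4", "student_03.mp4"]
    with Pool(processes=4) as pool:
        # Workers decode concurrently; clips stream back as they finish.
        for path, frames in pool.imap_unordered(load_frames, videos):
            print(f"{path}: {len(frames)} frames ready for inference")
```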
Web Platform:
- Transitioned from a desktop application to a web-based platform.
- Accessible via any device, eliminating setup complexities.
- Highlights errors on a timeline and provides a checklist for performance review.
Future Work:
- Expanding detection capabilities to other experiments, such as separation techniques and salt preparation.
- Optimising mobile compatibility for seamless video uploads.
- Scaling backend to handle higher workloads for large-scale deployment.
- Install & setup docker desktop
- Run
docker compose pull
from this directory - Run
docker compose up
to start the application - Open
http://localhost:3000
to view
Note:
- This application requires an NVIDIA GPU with Compute Capability greater than 3.5 (GTX 1050 or newer).
- The Docker engine must be running before starting the application (open Docker Desktop).
- If you get an error like `open //./pipe/dockerDesktopLinuxEngine: The system cannot find the file specified.`:
  - Open a new terminal.
  - Run `cd "C:\Program Files\Docker\Docker"`.
  - Run `./DockerCli.exe -SwitchLinuxEngine`.