This repository is under active development. Implementation details, APIs, and documentation are subject to change.
This research investigates knowledge distillation from the LM-TAD teacher model into the HOSER student architecture for navigational trajectory generation.
Research Goal: Compress a large language model teacher into a deployment-ready student while retaining trajectory generation quality.
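The distillation objective itself is documented in the methodology guide; as a hedged, stdlib-only sketch (not this repository's actual implementation), a standard teacher-student objective blends a temperature-scaled KL term against the teacher's next-token distribution with the usual cross-entropy on the ground-truth next step. The names `temperature`, `alpha`, and `target_idx` are illustrative hyperparameters and arguments, not the project's API.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, target_idx,
                      temperature=2.0, alpha=0.5):
    """Blend of soft (KL to teacher) and hard (cross-entropy) targets.

    `temperature` and `alpha` are illustrative values, not taken
    from this repository's configuration.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 as in Hinton et al. (2015)
    kl = sum(p * math.log(p / q) for p, q in zip(p_teacher, p_student) if p > 0)
    soft = (temperature ** 2) * kl
    # Standard cross-entropy against the ground-truth next token
    hard = -math.log(softmax(student_logits)[target_idx])
    return alpha * soft + (1 - alpha) * hard
```

When student and teacher logits agree, the KL term vanishes and only the weighted cross-entropy remains, which is a quick sanity check for any real implementation.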
- 📘 Distillation Methodology - Complete technical guide
- 🏗️ Architecture Specification - Complete model architecture details
- 💾 Checkpoint Strategy - Model saving and loading guide
- 📊 Teacher Baseline - Performance metrics
- ✅ Vocabulary Mapping - Mapping validation
- 🔍 Search Method Selection - A* vs Beam Search guidance
- 📈 Evaluation Comparison - Cross-dataset analysis
- 📊 Paired Statistical Tests - Model comparison methodology
- 🚗 Abnormal OD Workflow - Complete guide for abnormal trajectory analysis
- 🔧 Tools & Programmatic Interfaces - How to use tools as modules
- 📊 Evaluation Pipeline - Standard evaluation workflow
- 🎨 Visualization Guide - Creating publication-quality figures
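The paired-tests guide above covers the full methodology; as an illustrative stdlib-only sketch (not the repository's implementation), comparing two models on per-trajectory metric scores reduces to a paired t statistic over their score differences:

```python
import math
import statistics

def paired_t_test(scores_a, scores_b):
    """Paired t statistic for per-trajectory scores from two models.

    Returns (t, df). A p-value would normally come from a t
    distribution lookup (e.g. SciPy); this sketch stays stdlib-only.
    """
    assert len(scores_a) == len(scores_b) and len(scores_a) > 1
    # Pairing by trajectory removes between-trajectory variance
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)  # sample std dev of the differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1
```

Pairing matters here because both models are evaluated on the same trajectories: an unpaired test would wrongly treat per-trajectory difficulty as noise.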
This work builds upon the following open-source implementations:
- HOSER (student architecture): caoji2001/HOSER
- LM-TAD (teacher model): jonathankabala/LMTAD
Issues: GitHub Issues
For questions about original works:
- HOSER: caoji2001/HOSER
- LM-TAD: jonathankabala/LMTAD
This research fork maintains compatibility with the original HOSER and LM-TAD licenses. See the respective repositories for details.