Automated, AI-powered investigation and analysis of lip-sync and facial animation models discovered in GitHub repositories.
Get up and running in three simple steps:

1. Clone and configure

   ```bash
   git clone https://github.com/428lab/lipsync-investigation.git
   cd lipsync-investigation
   cp env.example .env
   # Edit .env with your API keys
   ```

2. Run with Docker (recommended)

   ```bash
   docker compose up
   ```

3. Get results

   ```bash
   # Check the output directory
   ls output/
   # View your analysis results
   cat output/lipsync_models_analysis.csv
   ```
Lipsync Investigation automatically discovers, analyzes, and evaluates GitHub repositories containing lip-sync and facial animation models. It combines:
- Smart Discovery: Uses GitHub's search API to find relevant repositories
- AI Analysis: Leverages large language models (LLMs) for deep repository analysis
- Heuristic Analysis: Fast rule-based analysis for quick insights
- Intelligent Caching: Reduces API costs and improves performance
- Checkpointing: Resume interrupted analyses without losing progress
- Multi-Model Analysis: Supports OpenAI GPT models and OpenRouter
- Comprehensive Evaluation: 20+ analysis metrics including code quality, documentation, and model maturity
- Performance Modes: Fast, Balanced, and Accurate analysis modes
- Docker Support: Easy deployment and consistent environments
- CSV Export: Structured data output for further analysis
- Rate Limiting: Respects GitHub API limits with intelligent throttling
- Error Recovery: Robust error handling and retry mechanisms
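The throttling and retry behavior in the last two bullets typically follows an exponential-backoff pattern. A minimal sketch (the function and its parameters are illustrative assumptions, not the tool's actual API):

```typescript
// Illustrative exponential-backoff retry, of the kind used to respect
// GitHub API rate limits. All names here are hypothetical; the tool's
// internal implementation may differ.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, then 2x, 4x, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Wrapping each GitHub API call in a helper like this lets transient 403/429 responses recover automatically instead of aborting the whole run.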
The tool evaluates repositories across multiple dimensions:
- Lip-sync Model Detection: Identifies whether a repository contains runnable lip-sync models
- Video Input Support: Checks for video file processing capabilities
- Docker Support: Evaluates containerization readiness
- Code Quality Score: 0-10 rating based on structure, tests, and practices
- Documentation Quality: Poor/Fair/Good/Excellent assessment
- Maintenance Status: Active/Maintained/Stale/Abandoned classification
- Model Architecture: GAN, Diffusion, Transformer, NeRF, etc.
- Training Framework: PyTorch, TensorFlow, JAX, ONNX support
- GPU Requirements: Low/Medium/High resource needs
- Inference Readiness: Production deployment capability
- Paper Association: Links to research publications
- Demo Availability: Live demonstrations and examples
- Commercial Viability: License and quality assessment
- Research Novelty: Innovation and contribution scoring
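Collected together, these dimensions might map onto a record type like the following sketch (field names and union values are assumptions derived from the list above, not the tool's exported types):

```typescript
// Hypothetical shape of one repository's evaluation result,
// mirroring the dimensions listed above.
interface RepositoryEvaluation {
  isLipsyncModel: boolean;      // Lip-sync model detection
  videoInputSupport: boolean;   // Video file processing capability
  dockerSupport: boolean;       // Containerization readiness
  codeQualityScore: number;     // 0-10 rating
  documentationQuality: "Poor" | "Fair" | "Good" | "Excellent";
  maintenanceStatus: "Active" | "Maintained" | "Stale" | "Abandoned";
  modelArchitecture: string;    // e.g. "GAN", "Diffusion", "Transformer", "NeRF"
  trainingFramework: string;    // e.g. "PyTorch", "TensorFlow", "JAX", "ONNX"
  gpuRequirements: "Low" | "Medium" | "High";
  inferenceReady: boolean;      // Production deployment capability
  paperUrl?: string;            // Associated research publication, if any
  demoAvailable: boolean;
  commerciallyViable: boolean;  // License and quality assessment
  researchNoveltyScore: number; // Innovation and contribution scoring
}
```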
- Node.js: Version 22.0.0 or higher
- Docker: For containerized deployment (recommended)
- API Keys:
- GitHub Personal Access Token (required)
- OpenAI API Key or OpenRouter API Key (for LLM analysis)
1. Clone the repository

   ```bash
   git clone https://github.com/428lab/lipsync-investigation.git
   cd lipsync-investigation
   ```

2. Configure environment

   ```bash
   cp env.example .env
   # Edit .env with your API keys
   ```

3. Run with Docker Compose

   ```bash
   docker compose up
   ```
To run locally without Docker instead:

1. Install dependencies

   ```bash
   npm install
   ```

2. Build the project

   ```bash
   npm run build
   ```

3. Run the application

   ```bash
   npm start
   ```
The tool is configured via config.json. Key configuration areas:
- GitHub Search: Customize search queries and repository limits
- LLM Settings: Choose provider, model, and analysis fields
- Performance: Select analysis mode and caching options
- Output: Configure CSV format and included columns
See the detailed configuration guide for the complete reference.
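For illustration, a `config.json` covering those four areas might look roughly like this (every key and value below is an assumption; consult the configuration guide for the actual schema):

```json
{
  "github": {
    "searchQueries": ["lip-sync", "wav2lip", "talking-head"],
    "maxRepositories": 100
  },
  "llm": {
    "provider": "openai",
    "model": "gpt-4o-mini"
  },
  "performance": {
    "mode": "balanced",
    "cacheEnabled": true
  },
  "output": {
    "format": "csv",
    "columns": ["model_name", "github_stars", "code_quality_score"]
  }
}
```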
```bash
# Using Docker (recommended)
docker compose up

# Using npm locally
npm start

# Development mode with auto-reload
npm run dev
```

The tool generates a CSV file with comprehensive analysis results:

```csv
model_name,published_month,license,github_url,github_stars,docker_support,video_input_support,is_lipsync_model,code_quality_score,confidence,reasoning
owner/repo-name,2025-01,MIT,https://github.com/owner/repo-name,150,yes,yes,yes,8,0.85,"Strong evidence of lip-sync implementation with PyTorch framework..."
```
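Once generated, the CSV can be consumed by any downstream tooling. A minimal TypeScript sketch of reading a few typed fields out of it (a naive comma split; quoted fields containing commas, such as `reasoning`, would need a real CSV parser):

```typescript
// Parse a subset of the analysis CSV into typed rows.
// Field names mirror the header shown above; only a few columns are kept.
interface AnalysisRow {
  model_name: string;
  github_stars: number;
  is_lipsync_model: boolean;
  code_quality_score: number;
}

function parseAnalysisCsv(csv: string): AnalysisRow[] {
  const [header, ...lines] = csv.trim().split("\n");
  const cols = header.split(",");
  return lines.map((line) => {
    const values = line.split(",");
    const get = (name: string) => values[cols.indexOf(name)];
    return {
      model_name: get("model_name"),
      github_stars: Number(get("github_stars")),
      is_lipsync_model: get("is_lipsync_model") === "yes",
      code_quality_score: Number(get("code_quality_score")),
    };
  });
}
```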
- Logs: Check `output/lipsync-investigation.log` for detailed progress
- Checkpoints: Automatic progress saving in `output/checkpoint.json`
- Resume: Interrupted runs automatically resume from the last checkpoint
```
output/
├── lipsync_models_analysis.csv    # Main analysis results
├── lipsync-investigation.log      # Detailed execution logs
└── checkpoint.json                # Progress checkpoint (auto-generated)
```
- Getting Started - Detailed setup and first run guide
- Configuration Guide - Complete configuration reference
- Usage Guide - Running instructions and workflows
- Architecture - System overview and components
- Troubleshooting - Common issues and solutions
- API Reference - Complete field and configuration reference
- Examples - Practical examples and use cases
```
src/
├── services/    # Core analysis services
├── types/       # TypeScript type definitions
├── utils/       # Utility functions and helpers
└── index.ts     # Main application entry point
```
```bash
npm run build   # Compile TypeScript to JavaScript
npm start       # Run the compiled application
npm run dev     # Run in development mode with auto-reload
```

Contributions are welcome! Please see our contributing guidelines and feel free to submit pull requests or open issues for bugs and feature requests.
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Node.js and TypeScript
- Uses GitHub API for repository discovery
- Powered by OpenAI and OpenRouter for AI analysis
- Containerized with Docker
Need help? Check out our troubleshooting guide or open an issue.