Lipsync Investigation 🔍

License: MIT | Node.js 22+ | Docker

Automated discovery and AI-powered analysis of lip-sync and facial animation models found in GitHub repositories.

🚀 Quick Start

Get up and running in 3 simple steps:

  1. Clone and configure

    git clone https://github.com/428lab/lipsync-investigation.git
    cd lipsync-investigation
    cp env.example .env
    # Edit .env with your API keys
  2. Run with Docker (recommended)

    docker compose up
  3. Get results

    # Check the output directory
    ls output/
    # View your analysis results
    cat output/lipsync_models_analysis.csv

📋 What This Tool Does

Lipsync Investigation automatically discovers, analyzes, and evaluates GitHub repositories containing lip-sync and facial animation models. It combines:

  • πŸ” Smart Discovery: Uses GitHub's search API to find relevant repositories
  • πŸ€– AI Analysis: Leverages Large Language Models (LLM) for deep repository analysis
  • ⚑ Heuristic Analysis: Fast rule-based analysis for quick insights
  • πŸ’Ύ Intelligent Caching: Reduces API costs and improves performance
  • πŸ”„ Checkpointing: Resume interrupted analyses without losing progress
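
As an illustration of the discovery step, the sketch below queries GitHub's repository search endpoint for lip-sync projects. The query string, result limit, and the GITHUB_TOKEN variable name are assumptions made for this example, not the tool's actual configuration.

// discovery-sketch.ts: a hedged example, not the project's actual implementation
const token = process.env.GITHUB_TOKEN;   // hypothetical variable name; see env.example for the real one
const query = encodeURIComponent('lipsync OR "lip sync" in:name,description,readme');

async function discoverRepositories(): Promise<void> {
  // GitHub REST search endpoint; results are paged (per_page up to 100)
  const res = await fetch(`https://api.github.com/search/repositories?q=${query}&sort=stars&per_page=30`, {
    headers: {
      Accept: 'application/vnd.github+json',
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
  });
  if (!res.ok) throw new Error(`GitHub search failed: ${res.status}`);
  const data = await res.json() as { items: { full_name: string; stargazers_count: number }[] };
  for (const repo of data.items) {
    console.log(`${repo.full_name} (${repo.stargazers_count} stars)`);
  }
}

discoverRepositories().catch(console.error);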

✨ Key Features

  • Multi-Model Analysis: Supports OpenAI GPT models and OpenRouter
  • Comprehensive Evaluation: 20+ analysis metrics including code quality, documentation, and model maturity
  • Performance Modes: Fast, Balanced, and Accurate analysis modes
  • Docker Support: Easy deployment and consistent environments
  • CSV Export: Structured data output for further analysis
  • Rate Limiting: Respects GitHub API limits with intelligent throttling (see the sketch after this list)
  • Error Recovery: Robust error handling and retry mechanisms

📊 Analysis Metrics

The tool evaluates repositories across multiple dimensions:

Core Capabilities

  • Lip-sync Model Detection: Identifies whether a repository contains a runnable lip-sync model
  • Video Input Support: Checks for video file processing capabilities
  • Docker Support: Evaluates containerization readiness

Quality Metrics

  • Code Quality Score: 0-10 rating based on structure, tests, and practices
  • Documentation Quality: Poor/Fair/Good/Excellent assessment
  • Maintenance Status: Active/Maintained/Stale/Abandoned classification

Technical Depth

  • Model Architecture: GAN, Diffusion, Transformer, NeRF, etc.
  • Training Framework: PyTorch, TensorFlow, JAX, ONNX support
  • GPU Requirements: Low/Medium/High resource needs
  • Inference Readiness: Production deployment capability

Research Value

  • Paper Association: Links to research publications
  • Demo Availability: Live demonstrations and examples
  • Commercial Viability: License and quality assessment
  • Research Novelty: Innovation and contribution scoring
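
Taken together, these dimensions might map onto a record like the one sketched below. The field names and value sets are illustrative, derived from the metrics listed above and the CSV columns shown later; they are not the project's actual type definitions.

// analysis-record-sketch.ts: illustrative shape only, not the project's actual types
interface RepositoryAnalysis {
  modelName: string;                       // e.g. "owner/repo-name"
  githubUrl: string;
  githubStars: number;
  license: string;                         // e.g. "MIT"

  // Core capabilities
  isLipsyncModel: boolean;
  videoInputSupport: boolean;
  dockerSupport: boolean;

  // Quality metrics
  codeQualityScore: number;                // 0-10
  documentationQuality: 'poor' | 'fair' | 'good' | 'excellent';
  maintenanceStatus: 'active' | 'maintained' | 'stale' | 'abandoned';

  // Technical depth
  modelArchitecture?: string;              // e.g. "GAN", "Diffusion", "Transformer", "NeRF"
  trainingFramework?: string;              // e.g. "PyTorch", "TensorFlow", "JAX", "ONNX"
  gpuRequirements?: 'low' | 'medium' | 'high';
  inferenceReady?: boolean;

  // Research value
  paperUrl?: string;
  demoAvailable?: boolean;
  confidence: number;                      // 0-1, from the LLM analysis
  reasoning: string;
}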

🛠️ Requirements

  • Node.js: Version 22.0.0 or higher
  • Docker: For containerized deployment (recommended)
  • API Keys (a minimal startup check is sketched after this list):
    • GitHub Personal Access Token (required)
    • OpenAI API Key or OpenRouter API Key (for LLM analysis)
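
The actual variable names live in env.example; as a hedged illustration, a startup check for the keys listed above could look like this (GITHUB_TOKEN, OPENAI_API_KEY, and OPENROUTER_API_KEY are assumed names):

// env-check-sketch.ts: variable names are assumptions; consult env.example for the real ones
function checkEnvironment(): void {
  if (!process.env.GITHUB_TOKEN) {
    throw new Error('GITHUB_TOKEN is required for GitHub API access');
  }
  // LLM analysis needs at least one provider key
  if (!process.env.OPENAI_API_KEY && !process.env.OPENROUTER_API_KEY) {
    console.warn('No OpenAI/OpenRouter key found; LLM analysis will not be available');
  }
}

checkEnvironment();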

📦 Installation

Docker Installation (Recommended)

  1. Clone the repository

    git clone https://github.com/428lab/lipsync-investigation.git
    cd lipsync-investigation
  2. Configure environment

    cp env.example .env
    # Edit .env with your API keys
  3. Run with Docker Compose

    docker compose up

Local Installation

  1. Install dependencies

    npm install
  2. Build the project

    npm run build
  3. Run the application

    npm start

⚙️ Configuration

The tool is configured via config.json. Key configuration areas:

  • GitHub Search: Customize search queries and repository limits
  • LLM Settings: Choose provider, model, and analysis fields
  • Performance: Select analysis mode and caching options
  • Output: Configure CSV format and included columns
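
As a rough sketch of how such a config might be typed and read, assuming illustrative key names (the real schema is documented in the configuration guide):

// config-sketch.ts: key names are assumptions, not the documented schema
import { readFileSync } from 'node:fs';

interface Config {
  github?: { searchQueries?: string[]; maxRepositories?: number };
  llm?: { provider?: 'openai' | 'openrouter'; model?: string };
  performance?: { mode?: 'fast' | 'balanced' | 'accurate'; cacheEnabled?: boolean };
  output?: { csvPath?: string; columns?: string[] };
}

const config: Config = JSON.parse(readFileSync('config.json', 'utf8'));
console.log(`Analysis mode: ${config.performance?.mode ?? 'balanced'}`);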

📖 See detailed configuration guide →

🎯 Basic Usage

Running Analysis

# Using Docker (recommended)
docker compose up

# Using npm locally
npm start

# Development mode with auto-reload
npm run dev

Understanding Output

The tool generates a CSV file with comprehensive analysis results:

model_name,published_month,license,github_url,github_stars,docker_support,video_input_support,is_lipsync_model,code_quality_score,confidence,reasoning
owner/repo-name,2025-01,MIT,https://github.com/owner/repo-name,150,yes,yes,yes,8,0.85,"Strong evidence of lip-sync implementation with PyTorch framework..."
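
For downstream processing, the CSV can be loaded with a few lines of TypeScript. This is only a sketch: it uses a naive split that assumes commas appear only inside the final quoted reasoning column, which holds for the header shown above.

// read-results-sketch.ts: minimal reader for the output above; not part of the tool
import { readFileSync } from 'node:fs';

const [header, ...rows] = readFileSync('output/lipsync_models_analysis.csv', 'utf8')
  .trim()
  .split('\n');
const columns = header.split(',');

for (const row of rows) {
  const parts = row.split(',');
  const fixed = parts.slice(0, columns.length - 1);             // every column except "reasoning"
  const reasoning = parts.slice(columns.length - 1).join(',');  // re-join the quoted free-text column
  const record = Object.fromEntries(columns.map((c, i) => [c, i < fixed.length ? fixed[i] : reasoning]));
  if (record.is_lipsync_model === 'yes' && Number(record.code_quality_score) >= 7) {
    console.log(`${record.model_name}: ${record.github_stars} stars`);
  }
}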

Monitoring Progress

  • Logs: Check output/lipsync-investigation.log for detailed progress
  • Checkpoints: Automatic progress saving in output/checkpoint.json
  • Resume: Interrupted runs automatically resume from last checkpoint

πŸ“ Output Structure

output/
├── lipsync_models_analysis.csv    # Main analysis results
├── lipsync-investigation.log      # Detailed execution logs
└── checkpoint.json               # Progress checkpoint (auto-generated)

📚 Documentation

🔧 Development

Project Structure

src/
├── services/         # Core analysis services
├── types/            # TypeScript type definitions
├── utils/            # Utility functions and helpers
└── index.ts          # Main application entry point

Available Scripts

npm run build         # Compile TypeScript to JavaScript
npm start            # Run the compiled application
npm run dev          # Run in development mode with auto-reload

🤝 Contributing

Contributions are welcome! Please see our contributing guidelines and feel free to submit pull requests or open issues for bugs and feature requests.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments


Need help? Check out our troubleshooting guide or open an issue.
