# llmpeg

LLM-powered FFmpeg command assistant that converts natural language instructions into ffmpeg commands.

- Repository: https://github.com/imortaltatsu/llmpeg
- PyPI package: https://pypi.org/project/llmpeg/

## Features

- Conversational interface that converts natural language instructions into ffmpeg commands
- GPT-compatible API support (OpenRouter, Ollama, or custom endpoints)
- Interactive file selection when requests are ambiguous
- Automatic command validation and error correction
- Support for compression, conversion, and cropping operations
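For example, a session might look like the following. This transcript is illustrative only: the prompt wording, confirmation step, and the exact ffmpeg command generated depend on the model and configuration.

```text
$ llmpeg -p "compress input.mp4 to roughly half its size"
Proposed command:
  ffmpeg -i input.mp4 -c:v libx264 -crf 28 -c:a copy output.mp4
Run it? [y/N]
```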
## Installation

### From PyPI

Using pip:

```bash
pip install llmpeg
```

Using pipx (isolated installation, recommended for CLI tools):

```bash
pipx install llmpeg
```

Using uv:

```bash
uv pip install llmpeg
```

### From source

Using pip:

```bash
pip install git+https://github.com/imortaltatsu/llmpeg.git
```

Using pipx:

```bash
pipx install git+https://github.com/imortaltatsu/llmpeg.git
```

Using uv:

```bash
uv pip install git+https://github.com/imortaltatsu/llmpeg.git
```

Using uvx (run without installing):

```bash
uvx git+https://github.com/imortaltatsu/llmpeg.git
```

### Development install

```bash
# Clone the repository
git clone https://github.com/imortaltatsu/llmpeg.git
cd llmpeg

# Install in development mode
uv sync
# or
pip install -e .
```

## Configuration

After installation, configure your API settings:
```bash
# Interactive setup
llmpeg setup init

# Or use environment variables
export LLMPEG_PROVIDER=openrouter
export LLMPEG_API_KEY=your-api-key
export LLMPEG_MODEL_NAME=gpt-oss:20b
```

## Usage

```bash
# Single command
llmpeg -p "convert sample.png to jpg"

# Interactive mode
llmpeg

# Setup commands
llmpeg setup init   # Configure API settings
llmpeg setup show   # Show current configuration
llmpeg setup test   # Test API connection
```

Configuration is stored in `~/.llmpeg/config.py` or `~/.llmpeg/config.json`.
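The stored file presumably mirrors the environment variables above. A hypothetical `~/.llmpeg/config.json` might look like this (the field names are illustrative assumptions, not confirmed from the project):

```json
{
  "provider": "openrouter",
  "api_key": "your-api-key",
  "model_name": "gpt-oss:20b"
}
```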
Supported providers:

- OpenRouter: use `gpt-oss:20b` or other models
- Ollama: local Ollama instance (defaults to `gpt-oss:20b`)
- Custom: any OpenAI-compatible API endpoint
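For a local Ollama setup, the environment-variable route would look something like this sketch. The `LLMPEG_*` names come from the configuration section above; skipping the API key for a local instance is an assumption:

```shell
# Point llmpeg at a local Ollama instance (assumes no API key is needed locally).
export LLMPEG_PROVIDER=ollama
export LLMPEG_MODEL_NAME=gpt-oss:20b   # the documented default model
```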
## Requirements

- Python 3.12+
- FFmpeg installed and available in PATH
- API access (OpenRouter API key, Ollama running locally, or custom endpoint)
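As a quick sanity check before installing, you can verify that FFmpeg is reachable on PATH. This is a generic helper sketch, not a script shipped with llmpeg:

```shell
# check_prereqs: report whether the ffmpeg binary is reachable on PATH.
# Generic helper, not part of llmpeg itself.
check_prereqs() {
    if command -v ffmpeg >/dev/null 2>&1; then
        echo "ffmpeg found: $(command -v ffmpeg)"
    else
        echo "ffmpeg not found in PATH" >&2
        return 1
    fi
}

check_prereqs || echo "Install FFmpeg with your package manager before running llmpeg."
```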