llmpeg

LLM-powered FFmpeg command assistant that converts natural language instructions into ffmpeg commands.

Repository: https://github.com/imortaltatsu/llmpeg
PyPI Package: https://pypi.org/project/llmpeg/

Features

  • Conversational interface: describe the operation you want in plain English and get a ready-to-run ffmpeg command
  • OpenAI-compatible API support (OpenRouter, Ollama, or custom endpoints)
  • Interactive file selection when requests are ambiguous
  • Automatic command validation and error correction
  • Support for compression, conversion, and cropping operations
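The validation-and-correction feature can be sketched roughly as a run/retry loop: execute the generated command, and if it fails, feed the error back to the model for a corrected attempt. This is a minimal illustration, not llmpeg's actual implementation — `run_with_validation` and the `fix_fn` callback are hypothetical names, and the demo stubs out the LLM with a simple string fix:

```python
import subprocess

def run_with_validation(cmd, max_retries=2, fix_fn=None):
    """Run a shell command; on failure, ask fix_fn (standing in for an
    LLM call) to produce a corrected command and retry."""
    for attempt in range(max_retries + 1):
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode == 0:
            return cmd, result.stdout
        if fix_fn is None or attempt == max_retries:
            raise RuntimeError(f"command failed: {result.stderr.strip()}")
        # Hand the failing command and its stderr to the "model" for a fix
        cmd = fix_fn(cmd, result.stderr)

# Demo: a stub "LLM" that repairs a typo'd command
fixed, out = run_with_validation(
    "echoo hi",
    fix_fn=lambda cmd, err: cmd.replace("echoo", "echo"),
)
```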

Installation

From PyPI (Recommended)

Using pip:

pip install llmpeg

Using pipx (isolated installation, recommended for CLI tools):

pipx install llmpeg

Using uv:

uv pip install llmpeg

From Git Repository

Using pip:

pip install git+https://github.com/imortaltatsu/llmpeg.git

Using pipx:

pipx install git+https://github.com/imortaltatsu/llmpeg.git

Using uv:

uv pip install git+https://github.com/imortaltatsu/llmpeg.git

Using uvx (run without installing):

uvx git+https://github.com/imortaltatsu/llmpeg.git

Local Development

# Clone the repository
git clone https://github.com/imortaltatsu/llmpeg.git
cd llmpeg

# Install in development mode
uv sync
# or
pip install -e .

Setup

After installation, configure your API settings:

# Interactive setup
llmpeg setup init

# Or use environment variables
export LLMPEG_PROVIDER=openrouter
export LLMPEG_API_KEY=your-api-key
export LLMPEG_MODEL_NAME=gpt-oss:20b

Usage

# Single command
llmpeg -p "convert sample.png to jpg"

# Interactive mode
llmpeg

# Setup commands
llmpeg setup init    # Configure API settings
llmpeg setup show    # Show current configuration
llmpeg setup test    # Test API connection

Configuration

Configuration is stored in ~/.llmpeg/config.py or ~/.llmpeg/config.json.

Supported providers:

  • OpenRouter: Use gpt-oss:20b or other models
  • Ollama: Local Ollama instance (defaults to gpt-oss:20b)
  • Custom: Any OpenAI-compatible API endpoint
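For reference, a config.json for the OpenRouter provider might look like the following. The field names are assumptions inferred from the environment variables in the Setup section, not a documented schema — run llmpeg setup show to see the actual format:

```json
{
  "provider": "openrouter",
  "api_key": "your-api-key",
  "model_name": "gpt-oss:20b"
}
```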

Requirements

  • Python 3.12+
  • FFmpeg installed and available in PATH
  • API access (OpenRouter API key, Ollama running locally, or custom endpoint)
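A quick way to check the first two prerequisites before installing (a standalone snippet, not part of llmpeg):

```python
import shutil
import sys

# llmpeg requires Python 3.12 or newer
python_ok = sys.version_info >= (3, 12)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}:",
      "ok" if python_ok else "needs 3.12+")

# FFmpeg must be discoverable on PATH
ffmpeg_path = shutil.which("ffmpeg")
print("ffmpeg:", ffmpeg_path if ffmpeg_path else "not found in PATH")
```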
