ClipAI is a simple but powerful utility that connects your clipboard 📋 directly to a local LLM 🤖 (Ollama-based) such as Gemma 3, Phi 4, DeepSeek-V3, Qwen, Llama 3.x, etc. It is a clipboard viewer and text transformer application built in Python.
It is your daily companion for any writing-related job ✍️. Easy peasy.
The PC requirements 🖥️ are mostly determined by your preferred LLM, but consider that Gemma 3-1B-it-qat-q4_0 runs smoothly on a potato PC 🥔 (8 GB of RAM and an integrated GPU).
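Under the hood the flow is straightforward: read the clipboard, fill a prompt template with the text, and POST it to Ollama's local HTTP API. Here is a minimal sketch of that round trip (the model name and prompt wording are illustrative, not the exact code ClipAI runs):

```python
import pyperclip
import requests

OLLAMA_URL = "http://localhost:11434/api/"

def transform_clipboard(prompt_template: str, model: str = "gemma3:1b") -> str:
    """Read the clipboard, fill a prompt template, and ask a local Ollama model."""
    text = pyperclip.paste()                # current clipboard content
    prompt = prompt_template.format(text)   # e.g. 'Please summarize the following text: "{}"'
    response = requests.post(
        OLLAMA_URL + "generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]      # the model's answer

if __name__ == "__main__":
    result = transform_clipboard('Please summarize the following text: "{}"')
    pyperclip.copy(result)                  # put the transformed text back on the clipboard
```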
- Real-time Clipboard Monitoring: Automatically detect and display clipboard content changes
- Text Transformation: Transform clipboard content using various AI-powered operations
- Markdown Support: output text can be viewed with Markdown styling
- Multiple LLM Models: Support for different language models through Ollama
- Customizable Prompts: Configure different transformation types through prompts.json
- Auto-refresh: Toggle automatic clipboard monitoring
- Format Toggle: Switch between plain text and Markdown formatted views
- Copy Output: Easily copy transformed content back to clipboard
- Features
- Screenshots
- Requirements
- Usage
- Installation/Running
- UI Elements
- Configuration
- Adding New Prompts
- Building Executables
- Project Structure
- Contributing
- License
- Python 3.8+
- Ollama running (see https://ollama.com/ to install Ollama)
- Required Python packages:
- pyperclip
- requests
- pyinstaller (only needed if you want to build the project yourself; see Building Executables below)
- Copy any text to your clipboard
- Select desired transformation type
- Choose LLM model
- Click Send button or press Shift+Enter
- View transformed output
- Use right-click to toggle formatting
- Copy output using the copy button
- Clone the repository
- Create a virtual environment:
python -m venv .venv
source .venv/bin/activate   # Linux/Mac
.venv\Scripts\activate      # Windows
- Install dependencies:
pip install -r requirements.txt
- Configure Ollama and ensure it's running (a quick check is sketched after these steps)
- Run the application:
python run.py
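To confirm Ollama is reachable before launching the app, you can query its local API (the default port is 11434). A quick check along these lines:

```python
import requests

# Ollama lists the locally available models at /api/tags
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json()["models"]]
print("Ollama is up. Available models:", models)
```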
- Download the zip file from the latest Release
- Extract the files
- Configure Ollama and ensure it's running
- Run the .exe
- Follow the instructions here
Refresh Button
- Manually updates the clipboard content
- Useful when auto-refresh is disabled
Clear Button
- Clears both input and output text areas
- Resets the current state
Copy Button
- Copies the output content to clipboard
- Only active when there is content to copy
Transformation Type
- Selects the type of transformation to apply
- Options are loaded from prompts.json
- Examples might include:
- Summarization
- Translation
- Code explanation
- Text formatting
- Custom transformations
Model Selection
- Chooses the LLM model to use
- Automatically populated from available Ollama models
- Default model is set in config.json
Input Area
- Displays current clipboard content
- Updates automatically when auto-refresh is enabled
- Supports manual updates via refresh button
Output Area
- Shows transformed content
- Supports markdown formatting
- Right-click to toggle between formatted and plain text views
Status Bar
- Shows current operation status
- Displays error messages
- Indicates clipboard updates
- Shows LLM operation progress
The config.json file defines the Ollama API endpoint and the default model:
{
  "OLLAMA_URL": "http://localhost:11434/api/",
  "DEFAULT_MODEL": "aya-expanse:latest"
}
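These values tell the application where to reach Ollama and which model to preselect in the UI. As a rough sketch of how they might be read at startup (illustrative only; the actual logic lives in src/core/config.py):

```python
import json

def load_config(path: str = "config.json") -> dict:
    """Read the Ollama endpoint and default model, falling back to sensible defaults."""
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    return {
        "OLLAMA_URL": cfg.get("OLLAMA_URL", "http://localhost:11434/api/"),
        "DEFAULT_MODEL": cfg.get("DEFAULT_MODEL", "aya-expanse:latest"),
    }
```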
prompts.json contains the transformation templates for the different operations. Each template can include:
- Description
- System prompt
- User prompt template
- Example inputs/outputs
The prompts.json file stores the different prompts that can be applied to the clipboard content. Each key in the dictionary is the prompt name, and the corresponding value is the prompt template.
- Open the prompts.json file in your preferred code editor.
- Add a new key-value pair to the dictionary, where the key is the name of the prompt and the value is the prompt template.
- Run python run.py
To add a new prompt for rewriting the text in short sentences, you can modify the prompts.json file as follows:
{
"Chat Mode": "\"{}\"",
"Rephrase": "Please rephrase the following text while keeping the original meaning without any preamble: \"{}\"",
"Translate in English": "Please translate the following text into English without any preamble: \"{}\"",
"Summarize": "Please summarize the following text: \"{}\"",
"Rephrase in short sentences": "Please rephrase the following text in short sentences while keeping the original meaning without any preamble: \"{}\""
}
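Each value is a Python-style format string: the {} placeholder is replaced with the current clipboard text before the prompt is sent to the model. A quick illustration of how a template expands (sample code, not the application's own):

```python
import json

with open("prompts.json", encoding="utf-8") as f:
    prompts = json.load(f)

clipboard_text = "Cats sleep a lot. They also purr when they are happy."
prompt = prompts["Rephrase in short sentences"].format(clipboard_text)
print(prompt)
# Please rephrase the following text in short sentences while keeping the
# original meaning without any preamble: "Cats sleep a lot. They also purr when they are happy."
```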
To build standalone executables for Windows, Linux, or macOS:
- Install the required dependencies:
pip install -r requirements.txt
- Run the build script:
python build.py
The executables will be created in the dist directory, organized by platform:
- Windows: dist/windows/ClipAI.exe
- Linux: dist/linux/ClipAI
- macOS: dist/darwin/ClipAI
Each platform directory contains:
- The executable file
- Required configuration files (config.json, prompts.json)
- Images directory with all icons
- README.md
Note: To build for a specific platform, you need to run the build script on that platform. Cross-platform building is not supported.
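For reference, a build script of this kind usually just drives PyInstaller and then copies the config files and icons next to the executable. A minimal sketch of an equivalent invocation (the options are illustrative, not necessarily those used by build.py):

```python
import PyInstaller.__main__

# Bundle the app into a single, windowed executable named ClipAI.
# config.json, prompts.json, and the images/ directory are shipped
# alongside it in the per-platform dist/ layout described above.
PyInstaller.__main__.run([
    "run.py",
    "--name", "ClipAI",
    "--onefile",
    "--windowed",
])
```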
run.py
build.py
config.json
prompts.json
src/
├── __init__.py
├── main.py
├── core/
│   ├── __init__.py
│   ├── config.py
│   ├── error_handler.py
│   ├── llm_client.py
│   └── markdown_parser.py
└── ui/
    ├── __init__.py
    ├── clipboard_viewer.py
    └── components.py
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.