
Commit 691d088

README: Add comparison table, badges, and improved structure for launch

1 parent 9b7440c · 1 file changed: README.md · 99 additions & 18 deletions
<div align="center">

# 🔓 Locally Uncensored

**The only local AI app that does Chat + Images + Video — all in one beautiful UI.**

No cloud. No censorship. No data collection. Just you and your AI.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![GitHub stars](https://img.shields.io/github/stars/PurpleDoubleD/locally-uncensored?style=social)](https://github.com/PurpleDoubleD/locally-uncensored/stargazers)
[![GitHub last commit](https://img.shields.io/github/last-commit/PurpleDoubleD/locally-uncensored)](https://github.com/PurpleDoubleD/locally-uncensored/commits)

<!-- Add screenshot/GIF here -->
<!-- ![Locally Uncensored Screenshot](docs/screenshot.png) -->

[Getting Started](#-quick-start) · [Features](#-features) · [Why This App?](#-why-locally-uncensored) · [Contributing](#contributing)

</div>

---
## ❓ Why Locally Uncensored?

Tired of switching between Ollama for chat, ComfyUI for images, and another tool for video? Frustrated with bloated UIs that need Docker and a PhD to set up?

**Locally Uncensored** is the all-in-one solution. One app. One setup. Everything local.

### How it compares

| Feature | Locally Uncensored | Open WebUI | LM Studio | SillyTavern |
|---------|:-:|:-:|:-:|:-:|
| AI Chat | ✅ | ✅ | ✅ | ✅ |
| Image Generation | ✅ | ✅ | ❌ | ❌ |
| Video Generation | ✅ | ❌ | ❌ | ❌ |
| Uncensored by Default | ✅ | ❌ | ❌ | ⚠️ |
| One-Click Setup | ✅ | ❌ (Docker) | ✅ | ❌ (Node.js) |
| 25+ Built-in Personas | ✅ | ❌ | ❌ | ⚠️ (manual) |
| Modern UI | ✅ | ✅ | ✅ | ❌ |
| Open Source | ✅ | ✅ | ❌ | ✅ |
| No Docker Required | ✅ | ❌ | ✅ | ✅ |
| 100% Offline | ✅ | ✅ | ✅ | ✅ |

---

## ✨ Features

- **Uncensored AI Chat** — Run abliterated models locally with zero restrictions
- **Image Generation** — Text-to-image via ComfyUI with full parameter control
- **Video Generation** — Text-to-video with Wan 2.1/2.2 and AnimateDiff support
- **25+ Personas** — From Helpful Assistant to Roast Master, ready out of the box
- **Model Manager** — Browse, install, and switch models with one click
- **Discover Models** — Find and install uncensored models from the Ollama registry
- **Thinking Display** — See the AI's reasoning in collapsible blocks
- **Dark/Light Mode** — Beautiful glassmorphism UI that actually looks good
- **100% Local** — Everything runs on your machine, nothing touches the internet
- **Conversation History** — All chats saved locally in your browser
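Browser-local chat history usually boils down to serializing conversations into `localStorage`. The sketch below shows the idea; the `Conversation` shape and the `"lu-conversations"` key are illustrative assumptions, not this app's actual schema:

```typescript
// Minimal sketch of browser-side chat persistence. The Conversation type
// and the "lu-conversations" key are hypothetical, not the app's real schema.
interface Conversation {
  id: string;
  title: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

// A tiny key-value interface so the logic also runs outside a browser
// (in the browser, pass window.localStorage).
interface KV {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const KEY = "lu-conversations";

function loadConversations(store: KV): Conversation[] {
  const raw = store.getItem(KEY);
  return raw ? (JSON.parse(raw) as Conversation[]) : [];
}

function saveConversation(store: KV, convo: Conversation): void {
  // Replace any existing conversation with the same id, else append.
  const all = loadConversations(store).filter((c) => c.id !== convo.id);
  all.push(convo);
  store.setItem(KEY, JSON.stringify(all));
}

// In-memory stand-in for localStorage, for demonstration.
const mem = new Map<string, string>();
const store: KV = {
  getItem: (k) => mem.get(k) ?? null,
  setItem: (k, v) => void mem.set(k, v),
};

saveConversation(store, {
  id: "1",
  title: "First chat",
  messages: [{ role: "user", content: "hello" }],
});
console.log(loadConversations(store).length); // 1
```

Because everything round-trips through JSON in the browser's own storage, nothing ever leaves the machine.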

## Tech Stack

- **AI Backend**: Ollama (text), ComfyUI (images/video)
- **Build**: Vite 8

---

## 🚀 Quick Start

### One-Command Setup (Windows)

Open **http://localhost:5173** — the app recommends models on first launch.

### One-Click Start (Windows)

```batch
start.bat
```

Launches Ollama + ComfyUI + the app in one go.

---

## 🧠 Model Auto-Detection

The app automatically detects all installed models across all backends — no manual configuration needed:

Just install models in the standard locations and the app picks them up.
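For the Ollama side, auto-detection typically means querying Ollama's documented REST endpoint for installed models, `GET http://localhost:11434/api/tags`. The sketch below is a guess at how such detection could work, not this app's actual code:

```typescript
// Sketch of Ollama model auto-detection via its REST API (GET /api/tags).
// This is an illustrative guess at the approach, not the app's actual code.
interface OllamaTagsResponse {
  models: { name: string; size: number }[];
}

// Pure parser, testable without a running Ollama server.
function parseModelNames(payload: OllamaTagsResponse): string[] {
  return payload.models.map((m) => m.name);
}

async function detectOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return []; // Ollama not running: report no models
  return parseModelNames((await res.json()) as OllamaTagsResponse);
}

// Example with a canned response (roughly what /api/tags returns):
const sample: OllamaTagsResponse = {
  models: [
    { name: "llama3.1:8b", size: 4_900_000_000 },
    { name: "qwen3:8b", size: 5_200_000_000 },
  ],
};
console.log(parseModelNames(sample)); // logs the two model names
```

ComfyUI detection works analogously by scanning its standard `models/checkpoints` directory, which is why installing into the default locations is all that's required.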

## 🎭 Recommended Models

### Text (Ollama)

| Model | Size | VRAM | Best For |
|-------|------|------|----------|
| Llama 3.1 8B Abliterated | 5.7 GB | 6 GB | Fast all-rounder |
| Qwen3 8B Abliterated | 5.2 GB | 6 GB | Coding |
| Mistral Nemo 12B Abliterated | 6.8 GB | 8 GB | Multilingual |
| DeepSeek R1 8B Abliterated | 5 GB | 6 GB | Reasoning |
| Qwen3 14B Abliterated | 9 GB | 12 GB | High intelligence |

### Image (ComfyUI)

| Model | VRAM | Notes |
|-------|------|-------|
| Juggernaut XL V9 | 8 GB | Best photorealistic |
| FLUX.1 Schnell | 10-12 GB | State-of-the-art |
| Pony Diffusion V6 XL | 8 GB | Anime/stylized |

### Video (ComfyUI)

| Model | VRAM | Output | Notes |
|-------|------|--------|-------|
| Wan 2.1 T2V 1.3B | 8-10 GB | 480p WEBP | Built-in nodes, no extras needed |
| Wan 2.2 T2V 14B (FP8) | 10-12 GB | 480-720p | Higher quality, quantized |
| AnimateDiff v3 + SD1.5 | 6-8 GB | MP4 | Requires AnimateDiff custom nodes |

---

## ⚙️ Configuration

### Environment Variables

`COMFYUI_PATH=C:\path\to\your\ComfyUI`

- **Max Tokens** — Limit response length (0 = unlimited)
- **Theme** — Dark or Light mode
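A launcher or dev-server script would typically read `COMFYUI_PATH` from `process.env`; a minimal sketch, assuming that pattern (the fallback path below is invented for illustration):

```typescript
// Sketch: reading the COMFYUI_PATH variable named in this README's
// configuration section. The fallback value is a hypothetical example.
const comfyuiPath: string = process.env.COMFYUI_PATH ?? "C:\\ComfyUI";
console.log(`Using ComfyUI at: ${comfyuiPath}`);
```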

---

## 📁 Project Structure

```
src/
  lib/            # Constants & utilities
```

---

## Contributing

Contributions are welcome! Feel free to open issues and pull requests.

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file

---

<div align="center">

**Built with privacy in mind. Your data stays on your machine.** 🔒

If you find this useful, consider giving it a ⭐

</div>
