
Commit 312753b

committed
fix(readme): update structure
1 parent 1a76496 commit 312753b

File tree

2 files changed

+533
-83
lines changed


README.md

Lines changed: 32 additions & 35 deletions
@@ -4,46 +4,43 @@ Flare AI Kit template for Social AI Agents.
 
 ## 🚀 Key Features
 
-- **AI-Powered Social Response**: Automatically monitor and respond to mentions across Twitter/X and Telegram using Gemini AI
-- **Custom Model Fine-tuning**: Train personalized models using your own dataset or provided examples
-- **TEE Security Integration**: Run in Trusted Execution Environment for hardware-level security
-- **Multi-Platform Support**: Single interface to manage multiple social media platforms with rate limiting and retry mechanisms
+- **Secure AI Execution**
+  Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.
 
-## 🎯 Getting Started
+- **Built-in Chat UI**
+  Interact with your AI via a TEE-served chat interface.
 
-1. **Build with Docker**:
-   ```bash
-   # Build the image
-   docker build -t flare-ai-social .
-
-   # Run the container
-   docker run -p 80:80 -it --env-file .env flare-ai-social
-   ```
+- **Gemini Fine-Tuning Support**
+  Fine-tune foundational models with custom datasets.
 
-2. **Access UI**: Navigate to `http://localhost:80`
+- **Social media integrations**
+  X and Telegram integrations with rate limiting and retry mechanisms.
 
-## 🛠 Build Manually
+## 🎯 Getting Started
 
-### Fine tuning a model over a dataset
+### Prerequisites
 
-1. **Prepare Environment File**: Rename `.env.example` to `.env` and update these model fine-tuning parameters:
+- [uv](https://docs.astral.sh/uv/getting-started/installation/)
 
-| Parameter | Description | Default |
-|-----------|-------------|---------|
-| `tuned_model_name` | Name of the newly tuned model | pugo-hilion |
-| `tuning_source_model` | Name of the foundational model to tune on | models/gemini-1.5-flash-001-tuning |
-| `epoch_count` | Number of tuning epochs to run. An epoch is a pass over the whole dataset | 30 |
-| `batch_size` | Number of examples to use in each training batch | 4 |
-| `learning_rate` | Step size multiplier for the gradient updates | 0.001 |
+### Fine-tune a model
 
+1. **Prepare Environment File**: Rename `.env.example` to `.env` and update these model fine-tuning parameters:
+
+   | Parameter             | Description                                                               | Default                            |
+   | --------------------- | ------------------------------------------------------------------------- | ---------------------------------- |
+   | `tuned_model_name`    | Name of the newly tuned model                                             | pugo-hilion                        |
+   | `tuning_source_model` | Name of the foundational model to tune on                                 | models/gemini-1.5-flash-001-tuning |
+   | `epoch_count`         | Number of tuning epochs to run. An epoch is a pass over the whole dataset | 30                                 |
+   | `batch_size`          | Number of examples to use in each training batch                          | 4                                  |
+   | `learning_rate`       | Step size multiplier for the gradient updates                             | 0.001                              |
 
 2. **Prepare a dataset:**
    An example dataset is provided in `src/data/training_data.json`, which consists of tweets from
   [Hugo Philion's X](https://x.com/HugoPhilion) account. You can use any publicly available dataset
   for model fine-tuning.
 
-3. **Tune a new model**
-   Set the name of the new tuned model in `src/flare_ai_social/tune_model.py`, then:
+3. **Tune a new model:**
+   Depending on the size of your dataset, this process can take several minutes:
 
    ```bash
    uv run start-tuning
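For reference, the fine-tuning parameters from the table above would look like this in the `.env` file, assuming plain `KEY=value` dotenv syntax (the values shown are the documented defaults; check `.env.example` for the exact key names your version expects):

```shell
# Model fine-tuning settings (documented defaults)
tuned_model_name=pugo-hilion
tuning_source_model=models/gemini-1.5-flash-001-tuning
epoch_count=30
batch_size=4
learning_rate=0.001
```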
@@ -55,28 +52,28 @@ Flare AI Kit template for Social AI Agents.
 
    ![pugo-hilion_mean_loss](https://github.com/user-attachments/assets/f6c4d82b-678a-4ae5-bfb7-39dc59e1103d)
 
-5. **Test the new model**
+5. **Test the new model:**
    Select the new tuned model and compare it against a set of prompting techniques (zero-shot, few-shot and chain-of-thought):
 
    ```bash
    uv run start-compare
    ```
 
-6. **Start Social Bots**:
+6. **Start Social Bots (optional):**
+
    - Set up Twitter/X API credentials
    - Configure Telegram bot token
    - Enable/disable platforms as needed
+
    ```bash
    uv run start-bots
    ```
 
-### Build using Docker (Recommended)
-
-**Note:** You can only perform this step once you have finishing training a new model.
+### Interact with model
 
 The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.
 
-1. **Build the Docker Image:**
+1. **Build the Docker image**:
 
    ```bash
    docker build -t flare-ai-social .
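The earlier note that tuning time depends on dataset size follows directly from the `epoch_count` and `batch_size` parameters in the table: each epoch is one pass over the dataset, processed in batches. A small sketch of the arithmetic (the helper name is mine, not part of the kit):

```python
import math

def total_update_steps(num_examples: int, batch_size: int = 4, epoch_count: int = 30) -> int:
    """Total gradient updates for a tuning run: steps per epoch times epochs.

    Defaults mirror the README's parameter table (batch_size=4, epoch_count=30).
    """
    steps_per_epoch = math.ceil(num_examples / batch_size)  # one full pass over the dataset
    return steps_per_epoch * epoch_count

# e.g. a 100-example dataset: 25 steps per epoch over 30 epochs
print(total_update_steps(100))
```

Doubling the dataset roughly doubles the step count, which is why larger datasets take proportionally longer to tune.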
@@ -89,11 +86,11 @@ The Docker setup mimics a TEE environment and includes an Nginx server for routi
    ```
 
 3. **Access the Frontend:**
-   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the Chat UI.
+   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the tuned model via the Chat UI.
 
 ## 📁 Repo Structure
 
-```
+```plaintext
 src/flare_ai_social/
 ├── ai/                  # AI Provider implementations
 │   ├── base.py          # Base AI provider abstraction
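The `ai/base.py` entry in the repo tree above is the base AI provider abstraction. A hypothetical sketch of what such an abstraction might look like (class and method names here are illustrative assumptions, not the kit's actual API; the real interface lives in `src/flare_ai_social/ai/base.py`):

```python
from abc import ABC, abstractmethod

class BaseAIProvider(ABC):
    """Illustrative base class: each AI backend implements a common interface."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""

class EchoProvider(BaseAIProvider):
    """Toy concrete provider, used only for this illustration."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

print(EchoProvider().generate("hello"))
```

An abstraction like this lets the bots and chat UI swap between a tuned Gemini model and other backends without changing calling code.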
@@ -226,4 +223,4 @@ If you encounter issues, follow these steps:
 
 ## 💡 Next Steps
 
-TODO
+TODO
