Commit d5b62a3 (parent e5761e7)

feat: readme based on the structure

1 file changed: +71 −15 lines

README.md: 71 additions, 15 deletions
Flare AI Kit template for Social AI Agents.

## 🚀 Key Features

- **AI-Powered Social Response**: Automatically monitor and respond to mentions across Twitter/X and Telegram using Gemini AI
- **Custom Model Fine-tuning**: Train personalized models using your own dataset or provided examples
- **TEE Security Integration**: Run in a Trusted Execution Environment for hardware-level security
- **Multi-Platform Support**: Single interface to manage multiple social media platforms with rate limiting and retry mechanisms

## 🎯 Getting Started

1. **Build with Docker**:

   ```bash
   # Build the image
   docker build -t flare-ai-social .

   # Run the container
   docker run -p 80:80 -it --env-file .env flare-ai-social
   ```

2. **Access UI**: Navigate to `http://localhost:80`

## 🛠 Build Manually

### Fine-tuning a model over a dataset

1. **Prepare Environment File**: Rename `.env.example` to `.env` and update these model fine-tuning parameters:

   | Parameter | Description | Default |
   | --------- | ----------- | ------- |
   | `tuned_model_name` | Name of the newly tuned model | `pugo-hilion` |
   | `tuning_source_model` | Name of the foundational model to tune on | `models/gemini-1.5-flash-001-tuning` |
   | `epoch_count` | Number of tuning epochs to run. An epoch is a pass over the whole dataset. | `30` |
   | `batch_size` | Number of examples to use in each training batch | `4` |
   | `learning_rate` | Step size multiplier for the gradient updates | `0.001` |

2. **Prepare a dataset:**
   An example dataset is provided in `src/data/training_data.json`, which consists of tweets from
   [Hugo Philion's X](https://x.com/HugoPhilion) account. You can use any publicly available dataset
   for model fine-tuning.

3. **Tune a new model**
   Set the name of the new tuned model in `src/flare_ai_social/tune_model.py`, then:

   ```bash
   uv run start-tuning
   ```

4. **Observe loss parameters:**
   After tuning is complete, a training loss PNG corresponding to the new model will be saved in the root folder.
   Ideally the loss should minimize to near 0 after several training epochs.

   ![pugo-hilion_mean_loss](https://github.com/user-attachments/assets/f6c4d82b-678a-4ae5-bfb7-39dc59e1103d)

5. **Test the new model**
   Select the new tuned model and compare it against a set of prompting techniques (zero-shot, few-shot and chain-of-thought):

   ```bash
   uv run start-compare
   ```

6. **Start Social Bots**:
   - Set up Twitter/X API credentials
   - Configure Telegram bot token
   - Enable/disable platforms as needed

   ```bash
   uv run start-bots
   ```

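The fine-tuning parameters from the table above are set in `.env`. A hypothetical fragment for illustration only (key names are taken from the parameter table; the exact casing and any additional keys in `.env.example` are assumptions):

```
tuned_model_name=pugo-hilion
tuning_source_model=models/gemini-1.5-flash-001-tuning
epoch_count=30
batch_size=4
learning_rate=0.001
```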
### Build using Docker (Recommended)

**Note:** You can only perform this step once you have finished training a new model.
…

The Docker setup mimics a TEE environment and includes an Nginx server for routing.

…

3. **Access the Frontend:**
   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the Chat UI.

## 📁 Repo Structure

```
src/flare_ai_social/
├── ai/                      # AI Provider implementations
│   ├── base.py              # Base AI provider abstraction
│   ├── gemini.py            # Google Gemini integration
│   └── openrouter.py        # OpenRouter integration
├── api/                     # API layer
│   └── routes/              # API endpoint definitions
├── attestation/             # TEE attestation implementation
│   ├── vtpm_attestation.py  # vTPM client
│   └── vtpm_validation.py   # Token validation
├── prompts/                 # Prompt engineering templates
│   └── templates.py         # Different prompt strategies
├── telegram/                # Telegram bot implementation
│   └── service.py           # Telegram service logic
├── twitter/                 # Twitter bot implementation
│   └── service.py           # Twitter service logic
├── bot_manager.py           # Bot orchestration
├── main.py                  # FastAPI application
├── settings.py              # Configuration settings
└── tune_model.py            # Model fine-tuning utilities
```

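The layout above hints at the design: bot services in `telegram/` and `twitter/` depend on a provider abstraction in `ai/base.py`, so Gemini and OpenRouter are interchangeable. A minimal sketch of that pattern; the class and method names here are illustrative assumptions, not the Kit's actual API:

```python
from abc import ABC, abstractmethod


class BaseAIProvider(ABC):
    """Illustrative provider interface; the real base.py may differ."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return the model's reply to `prompt`."""


class EchoProvider(BaseAIProvider):
    """Stand-in provider used here instead of a real Gemini/OpenRouter client."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"


def respond_to_mention(provider: BaseAIProvider, mention: str) -> str:
    # Bot services depend only on the abstract interface, so swapping
    # one backend for another requires no changes to the service logic.
    return provider.generate(mention)
```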
## 🚀 Deploy on TEE

Deploy on a [Confidential Space](https://cloud.google.com/confidential-computing/confidential-space/docs/confidential-space-overview) using AMD SEV.
…

If you encounter issues, follow these steps:

…

3. **Check Firewall Settings:**
   Confirm that your instance is publicly accessible on port `80`.
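A quick way to verify reachability from another machine is a plain TCP connect. A small self-contained helper (the function name is our own, not part of the Kit):

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `is_port_open("your-instance-ip", 80)` should return `True` once the firewall rule is in place.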
## 💡 Next Steps

TODO
