Commit 819523b

feat(social): add twitter and telegram integration (#3)
2 parents: 2268ff0 + 312753b

12 files changed: 1,829 additions, 80 deletions

.env.example

Lines changed: 21 additions & 15 deletions
```diff
@@ -1,20 +1,26 @@
-# Local variables
-GEMINI_API_KEY=YOUR_GEMINI_API_KEY
+# Google Gemini AI
+GEMINI_API_KEY=your_gemini_api_key
 TUNED_MODEL_NAME=pugo-hilion
 
-# Tuning parameters
+# Twitter/X Bot Settings
+ENABLE_TWITTER=true
+X_API_KEY=your_twitter_api_key
+X_API_KEY_SECRET=your_twitter_api_secret
+X_ACCESS_TOKEN=your_twitter_access_token
+X_ACCESS_TOKEN_SECRET=your_twitter_access_token_secret
+RAPIDAPI_KEY=your_rapidapi_key # get an api key from https://rapidapi.com/davethebeast/api/twitter241
+RAPIDAPI_HOST=twitter241.p.rapidapi.com
+TWITTER_ACCOUNTS_TO_MONITOR=@YourAccount,@AnotherAccount,keyword
+TWITTER_POLLING_INTERVAL=60
+
+# Telegram Bot Settings
+ENABLE_TELEGRAM=true
+TELEGRAM_API_TOKEN=your_telegram_bot_token
+TELEGRAM_ALLOWED_USERS=2323233,32234 # empty to allow all accounts to interact with the bot
+TELEGRAM_POLLING_INTERVAL=5
+
+# Fine-tuning Parameters
 TUNING_SOURCE_MODEL=models/gemini-1.5-flash-001-tuning
 TUNING_EPOCH_COUNT=100
 TUNING_BATCH_SIZE=4
-TUNING_LEARNING_RATE=0.001
-
-# For TEE deployment only
-TEE_IMAGE_REFERENCE=ghcr.io/YOUR_REPO_IMAGE:main
-INSTANCE_NAME=PROJECT_NAME-TEAM-_NAME
-
-# X API (optional)
-X_API_KEY=YOUR_X_API_KEY
-X_API_KEY_SECRET=YOUR_X_API_KEY_SECRET
-X_BEARER_TOKEN=YOUR_X_BEARER_TOKEN
-X_ACCESS_TOKEN=YOUR_X_ACCESS_TOKEN
-X_ACCESS_TOKEN_SECRET=YOUR_X_ACCESS_TOKEN_SECRET
+TUNING_LEARNING_RATE=0.001
```
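Every value in a `.env` file arrives as a string, so the booleans, integers, and comma-separated lists above need explicit conversion on the Python side. A minimal illustrative sketch (the helper names here are hypothetical, not from this repo):

```python
import os

def parse_bool(value: str) -> bool:
    # Treat "true"/"1"/"yes" (any case) as enabled.
    return value.strip().lower() in {"true", "1", "yes"}

def parse_csv(value: str) -> list[str]:
    # Split a comma-separated list, dropping empty entries.
    return [item.strip() for item in value.split(",") if item.strip()]

# Example values mirroring .env.example (set here so the snippet is self-contained)
os.environ.setdefault("ENABLE_TWITTER", "true")
os.environ.setdefault("TWITTER_ACCOUNTS_TO_MONITOR", "@YourAccount,@AnotherAccount,keyword")
os.environ.setdefault("TWITTER_POLLING_INTERVAL", "60")

enable_twitter = parse_bool(os.environ["ENABLE_TWITTER"])
accounts = parse_csv(os.environ["TWITTER_ACCOUNTS_TO_MONITOR"])
interval = int(os.environ["TWITTER_POLLING_INTERVAL"])
print(enable_twitter, accounts, interval)
```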

README.md

Lines changed: 74 additions & 21 deletions
```diff
@@ -2,54 +2,78 @@
 
 Flare AI Kit template for Social AI Agents.
 
-## 🏗️ Build & Run Instructions
+## 🚀 Key Features
 
-**Prepare the Environment File:**
-Rename `.env.example` to `.env` and update the variables accordingly.
-Some parameters are specific to model fine-tuning:
+- **Secure AI Execution**
+  Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.
 
-| Parameter | Description | Default |
-| --------------------- | -------------------------------------------------------------------------- | ------------------------------------ |
-| `tuned_model_name` | Name of the newly tuned model. | `pugo-hilion` |
-| `tuning_source_model` | Name of the foundational model to tune on. | `models/gemini-1.5-flash-001-tuning` |
-| `epoch_count` | Number of tuning epochs to run. An epoch is a pass over the whole dataset. | `30` |
-| `batch_size` | Number of examples to use in each training batch. | `4` |
-| `learning_rate` | Step size multiplier for the gradient updates. | `0.001` |
+- **Built-in Chat UI**
+  Interact with your AI via a TEE-served chat interface.
 
-### Fine tuning a model over a dataset
+- **Gemini Fine-Tuning Support**
+  Fine-tune foundational models with custom datasets.
 
-1. **Prepare a dataset:**
+- **Social media integrations**
+  X and Telegram integrations with rate limiting and retry mechanisms.
+
+## 🎯 Getting Started
+
+### Prerequisites
+
+- [uv](https://docs.astral.sh/uv/getting-started/installation/)
+
+### Fine-tune a model
+
+1. **Prepare Environment File**: Rename `.env.example` to `.env` and update these model fine-tuning parameters:
+
+   | Parameter             | Description                                                               | Default                            |
+   | --------------------- | ------------------------------------------------------------------------- | ---------------------------------- |
+   | `tuned_model_name`    | Name of the newly tuned model                                             | pugo-hilion                        |
+   | `tuning_source_model` | Name of the foundational model to tune on                                 | models/gemini-1.5-flash-001-tuning |
+   | `epoch_count`         | Number of tuning epochs to run. An epoch is a pass over the whole dataset | 30                                 |
+   | `batch_size`          | Number of examples to use in each training batch                          | 4                                  |
+   | `learning_rate`       | Step size multiplier for the gradient updates                             | 0.001                              |
```
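For intuition about these parameters: the total number of gradient updates is `epoch_count × ⌈dataset size / batch_size⌉`, since each epoch is one full pass over the dataset and each batch produces one update. A quick back-of-the-envelope check (the dataset size of 280 is a hypothetical example, not the size of the bundled dataset):

```python
import math

def num_update_steps(dataset_size: int, batch_size: int, epoch_count: int) -> int:
    # One gradient update per batch; one epoch is a full pass over the dataset.
    return epoch_count * math.ceil(dataset_size / batch_size)

# e.g. a 280-example dataset with the table defaults batch_size=4, epoch_count=30
steps = num_update_steps(dataset_size=280, batch_size=4, epoch_count=30)
print(steps)  # → 2100
```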
````diff
+
+2. **Prepare a dataset:**
    An example dataset is provided in `src/data/training_data.json`, which consists of tweets from
    [Hugo Philion's X](https://x.com/HugoPhilion) account. You can use any publicly available dataset
    for model fine-tuning.
 
-2. **Tune a new model**
-   Set the name of the new tuned model in `src/flare_ai_social/tune_model.py`, then:
+3. **Tune a new model:**
+   Depending on the size of your dataset, this process can take several minutes:
 
    ```bash
    uv run start-tuning
    ```
 
-3. **Observe loss parameters:**
+4. **Observe loss parameters:**
    After tuning is complete, a training loss PNG will be saved in the root folder corresponding to the new model.
    Ideally the loss should minimize to near 0 after several training epochs.
 
    ![pugo-hilion_mean_loss](https://github.com/user-attachments/assets/f6c4d82b-678a-4ae5-bfb7-39dc59e1103d)
 
-4. **Test the new model**
+5. **Test the new model:**
    Select the new tuned model and compare it against a set of prompting techniques (zero-shot, few-shot and chain-of-thought):
 
    ```bash
    uv run start-compare
    ```
 
-### Build using Docker (Recommended)
+6. **Start Social Bots (optional):**
+
+   - Set up Twitter/X API credentials
+   - Configure Telegram bot token
+   - Enable/disable platforms as needed
 
-**Note:** You can only perform this step once you have finishing training a new model.
+   ```bash
+   uv run start-bots
+   ```
````
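The feature list mentions rate limiting and retry mechanisms for the social integrations. The usual pattern is exponential backoff around each polling call; the sketch below is a generic illustration of that pattern, not the repo's actual implementation:

```python
import time

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 1.0, sleep=time.sleep):
    """Retry fn() with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between attempts.
            sleep(base_delay * (2 ** attempt))

# Example: a flaky call that succeeds on the third try
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("rate limited")
    return "ok"

result = call_with_retries(flaky, sleep=lambda _: None)  # no real sleeping in the demo
print(result)  # → ok
```

The injectable `sleep` makes the backoff testable without waiting; a production loop would keep the default `time.sleep`.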
````diff
+
+### Interact with model
 
 The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.
 
-1. **Build the Docker Image:**
+1. **Build the Docker image**:
 
    ```bash
    docker build -t flare-ai-social .
@@ -62,7 +86,32 @@ The Docker setup mimics a TEE environment and includes an Nginx server for routi
    ```
 
 3. **Access the Frontend:**
-   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the Chat UI.
+   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the tuned model via the Chat UI.
+
+## 📁 Repo Structure
+
+```plaintext
+src/flare_ai_social/
+├── ai/                       # AI Provider implementations
+│   ├── base.py               # Base AI provider abstraction
+│   ├── gemini.py             # Google Gemini integration
+│   └── openrouter.py         # OpenRouter integration
+├── api/                      # API layer
+│   └── routes/               # API endpoint definitions
+├── attestation/              # TEE attestation implementation
+│   ├── vtpm_attestation.py   # vTPM client
+│   └── vtpm_validation.py    # Token validation
+├── prompts/                  # Prompt engineering templates
+│   └── templates.py          # Different prompt strategies
+├── telegram/                 # Telegram bot implementation
+│   └── service.py            # Telegram service logic
+├── twitter/                  # Twitter bot implementation
+│   └── service.py            # Twitter service logic
+├── bot_manager.py            # Bot orchestration
+├── main.py                   # FastAPI application
+├── settings.py               # Configuration settings
+└── tune_model.py             # Model fine-tuning utilities
+```
````
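`uv run start-bots` hands control to `bot_manager.py`, which orchestrates whichever platforms are enabled. A simplified sketch of that orchestration idea, with dummy polling loops standing in for the real Twitter/Telegram services (this is illustrative, not the repo's actual code):

```python
import asyncio

async def run_twitter_bot(interval: float) -> None:
    # Placeholder polling loop; a real service would fetch mentions each tick.
    for _ in range(3):
        await asyncio.sleep(interval)

async def run_telegram_bot(interval: float) -> None:
    # Placeholder polling loop; a real service would fetch updates each tick.
    for _ in range(3):
        await asyncio.sleep(interval)

async def start_enabled_bots(enable_twitter: bool, enable_telegram: bool) -> int:
    # Collect only the bots that are switched on, then run them concurrently.
    tasks = []
    if enable_twitter:
        tasks.append(run_twitter_bot(0.01))
    if enable_telegram:
        tasks.append(run_telegram_bot(0.01))
    await asyncio.gather(*tasks)
    return len(tasks)

started = asyncio.run(start_enabled_bots(enable_twitter=True, enable_telegram=True))
print(started)  # → 2
```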
```diff
 
 ## 🚀 Deploy on TEE
 
@@ -171,3 +220,7 @@ If you encounter issues, follow these steps:
 
 3. **Check Firewall Settings:**
    Confirm that your instance is publicly accessible on port `80`.
+
+## 💡 Next Steps
+
+TODO
```

pyproject.toml

Lines changed: 7 additions & 1 deletion
```diff
@@ -19,12 +19,18 @@ dependencies = [
     "structlog>=25.1.0",
     "tweepy>=4.15.0",
     "uvicorn>=0.34.0",
+    "aiohttp>=3.9.0",
+    "python-dotenv>=1.0.0",
+    "python-telegram-bot>=20.7",
 ]
 
 [project.scripts]
 start-compare = "flare_ai_social.compare:start"
 start-tuning = "flare_ai_social.tune_model:start"
 start-backend = "flare_ai_social.main:start"
+start-twitter = "flare_ai_social.twitter:start"
+start-telegram = "flare_ai_social.telegram:start"
+start-bots = "flare_ai_social.bot_manager:start_bot_manager"
 
 [build-system]
 requires = ["hatchling"]
@@ -87,4 +93,4 @@ reportUnusedExpression = true
 reportUnnecessaryTypeIgnoreComment = true
 reportMatchNotExhaustive = true
 reportImplicitOverride = true
-reportShadowedImports = true
+reportShadowedImports = true
```
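Each new `[project.scripts]` entry maps a console command to a `module:function` target, so `uv run start-bots` ultimately imports `flare_ai_social.bot_manager` and calls `start_bot_manager()`. A sketch of how such a spec resolves to a callable, demonstrated against the stdlib `math` module rather than this package:

```python
import importlib

def resolve_entry_point(spec: str):
    """Resolve a 'module:function' entry-point spec to a callable."""
    module_name, _, attr = spec.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Illustrative: resolve a stdlib function the same way a script entry is resolved
fn = resolve_entry_point("math:sqrt")
print(fn(9.0))  # → 3.0
```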

src/flare_ai_social/__init__.py

Lines changed: 2 additions & 6 deletions
```diff
@@ -1,10 +1,6 @@
 from flare_ai_social.ai import GeminiProvider
 from flare_ai_social.api import ChatRouter, router
 from flare_ai_social.attestation import Vtpm
+from flare_ai_social.bot_manager import start_bot_manager
 
-__all__ = [
-    "ChatRouter",
-    "GeminiProvider",
-    "Vtpm",
-    "router",
-]
+__all__ = ["ChatRouter", "GeminiProvider", "Vtpm", "router", "start_bot_manager"]
```
