36 changes: 21 additions & 15 deletions .env.example
@@ -1,20 +1,26 @@
# Local variables
GEMINI_API_KEY=YOUR_GEMINI_API_KEY
# Google Gemini AI
GEMINI_API_KEY=your_gemini_api_key
TUNED_MODEL_NAME=pugo-hilion

# Tuning parameters
# Twitter/X Bot Settings
ENABLE_TWITTER=true
X_API_KEY=your_twitter_api_key
X_API_KEY_SECRET=your_twitter_api_secret
X_ACCESS_TOKEN=your_twitter_access_token
X_ACCESS_TOKEN_SECRET=your_twitter_access_token_secret
RAPIDAPI_KEY=your_rapidapi_key # get an api key from https://rapidapi.com/davethebeast/api/twitter241
RAPIDAPI_HOST=twitter241.p.rapidapi.com
TWITTER_ACCOUNTS_TO_MONITOR=@YourAccount,@AnotherAccount,keyword
TWITTER_POLLING_INTERVAL=60

# Telegram Bot Settings
ENABLE_TELEGRAM=true
TELEGRAM_API_TOKEN=your_telegram_bot_token
TELEGRAM_ALLOWED_USERS=2323233,32234 # leave empty to allow any user to interact with the bot
TELEGRAM_POLLING_INTERVAL=5

# Fine-tuning Parameters
TUNING_SOURCE_MODEL=models/gemini-1.5-flash-001-tuning
TUNING_EPOCH_COUNT=100
TUNING_BATCH_SIZE=4
TUNING_LEARNING_RATE=0.001

# For TEE deployment only
TEE_IMAGE_REFERENCE=ghcr.io/YOUR_REPO_IMAGE:main
INSTANCE_NAME=PROJECT_NAME-TEAM_NAME

# X API (optional)
X_API_KEY=YOUR_X_API_KEY
X_API_KEY_SECRET=YOUR_X_API_KEY_SECRET
X_BEARER_TOKEN=YOUR_X_BEARER_TOKEN
X_ACCESS_TOKEN=YOUR_X_ACCESS_TOKEN
X_ACCESS_TOKEN_SECRET=YOUR_X_ACCESS_TOKEN_SECRET
TUNING_LEARNING_RATE=0.001
95 changes: 74 additions & 21 deletions README.md
Expand Up @@ -2,54 +2,78 @@

Flare AI Kit template for Social AI Agents.

## 🏗️ Build & Run Instructions
## 🚀 Key Features

**Prepare the Environment File:**
Rename `.env.example` to `.env` and update the variables accordingly.
Some parameters are specific to model fine-tuning:
- **Secure AI Execution**
Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.

| Parameter | Description | Default |
| --------------------- | -------------------------------------------------------------------------- | ------------------------------------ |
| `tuned_model_name` | Name of the newly tuned model. | `pugo-hilion` |
| `tuning_source_model` | Name of the foundational model to tune on. | `models/gemini-1.5-flash-001-tuning` |
| `epoch_count` | Number of tuning epochs to run. An epoch is a pass over the whole dataset. | `30` |
| `batch_size` | Number of examples to use in each training batch. | `4` |
| `learning_rate` | Step size multiplier for the gradient updates. | `0.001` |
- **Built-in Chat UI**
Interact with your AI via a TEE-served chat interface.

### Fine tuning a model over a dataset
- **Gemini Fine-Tuning Support**
Fine-tune foundational models with custom datasets.

1. **Prepare a dataset:**
- **Social Media Integrations**
  X and Telegram integrations with rate limiting and retry mechanisms.

## 🎯 Getting Started

### Prerequisites

- [uv](https://docs.astral.sh/uv/getting-started/installation/)

### Fine-tune a model

1. **Prepare Environment File**: Rename `.env.example` to `.env` and update these model fine-tuning parameters:

| Parameter | Description | Default |
| --------------------- | ------------------------------------------------------------------------- | ---------------------------------- |
| `tuned_model_name` | Name of the newly tuned model | pugo-hilion |
| `tuning_source_model` | Name of the foundational model to tune on | models/gemini-1.5-flash-001-tuning |
| `epoch_count` | Number of tuning epochs to run. An epoch is a pass over the whole dataset | 30 |
| `batch_size` | Number of examples to use in each training batch | 4 |
| `learning_rate` | Step size multiplier for the gradient updates | 0.001 |

2. **Prepare a dataset:**
An example dataset is provided in `src/data/training_data.json`, which consists of tweets from
[Hugo Philion's X](https://x.com/HugoPhilion) account. You can use any publicly available dataset
for model fine-tuning.
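   The Gemini tuning API expects rows with `text_input`/`output` fields; a sketch of assembling such a file (the example strings and the output filename are hypothetical — see `src/data/training_data.json` for the real data):

   ```python
   import json

   # Each row pairs a prompt with the reply the tuned model should learn.
   examples = [
       {"text_input": "What is Flare?", "output": "Flare is the blockchain for data."},
       {"text_input": "gm", "output": "gm."},
   ]

   with open("training_data_example.json", "w", encoding="utf-8") as f:
       json.dump(examples, f, ensure_ascii=False, indent=2)
   ```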

2. **Tune a new model**
Set the name of the new tuned model in `src/flare_ai_social/tune_model.py`, then:
3. **Tune a new model:**
Depending on the size of your dataset, this process can take several minutes:

```bash
uv run start-tuning
```
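   Under the hood, `start-tuning` drives the Gemini tuning endpoint with the `TUNING_*` parameters from `.env`. A hedged sketch of that flow using the `google-generativeai` SDK (the exact call shape is an assumption; the repo's `tune_model.py` is authoritative):

   ```python
   import json
   import os


   def build_tuning_config() -> dict:
       """Mirror the TUNING_* variables from .env (defaults match the table above)."""
       return {
           "source_model": os.getenv(
               "TUNING_SOURCE_MODEL", "models/gemini-1.5-flash-001-tuning"
           ),
           "id": os.getenv("TUNED_MODEL_NAME", "pugo-hilion"),
           "epoch_count": int(os.getenv("TUNING_EPOCH_COUNT", "30")),
           "batch_size": int(os.getenv("TUNING_BATCH_SIZE", "4")),
           "learning_rate": float(os.getenv("TUNING_LEARNING_RATE", "0.001")),
       }


   def run_tuning(config: dict) -> None:
       """Hypothetical SDK usage; invoked here only via the script entry point."""
       import google.generativeai as genai  # deferred: needs GEMINI_API_KEY

       genai.configure(api_key=os.environ["GEMINI_API_KEY"])
       with open("src/data/training_data.json", encoding="utf-8") as f:
           training_data = json.load(f)
       operation = genai.create_tuned_model(
           source_model=config["source_model"],
           training_data=training_data,
           id=config["id"],
           epoch_count=config["epoch_count"],
           batch_size=config["batch_size"],
           learning_rate=config["learning_rate"],
       )
       operation.result()  # blocks until tuning finishes
   ```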

3. **Observe loss parameters:**
4. **Observe loss parameters:**
   After tuning is complete, a training-loss PNG corresponding to the new model is saved in the root folder.
   Ideally, the loss should fall to near zero after several training epochs.

![pugo-hilion_mean_loss](https://github.com/user-attachments/assets/f6c4d82b-678a-4ae5-bfb7-39dc59e1103d)

4. **Test the new model**
5. **Test the new model:**
Select the new tuned model and compare it against a set of prompting techniques (zero-shot, few-shot and chain-of-thought):

```bash
uv run start-compare
```
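   The three baselines differ only in how the prompt is framed. Hypothetical templates illustrating each strategy (the repo's real templates live in `prompts/templates.py`):

   ```python
   # Zero-shot: ask directly, with no examples.
   ZERO_SHOT = "Reply to this tweet in the tuned persona's voice:\n{tweet}"

   # Few-shot: prepend worked examples before the real input.
   FEW_SHOT = (
       "Here are example replies in the persona's voice:\n"
       "Tweet: {example_tweet}\nReply: {example_reply}\n\n"
       "Now reply to:\n{tweet}"
   )

   # Chain-of-thought: ask the model to reason before answering.
   CHAIN_OF_THOUGHT = (
       "Reply to this tweet in the tuned persona's voice:\n{tweet}\n"
       "First, think step by step about the tweet's topic and tone, "
       "then write the reply."
   )

   prompt = ZERO_SHOT.format(tweet="gm")
   ```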

### Build using Docker (Recommended)
6. **Start Social Bots (optional):**

- Set up Twitter/X API credentials
- Configure Telegram bot token
- Enable/disable platforms as needed

**Note:** You can only perform this step once you have finished training a new model.
```bash
uv run start-bots
```
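   Both services poll their APIs on the intervals set in `.env` (`TWITTER_POLLING_INTERVAL`, `TELEGRAM_POLLING_INTERVAL`) and retry transient failures. A minimal backoff sketch in the spirit of that handling (hypothetical helper, not the repo's actual service code):

   ```python
   import time


   def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
       """Call fn, retrying with exponential backoff on any exception."""
       for attempt in range(attempts):
           try:
               return fn()
           except Exception:
               if attempt == attempts - 1:
                   raise  # out of retries: surface the error
               time.sleep(base_delay * 2**attempt)
   ```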

### Interact with model

The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.

1. **Build the Docker Image:**
1. **Build the Docker image**:

```bash
docker build -t flare-ai-social .
Expand All @@ -62,7 +86,32 @@ The Docker setup mimics a TEE environment and includes an Nginx server for routi
```

3. **Access the Frontend:**
Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the Chat UI.
Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the tuned model via the Chat UI.

## 📁 Repo Structure

```plaintext
src/flare_ai_social/
├── ai/ # AI Provider implementations
│ ├── base.py # Base AI provider abstraction
│ ├── gemini.py # Google Gemini integration
│ └── openrouter.py # OpenRouter integration
├── api/ # API layer
│ └── routes/ # API endpoint definitions
├── attestation/ # TEE attestation implementation
│ ├── vtpm_attestation.py # vTPM client
│ └── vtpm_validation.py # Token validation
├── prompts/ # Prompt engineering templates
│ └── templates.py # Different prompt strategies
├── telegram/ # Telegram bot implementation
│ └── service.py # Telegram service logic
├── twitter/ # Twitter bot implementation
│ └── service.py # Twitter service logic
├── bot_manager.py # Bot orchestration
├── main.py # FastAPI application
├── settings.py # Configuration settings
└── tune_model.py # Model fine-tuning utilities
```

## 🚀 Deploy on TEE

Expand Down Expand Up @@ -171,3 +220,7 @@ If you encounter issues, follow these steps:

3. **Check Firewall Settings:**
Confirm that your instance is publicly accessible on port `80`.

## 💡 Next Steps

TODO
8 changes: 7 additions & 1 deletion pyproject.toml
Expand Up @@ -19,12 +19,18 @@ dependencies = [
"structlog>=25.1.0",
"tweepy>=4.15.0",
"uvicorn>=0.34.0",
"aiohttp>=3.9.0",
"python-dotenv>=1.0.0",
"python-telegram-bot>=20.7",
]

[project.scripts]
start-compare = "flare_ai_social.compare:start"
start-tuning = "flare_ai_social.tune_model:start"
start-backend = "flare_ai_social.main:start"
start-twitter = "flare_ai_social.twitter:start"
start-telegram = "flare_ai_social.telegram:start"
start-bots = "flare_ai_social.bot_manager:start_bot_manager"

[build-system]
requires = ["hatchling"]
Expand Down Expand Up @@ -87,4 +93,4 @@ reportUnusedExpression = true
reportUnnecessaryTypeIgnoreComment = true
reportMatchNotExhaustive = true
reportImplicitOverride = true
reportShadowedImports = true
reportShadowedImports = true
8 changes: 2 additions & 6 deletions src/flare_ai_social/__init__.py
@@ -1,10 +1,6 @@
from flare_ai_social.ai import GeminiProvider
from flare_ai_social.api import ChatRouter, router
from flare_ai_social.attestation import Vtpm
from flare_ai_social.bot_manager import start_bot_manager

__all__ = [
"ChatRouter",
"GeminiProvider",
"Vtpm",
"router",
]
__all__ = ["ChatRouter", "GeminiProvider", "Vtpm", "router", "start_bot_manager"]