# 🤖 Flare AI Social

A robust, extensible social media bot framework that monitors and automatically responds to mentions across multiple platforms using AI-powered responses.

## 🚀 Key Features

- **Multi-platform Support**: Monitor mentions and messages across Twitter/X and Telegram
- **AI-powered Responses**: Generate contextually relevant replies using Google's Gemini AI
- **Model Fine-tuning**: Support for custom-tuned models with an example dataset
- **Rate Limit Handling**: Built-in exponential backoff and retry mechanisms
- **TEE Integration**: Secure execution in a Trusted Execution Environment
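
The rate-limit handling mentioned above boils down to retrying failed calls with increasing delays. A minimal sketch of exponential backoff with jitter — purely illustrative, not the framework's actual implementation (the `RateLimitError` name is hypothetical):

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for a platform API's rate-limit error (hypothetical name)."""


def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call`, doubling the delay after each rate-limited attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # retries exhausted; surface the error
            # Exponential backoff with a little jitter: base, 2x base, 4x base, ...
            time.sleep(base_delay * 2**attempt + random.uniform(0, base_delay))
```

The jitter term spreads retries out so multiple bots hitting the same limit don't all retry in lockstep.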

## 🏗️ Project Structure

```plaintext
src/flare_ai_social/
├── ai/                       # AI provider implementations
│   ├── base.py               # Base AI provider abstraction
│   ├── gemini.py             # Google Gemini integration
│   └── openrouter.py         # OpenRouter integration
├── api/                      # API layer
│   └── routes/               # API endpoint definitions
├── attestation/              # TEE attestation implementation
│   ├── vtpm_attestation.py   # vTPM client
│   └── vtpm_validation.py    # Token validation
├── prompts/                  # Prompt engineering templates
│   └── templates.py          # Different prompt strategies
├── telegram/                 # Telegram bot implementation
│   └── service.py            # Telegram service logic
├── twitter/                  # Twitter bot implementation
│   └── service.py            # Twitter service logic
├── bot_manager.py            # Bot orchestration
├── main.py                   # FastAPI application
├── settings.py               # Configuration settings
└── tune_model.py             # Model fine-tuning utilities
```
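
The `ai/base.py` abstraction is what lets the platform bots stay agnostic of the model backend. A toy sketch of how such a provider interface typically looks — class and method names here are illustrative, not the repository's actual API:

```python
from abc import ABC, abstractmethod


class BaseAIProvider(ABC):
    """Illustrative provider interface; the real base class may differ."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return a model-generated reply for the given prompt."""


class EchoProvider(BaseAIProvider):
    """Dummy backend used here in place of Gemini/OpenRouter."""

    def generate(self, prompt: str) -> str:
        return f"reply to: {prompt}"


def handle_mention(provider: BaseAIProvider, mention_text: str) -> str:
    # Bots depend only on the interface, so backends are swappable.
    return provider.generate(mention_text)
```

Because bots call only `generate`, swapping Gemini for OpenRouter (or a test double) requires no changes to the platform code.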

## 🏗️ Build & Run Instructions

### Fine-tuning a Model

1. **Prepare Environment File**:
   Rename `.env.example` to `.env` and update these model fine-tuning parameters:

   | Parameter             | Description                                                                | Default                              |
   | --------------------- | -------------------------------------------------------------------------- | ------------------------------------ |
   | `tuned_model_name`    | Name of the newly tuned model.                                             | `pugo-hilion`                        |
   | `tuning_source_model` | Name of the foundational model to tune on.                                 | `models/gemini-1.5-flash-001-tuning` |
   | `epoch_count`         | Number of tuning epochs to run. An epoch is a pass over the whole dataset. | `30`                                 |
   | `batch_size`          | Number of examples to use in each training batch.                          | `4`                                  |
   | `learning_rate`       | Step size multiplier for the gradient updates.                             | `0.001`                              |

2. **Prepare Dataset**:
   - Example dataset provided in `src/data/training_data.json`
   - Based on [Hugo Philion's X](https://x.com/HugoPhilion) feed
   - Compatible with any public dataset

3. **Tune Model**:

   ```bash
   uv run start-tuning
   ```

4. **Review Training Loss**:
   After tuning is complete, a training-loss PNG for the new model is saved in the root folder.
   Ideally the loss should minimize to near 0 after several training epochs.

   ![pugo-hilion_mean_loss](https://github.com/user-attachments/assets/f6c4d82b-678a-4ae5-bfb7-39dc59e1103d)

5. **Test Model**:
   Select the new tuned model and compare it against a set of prompting techniques (zero-shot, few-shot, and chain-of-thought):

   ```bash
   uv run start-compare
   ```
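
Conceptually, `start-tuning` converts the dataset into Gemini's tuning format and schedules gradient updates according to the table above. A rough sketch of those two pieces — the `prompt`/`reply` row schema and helper names are assumptions for illustration, not the repository's actual code:

```python
import math


def to_tuning_examples(rows):
    """Map {"prompt", "reply"} rows (an assumed schema) to Gemini's
    {"text_input", "output"} tuning-example format."""
    return [{"text_input": r["prompt"], "output": r["reply"]} for r in rows]


def total_update_steps(n_examples, epoch_count=30, batch_size=4):
    """Gradient updates performed: batches per epoch times epoch count."""
    return math.ceil(n_examples / batch_size) * epoch_count


# With the google-generativeai SDK, the tuning call would look roughly like:
#   import google.generativeai as genai
#   operation = genai.create_tuned_model(
#       source_model="models/gemini-1.5-flash-001-tuning",
#       training_data=to_tuning_examples(rows),
#       epoch_count=30, batch_size=4, learning_rate=0.001,
#   )
```

For example, a 100-example dataset at `batch_size=4` and `epoch_count=30` performs 25 batches per epoch, i.e. 750 gradient updates in total.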

### Running Social Bots

1. **Configure Platforms**:
   - Set up Twitter/X API credentials
   - Configure the Telegram bot token
   - Enable/disable platforms as needed

2. **Start Bots**:

   ```bash
   uv run start-bots
   ```
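
The `start-bots` entry point hands control to `bot_manager.py`, which runs the enabled platform bots concurrently. A toy sketch of that orchestration pattern — names and structure are illustrative, not the repository's actual code:

```python
import asyncio


class BotManager:
    """Toy orchestrator: runs each enabled bot as its own task (illustrative)."""

    def __init__(self, bots):
        self.bots = bots

    async def start_all(self):
        # asyncio.gather preserves order and runs all polling loops concurrently.
        return await asyncio.gather(*(bot() for bot in self.bots))


async def telegram_bot():
    return "telegram: polling started"


async def twitter_bot():
    return "twitter: polling started"


started = asyncio.run(BotManager([telegram_bot, twitter_bot]).start_all())
```

Disabling a platform then simply means not passing its bot to the manager.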

### Build with Docker

After model training:

1. **Build Image**:

   ```bash
   docker build -t flare-ai-social .
   ```

2. **Run Container**:

   ```bash
   docker run -p 80:80 -it --env-file .env flare-ai-social
   ```

3. **Access UI**: Navigate to `http://localhost:80`

## 🚀 Deploy on TEE

Deploy on a Confidential Space instance (AMD SEV/Intel TDX) for hardware-backed security.

### Prerequisites

- GCP account with `verifiable-ai-hackathon` access
- [Gemini API key](https://aistudio.google.com/app/apikey)
- [gcloud CLI](https://cloud.google.com/sdk/docs/install) installed

### Environment Setup

1. **Configure Environment**:

   ```bash
   # In .env file
   TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-social:main
   INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
   ```

2. **Load Variables**:

   ```bash
   source .env
   ```

### Deployment

Deploy to Confidential Space (AMD SEV):

```bash
gcloud compute instances create $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-central1-c \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-GEMINI_MODEL=$GEMINI_MODEL,\
tee-env-WEB3_PROVIDER_URL=$WEB3_PROVIDER_URL,\
tee-env-SIMULATE_ATTESTATION=false \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=confidential-sa@verifiable-ai-hackathon.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/confidential-space-debug-250100,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV
```

### Post-deployment

Monitor startup in the [GCP Console](https://console.cloud.google.com/welcome?project=verifiable-ai-hackathon) under **Serial port 1**. When you see:

```plaintext
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
```

access the UI via the instance's external IP.
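
To look up that external IP from the CLI (assuming the same zone used at deployment), the standard `gcloud` describe query is:

```bash
gcloud compute instances describe $INSTANCE_NAME \
  --zone=us-central1-c \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
```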

## 💡 Example Use Cases & Next Steps

TODO: Add example use cases and next steps for the project.