Flare AI Kit template for AI x DeFi (DeFAI).
- **Secure AI Execution**: Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.
- **Built-in Chat UI**: Interact with your AI via a TEE-served chat interface.
- **Flare Blockchain and Wallet Integration**: Perform token operations and generate wallets from within the TEE.
- **Gemini 2.0 + over 300 LLMs supported**: Utilize Google Gemini's latest model with structured query support for advanced AI functionality (see the example query below).
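The structured query support can be exercised directly against the Gemini REST API, as this minimal sketch shows (the model name, prompt, and use of `responseMimeType` are illustrative assumptions; a valid `GEMINI_API_KEY` is required):

```bash
# Minimal sketch: request a JSON-structured response from Gemini via REST.
# The model name and prompt are illustrative; adjust to your configuration.
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "contents": [{"parts": [{"text": "Classify this request: swap 100 FLR to USDC"}]}],
        "generationConfig": {"responseMimeType": "application/json"}
      }'
```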
You can deploy Flare AI DeFAI using Docker (recommended) or set up the backend and frontend manually.
- **Prepare the Environment File:**
  Rename `.env.example` to `.env` and update the variables accordingly.
  **Tip:** Set `SIMULATE_ATTESTATION=true` for local testing.
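For reference, a minimal local-testing `.env` might look like the following sketch (the model name and RPC endpoint are assumptions; substitute values for your own setup):

```bash
# Hypothetical minimal .env for local testing -- adjust all values to your setup.
GEMINI_API_KEY=<your-gemini-api-key>
GEMINI_MODEL=gemini-2.0-flash                                  # assumed model name
WEB3_PROVIDER_URL=https://coston2-api.flare.network/ext/C/rpc  # assumed Coston2 RPC endpoint
SIMULATE_ATTESTATION=true                                      # skip real TEE attestation locally
```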
The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.
- **Build the Docker Image:**

  ```bash
  docker build -t flare-ai-defai .
  ```

- **Run the Docker Container:**

  ```bash
  docker run -p 80:80 -it --env-file .env flare-ai-defai
  ```

- **Access the Frontend:**
  Open your browser and navigate to `http://localhost:80` to interact with the Chat UI, or use the command-line check sketched below.
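For that command-line check, you can send a quick request to the chat endpoint (a sketch; the `message` field name is an assumption about the request schema, so check the route definitions under `src/flare_ai_defai/api/routes/` for the actual shape):

```bash
# Hypothetical smoke test of the chat API through the container on port 80.
# The JSON body shape is assumed -- verify it against the API route definitions.
curl -s -X POST http://localhost:80/api/routes/chat/ \
  -H "Content-Type: application/json" \
  -d '{"message": "Show me your remote attestation"}'
```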
Flare AI DeFAI is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:
- **Install Dependencies:**
  Use `uv` to install backend dependencies:

  ```bash
  uv sync --all-extras
  ```

- **Start the Backend:**
  The backend runs by default on `0.0.0.0:8080`:

  ```bash
  uv run start-backend
  ```
- **Install Dependencies:**
  In the `chat-ui/` directory, install the required packages using npm:

  ```bash
  cd chat-ui/
  npm install
  ```

- **Configure the Frontend:**
  Update the backend URL in `chat-ui/src/App.js` for testing:

  ```js
  const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";
  ```

  **Note:** Remember to change `BACKEND_ROUTE` back to `'api/routes/chat/'` after testing.

- **Start the Frontend:**

  ```bash
  npm start
  ```
```plaintext
src/flare_ai_defai/
├── ai/                      # AI Provider implementations
│   ├── base.py              # Base AI provider interface
│   ├── gemini.py            # Google Gemini integration
│   └── openrouter.py        # OpenRouter integration
├── api/                     # API layer
│   ├── middleware/          # Request/response middleware
│   └── routes/              # API endpoint definitions
├── attestation/             # TEE attestation
│   ├── vtpm_attestation.py  # vTPM client
│   └── vtpm_validation.py   # Token validation
├── blockchain/              # Blockchain operations
│   ├── explorer.py          # Chain explorer client
│   └── flare.py             # Flare network provider
├── prompts/                 # AI system prompts & templates
│   ├── library.py           # Prompt module library
│   ├── schemas.py           # Schema definitions
│   ├── service.py           # Prompt service module
│   └── templates.py         # Prompt templates
├── exceptions.py            # Custom errors
├── main.py                  # Primary entrypoint
└── settings.py              # Configuration settings
```
Deploy on a Confidential Space using AMD SEV.
- **Google Cloud Platform Account:**
  Access to the `verifiable-ai-hackathon` project is required.
- **Gemini API Key:**
  Ensure your Gemini API key is linked to the project.
- **gcloud CLI:**
  Install and authenticate the gcloud CLI.
- **Set Environment Variables:**
  Update your `.env` file with:

  ```bash
  TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-defai:main # Replace with your repo build image
  INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
  ```

- **Load Environment Variables:**

  ```bash
  source .env
  ```

  **Reminder:** Run the above command in every new shell session or after modifying `.env`. On Windows, we recommend using git BASH to access commands like `source`.

- **Verify the Setup:**

  ```bash
  echo $TEE_IMAGE_REFERENCE # Expected output: Your repo build image
  ```

  To check the remaining variables at once, see the loop sketched below.
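A small bash sketch for that check (the variable list mirrors the deployment command below; `${!v}` is bash indirect expansion):

```bash
# Print each deployment variable; an empty value means it is not set.
for v in TEE_IMAGE_REFERENCE INSTANCE_NAME GEMINI_API_KEY GEMINI_MODEL WEB3_PROVIDER_URL; do
  echo "$v=${!v}"
done
```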
Run the following command:
```bash
gcloud compute instances create $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-west1-b \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-GEMINI_MODEL=$GEMINI_MODEL,\
tee-env-WEB3_PROVIDER_URL=$WEB3_PROVIDER_URL,\
tee-env-SIMULATE_ATTESTATION=false \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=confidential-sa@verifiable-ai-hackathon.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/confidential-space-debug-250100,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV
```
After deployment, you should see an output similar to:

```plaintext
NAME         ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
defai-team1  us-central1-c  n2d-standard-2               10.128.0.18  34.41.127.200  RUNNING
```
It may take a few minutes for Confidential Space to complete startup checks. You can monitor progress via the GCP Console logs. Click on Compute Engine → VM Instances (in the sidebar) → Select your instance → Serial port 1 (console).
When you see a message like:

```plaintext
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
```

the container is ready. Navigate to the external IP of the instance (visible in the VM Instances page) to access the Chat UI.
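You can also fetch the external IP from the CLI rather than the console (the zone must match the one used in the create command above):

```bash
# Print the instance's external IP address.
gcloud compute instances describe $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-west1-b \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
```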
If you encounter issues, follow these steps:
- **Check Logs:**

  ```bash
  gcloud compute instances get-serial-port-output $INSTANCE_NAME --project=verifiable-ai-hackathon
  ```

- **Verify API Key(s):**
  Ensure that all API keys are set correctly (e.g. `GEMINI_API_KEY`).

- **Check Firewall Settings:**
  Confirm that your instance is publicly accessible on port `80` (see the check below).
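For that firewall check, list the project's rules and look for one that allows `tcp:80` to instances tagged `http-server` (the tag applied in the create command above; rule names vary by project, so inspect the output rather than relying on a specific name):

```bash
# List firewall rules; one of them should allow tcp:80 for the http-server tag.
gcloud compute firewall-rules list --project=verifiable-ai-hackathon
```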
Once your instance is running, access the Chat UI using its public IP address. Here are some example interactions to try:
- "Create an account for me"
- "Transfer 10 C2FLR to 0x000000000000000000000000000000000000dEaD"
- "Show me your remote attestation"
- **TLS Communication:**
  Implement RA-TLS for encrypted communication.
- **Expanded Flare Ecosystem Support:**
  Below are several detailed project ideas demonstrating how the template can be used to build autonomous AI agents for Flare's DeFi ecosystem:
- Implement a natural language command parser that translates user intent into specific protocol actions, e.g.:

  `"Swap 100 FLR to USDC and deposit as collateral on Kinetic"` →

  ```json
  {
    "action": ["swap", "deposit"],
    "protocols": ["SparkDEX", "Kinetic"],
    "amounts": [100],
    "tokens": ["FLR", "USDC"]
  }
  ```
- Add cross-protocol optimization features:
  - Automatic route splitting across DEXs for better prices
  - Gas optimization by batching multiple transactions
  - Yield optimization by comparing lending rates across protocols
- Automated token swaps and integrations with Flare ecosystem applications:
  Connect the DeFAI agent with the RAG from flare-ai-rag, trained on Flare ecosystem datasets.
- Use a transaction simulation framework such as the Tenderly Simulator to show users the expected outcome of their transaction; a hedged sketch of such a call follows.
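The call below targets Tenderly's simulation REST API as an illustration (the account/project slugs, `X-Access-Key` header usage, and transaction fields are assumptions to verify against Tenderly's current documentation; `network_id` 14 refers to Flare mainnet):

```bash
# Hypothetical simulation request against the Tenderly API.
# ACCOUNT_SLUG, PROJECT_SLUG, TENDERLY_ACCESS_KEY, and the addresses are placeholders.
curl -s -X POST "https://api.tenderly.co/api/v1/account/$ACCOUNT_SLUG/project/$PROJECT_SLUG/simulate" \
  -H "X-Access-Key: $TENDERLY_ACCESS_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "network_id": "14",
        "from": "0x0000000000000000000000000000000000000000",
        "to": "0x0000000000000000000000000000000000000000",
        "input": "0x",
        "value": "0"
      }'
```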