
Flare AI DeFAI

Flare AI Kit template for AI x DeFi (DeFAI).

🚀 Key Features

  • Secure AI Execution
    Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.

  • Built-in Chat UI
    Interact with your AI via a TEE-served chat interface.

  • Flare Blockchain and Wallet Integration
    Perform token operations and generate wallets from within the TEE.

  • Gemini 2.0 + 300+ LLMs supported
    Use Google Gemini's latest model with structured query support, or connect to over 300 other LLMs through the OpenRouter integration.

(Screenshot: Artemis, the built-in chat UI)

🎯 Getting Started

You can deploy Flare AI DeFAI using Docker (recommended) or set up the backend and frontend manually.

Environment Setup

  1. Prepare the Environment File:
    Rename .env.example to .env and update the variables accordingly.

    Tip: Set SIMULATE_ATTESTATION=true for local testing.
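
    The authoritative variable list is in .env.example; as an illustrative sketch, the variables referenced elsewhere in this README look like:

    # Illustrative values only; see .env.example for the authoritative list
    GEMINI_API_KEY=<your-gemini-api-key>
    GEMINI_MODEL=<gemini-model-name>
    WEB3_PROVIDER_URL=<flare-rpc-endpoint>
    SIMULATE_ATTESTATION=true    # true for local testing; false inside a real TEE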

Build using Docker (Recommended)

The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.

  1. Build the Docker Image:

    docker build -t flare-ai-defai .
  2. Run the Docker Container:

    docker run -p 80:80 -it --env-file .env flare-ai-defai
  3. Access the Frontend:
    Open your browser and navigate to http://localhost:80 to interact with the Chat UI.
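
If the UI does not come up, the standard Docker CLI is the quickest first check (these are generic Docker commands, not specific to this template):

    docker ps    # confirm the container is running
    docker logs $(docker ps -q --filter ancestor=flare-ai-defai)    # Supervisor, backend, and frontend output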

🛠 Build Manually

Flare AI DeFAI is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:

Backend Setup

  1. Install Dependencies:
    Use uv to install backend dependencies:

    uv sync --all-extras
  2. Start the Backend:
    The backend runs by default on 0.0.0.0:8080:

    uv run start-backend
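
Once the backend is up, you can smoke-test the chat endpoint directly. This is a sketch that assumes a JSON body with a single message field; the actual request schema is defined under src/flare_ai_defai/api/routes/:

    # "message" is an assumed field name; check the route definitions for the real schema
    curl -X POST http://localhost:8080/api/routes/chat/ \
      -H "Content-Type: application/json" \
      -d '{"message": "Create an account for me"}'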

Frontend Setup

  1. Install Dependencies:
    In the chat-ui/ directory, install the required packages using npm:

    cd chat-ui/
    npm install
  2. Configure the Frontend:
    Update the backend URL in chat-ui/src/App.js for testing:

    const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";

    Note: Remember to change BACKEND_ROUTE back to 'api/routes/chat/' after testing.

  3. Start the Frontend:

    npm start
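
    Note: Assuming Create React App defaults (an assumption; check chat-ui/package.json), npm start serves the UI at http://localhost:3000, which then sends requests to the BACKEND_ROUTE configured above.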

📁 Repo Structure

src/flare_ai_defai/
├── ai/                        # AI provider implementations
│   ├── base.py                # Base AI provider interface
│   ├── gemini.py              # Google Gemini integration
│   └── openrouter.py          # OpenRouter integration
├── api/                       # API layer
│   ├── middleware/            # Request/response middleware
│   └── routes/                # API endpoint definitions
├── attestation/               # TEE attestation
│   ├── vtpm_attestation.py    # vTPM client
│   └── vtpm_validation.py     # Token validation
├── blockchain/                # Blockchain operations
│   ├── explorer.py            # Chain explorer client
│   └── flare.py               # Flare network provider
├── prompts/                   # AI system prompts & templates
│   ├── library.py             # Prompt module library
│   ├── schemas.py             # Schema definitions
│   ├── service.py             # Prompt service module
│   └── templates.py           # Prompt templates
├── exceptions.py              # Custom errors
├── main.py                    # Primary entrypoint
└── settings.py                # Configuration settings

🚀 Deploy on TEE

Deploy on Google Cloud Confidential Space using AMD SEV.

Prerequisites

Ensure the gcloud CLI is installed and authenticated, and that you have access to the Google Cloud project used in the commands below (verifiable-ai-hackathon).

Environment Configuration

  1. Set Environment Variables:
    Update your .env file with:

    TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-defai:main  # Replace with your repo build image
    INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
  2. Load Environment Variables:

    source .env

    Reminder: Run the above command in every new shell session or after modifying .env. On Windows, we recommend Git Bash, which provides commands like source.

  3. Verify the Setup:

    echo $TEE_IMAGE_REFERENCE # Expected output: Your repo build image

Deploying to Confidential Space

Run the following command:

gcloud compute instances create $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-west1-b \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-GEMINI_MODEL=$GEMINI_MODEL,\
tee-env-WEB3_PROVIDER_URL=$WEB3_PROVIDER_URL,\
tee-env-SIMULATE_ATTESTATION=false \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=confidential-sa@verifiable-ai-hackathon.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/confidential-space-debug-250100,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV
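
A note on the metadata flags: tee-image-reference tells Confidential Space which container image to run, tee-container-log-redirect=true forwards container output to the serial console (which is what the log checks below rely on), and each tee-env-* entry is injected into the workload container as the corresponding environment variable.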

Post-deployment

  1. After deployment, you should see an output similar to:

    NAME          ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
    defai-team1   us-west1-b     n2d-standard-2               10.128.0.18  34.41.127.200  RUNNING
    
  2. It may take a few minutes for Confidential Space to complete startup checks. You can monitor progress via the GCP Console logs: click Compute Engine → VM instances in the sidebar, select your instance, then open Serial port 1 (console).

    When you see a message like:

    INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
    

    the container is ready. Navigate to the external IP of the instance (visible in the VM Instances page) to access the Chat UI.

🔧 Troubleshooting

If you encounter issues, follow these steps:

  1. Check Logs:

    gcloud compute instances get-serial-port-output $INSTANCE_NAME --project=verifiable-ai-hackathon
  2. Verify API Key(s):
    Ensure that all API Keys are set correctly (e.g. GEMINI_API_KEY).

  3. Check Firewall Settings:
    Confirm that your instance is publicly accessible on port 80 (a CLI check is sketched below).
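
A quick firewall check from the CLI (the rule name below is illustrative; on a default network the built-in default-allow-http rule may already cover the http-server tag):

    # List rules and look for one allowing tcp:80 for the instance's tags
    gcloud compute firewall-rules list --project=verifiable-ai-hackathon | grep -i http

    # If none exists, create one targeting the tags set at deployment
    gcloud compute firewall-rules create allow-flare-ai-http \
      --project=verifiable-ai-hackathon \
      --allow=tcp:80 \
      --target-tags=http-server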

💡 Next Steps

Once your instance is running, access the Chat UI using its public IP address. Here are some example interactions to try:

  • "Create an account for me"
  • "Transfer 10 C2FLR to 0x000000000000000000000000000000000000dEaD"
  • "Show me your remote attestation"

Future Upgrades

  • TLS Communication:
    Implement RA-TLS for encrypted communication.

  • Expanded Flare Ecosystem Support:
    Integrate additional Flare ecosystem applications such as SparkDEX and Kinetic (see the project ideas below).
Example Use Cases & Project Ideas

Below are several detailed project ideas demonstrating how the template can be used to build autonomous AI agents for Flare's DeFi ecosystem:

NLP interface for Flare ecosystem

Implement a natural language command parser that translates user intent into specific protocol actions, e.g.:

"Swap 100 FLR to USDC and deposit as collateral on Kinetic" →
{
  action: ["swap", "deposit"],
  protocols: ["SparkDEX", "Kinetic"],
  amounts: [100],
  tokens: ["FLR", "USDC"]
}
  • Add cross-protocol optimization features:

    • Automatic route splitting across DEXs for better prices
    • Gas optimization by batching multiple transactions
    • Yield optimization by comparing lending rates across protocols
  • Automated token swaps and integrations with Flare ecosystem applications

RAG Knowledge

Connect the DeFAI agent to the retrieval-augmented generation (RAG) pipeline from flare-ai-rag, trained on Flare-specific datasets.

Transaction simulation

Use a transaction simulation framework such as Tenderly Simulator to show users the expected outcome of their transaction.
