Flare AI DeFAI

Flare AI Kit template for AI x DeFi (DeFAI).

🚀 Key Features

  • Secure AI Execution
    Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.

  • Built-in Chat UI
    Interact with your AI via a TEE-served chat interface.

  • Flare Blockchain and Wallet Integration
    Perform token operations and generate wallets from within the TEE.

  • Gemini 2.0 + over 300 LLMs supported
    Utilize Google Gemini's latest model with structured query support for advanced AI functionalities.


🎯 Getting Started

You can deploy Flare AI DeFAI using Docker (recommended) or set up the backend and frontend manually.

Environment Setup

  1. Prepare the Environment File:
    Rename .env.example to .env and update the variables accordingly.

    Tip: Set SIMULATE_ATTESTATION=true for local testing.

Build using Docker (Recommended)

The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.

  1. Build the Docker Image:

    docker build -t flare-ai-defai .
  2. Run the Docker Container:

    docker run -p 80:80 -it --env-file .env flare-ai-defai
  3. Access the Frontend:
    Open your browser and navigate to http://localhost:80 to interact with the Chat UI.

🛠 Build Manually

Flare AI DeFAI is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:

Backend Setup

  1. Install Dependencies:
    Use uv to install backend dependencies:

    uv sync --all-extras
  2. Start the Backend:
    The backend runs by default on 0.0.0.0:8080:

    uv run start-backend

Frontend Setup

  1. Install Dependencies:
    In the chat-ui/ directory, install the required packages using npm:

    cd chat-ui/
    npm install
  2. Configure the Frontend:
    Update the backend URL in chat-ui/src/App.js for testing:

    const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";

    Note: Remember to change BACKEND_ROUTE back to 'api/routes/chat/' after testing.

  3. Start the Frontend:

    npm start

πŸ“ Repo Structure

src/flare_ai_defai/
├── ai/                       # AI provider implementations
│   ├── base.py               # Base AI provider interface
│   ├── gemini.py             # Google Gemini integration
│   └── openrouter.py         # OpenRouter integration
├── api/                      # API layer
│   ├── middleware/           # Request/response middleware
│   └── routes/               # API endpoint definitions
├── attestation/              # TEE attestation
│   ├── vtpm_attestation.py   # vTPM client
│   └── vtpm_validation.py    # Token validation
├── blockchain/               # Blockchain operations
│   ├── explorer.py           # Chain explorer client
│   └── flare.py              # Flare network provider
├── prompts/                  # AI system prompts & templates
│   ├── library.py            # Prompt module library
│   ├── schemas.py            # Schema definitions
│   ├── service.py            # Prompt service module
│   └── templates.py          # Prompt templates
├── exceptions.py             # Custom errors
├── main.py                   # Primary entrypoint
└── settings.py               # Configuration settings

🚀 Deploy on TEE

Deploy on a Confidential Space using AMD SEV.

Environment Configuration

  1. Set Environment Variables:
    Update your .env file with:

    TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-defai:main  # Replace with your repo build image
    INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
  2. Load Environment Variables:

    source .env

    Reminder: Run the above command in every new shell session or after modifying .env. On Windows, we recommend using Git Bash for access to commands like source.

  3. Verify the Setup:

    echo $TEE_IMAGE_REFERENCE # Expected output: Your repo build image

Deploying to Confidential Space

Run the following command:

gcloud compute instances create $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-west1-b \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-GEMINI_MODEL=$GEMINI_MODEL,\
tee-env-WEB3_PROVIDER_URL=$WEB3_PROVIDER_URL,\
tee-env-SIMULATE_ATTESTATION=false \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=confidential-sa@verifiable-ai-hackathon.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/confidential-space-debug-250100,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV

Post-deployment

  1. After deployment, you should see an output similar to:

    NAME         ZONE        MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
    defai-team1  us-west1-b  n2d-standard-2               10.128.0.18  34.41.127.200  RUNNING
    
  2. It may take a few minutes for Confidential Space to complete startup checks. You can monitor progress via the GCP Console logs. Click on Compute Engine → VM Instances (in the sidebar) → Select your instance → Serial port 1 (console).

    When you see a message like:

    INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
    

    the container is ready. Navigate to the external IP of the instance (visible in the VM Instances page) to access the Chat UI.

🔧 Troubleshooting

If you encounter issues, follow these steps:

  1. Check Logs:

    gcloud compute instances get-serial-port-output $INSTANCE_NAME --project=verifiable-ai-hackathon
  2. Verify API Key(s):
    Ensure that all API Keys are set correctly (e.g. GEMINI_API_KEY).

  3. Check Firewall Settings:
    Confirm that your instance is publicly accessible on port 80.

💡 Next Steps

Once your instance is running, access the Chat UI using its public IP address. Here are some example interactions to try:

  • "Create an account for me"
  • "Transfer 10 C2FLR to 0x000000000000000000000000000000000000dEaD"
  • "Show me your remote attestation"

Future Upgrades

  • TLS Communication:
    Implement RA-TLS for encrypted communication.

  • Expanded Flare Ecosystem Support:

Example Use Cases & Project Ideas

Below are several detailed project ideas demonstrating how the template can be used to build autonomous AI agents for Flare's DeFi ecosystem:

NLP interface for Flare ecosystem

Implement a natural language command parser that translates user intent into specific protocol actions, e.g.:

"Swap 100 FLR to USDC and deposit as collateral on Kinetic" β†’
{
  action: ["swap", "deposit"],
  protocols: ["SparkDEX", "Kinetic"],
  amounts: [100],
  tokens: ["FLR", "USDC"]
}
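As an illustration, this mapping can be prototyped with a small rule-based parser before wiring in an LLM with structured output. Everything below (the verb list, the `KNOWN_PROTOCOLS` table) is a hypothetical stand-in for illustration, not part of the template:

```python
import re

# Hypothetical action->protocol table for illustration only; a real agent
# would resolve protocols from user intent and on-chain metadata.
KNOWN_PROTOCOLS = {"swap": "SparkDEX", "deposit": "Kinetic"}

def parse_command(text: str) -> dict:
    """Map a natural-language command to the structured action shown above."""
    # Detect known action verbs in order.
    actions = [verb for verb in ("swap", "deposit", "transfer")
               if verb in text.lower()]
    # Extract numeric amounts (integers or decimals).
    amounts = [float(m) for m in re.findall(r"\b(\d+(?:\.\d+)?)\b", text)]
    # Treat all-caps words as token symbols (FLR, USDC, ...).
    tokens = re.findall(r"\b([A-Z][A-Z0-9]{1,9})\b", text)
    protocols = [KNOWN_PROTOCOLS[a] for a in actions if a in KNOWN_PROTOCOLS]
    return {"action": actions, "protocols": protocols,
            "amounts": amounts, "tokens": tokens}
```

A production version would hand this step to Gemini's structured-query support and validate the result against a schema before executing anything on-chain.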
  • Add cross-protocol optimization features:

    • Automatic route splitting across DEXs for better prices
    • Gas optimization by batching multiple transactions
    • Yield optimization by comparing lending rates across protocols
  • Automated token swaps and integrations with Flare ecosystem applications:
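The route-splitting idea above can be sketched with a toy constant-product (x·y = k) model. The reserves and the fee-free swap formula are illustrative assumptions, not SparkDEX specifics:

```python
def amm_out(amount_in: float, reserve_in: float, reserve_out: float) -> float:
    """Constant-product (x*y=k) swap output, fees ignored for simplicity."""
    return reserve_out * amount_in / (reserve_in + amount_in)

def best_split(total_in: float, pools: list[tuple[float, float]],
               steps: int = 100) -> tuple[float, float]:
    """Grid-search the fraction of `total_in` routed to pool 0 vs pool 1,
    returning (best_output, best_fraction_to_pool_0)."""
    best = (0.0, 0.0)
    for i in range(steps + 1):
        f = i / steps
        out = (amm_out(f * total_in, *pools[0])
               + amm_out((1 - f) * total_in, *pools[1]))
        if out > best[0]:
            best = (out, f)
    return best
```

With two identical pools, the search lands on an even 50/50 split, which beats routing the whole trade through one pool because price impact grows with trade size.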

RAG Knowledge

Connect the DeFAI agent with the RAG from flare-ai-rag trained on datasets such as:

Transaction simulation

Use a transaction simulation framework such as Tenderly Simulator to show users the expected outcome of their transaction.
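At its simplest, a simulation step is a dry-run `eth_call` against the same RPC endpoint before signing. The helper below only assembles the JSON-RPC request; hosted simulators such as Tenderly layer traces and balance diffs on top of this idea, and their exact API is not shown here:

```python
def build_simulation_payload(tx: dict, block: str = "latest") -> dict:
    """Build a JSON-RPC `eth_call` request that executes `tx` against the
    given block without broadcasting it, so the user can preview the result."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [tx, block],
    }

# Example: preview a plain value transfer before asking the user to confirm.
payload = build_simulation_payload({
    "from": "0x0000000000000000000000000000000000000001",  # placeholder sender
    "to": "0x000000000000000000000000000000000000dEaD",
    "value": hex(10 * 10**18),                             # 10 C2FLR in wei
})
```

POSTing this payload to the configured WEB3_PROVIDER_URL returns the call's return data, or a revert reason if the transaction would fail.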
