Quince Finance

Welcome to Quince Finance, a webapp built as a solo contribution for the Flare x Google Verifiable AI Hackathon, Track 3: AI x DeFi. It makes DeFi on the Flare ecosystem so easy, even your tech-shy uncle could use it! Quince combines Google Cloud's Confidential Computing with Flare's enshrined oracles for a secure, user-friendly experience. Check it out at quincefinance.xyz.

What It Does

Quince Finance lets users dive into Flare's DeFi world with minimal fuss. Log in with your Google ID (verified by both frontend and backend), and your wallet is securely stored in a Trusted Execution Environment (TEE). From there, a friendly AI-powered chatbot takes the wheel: use plain language to swap tokens, stake assets, or check balances. No crypto PhD required!

Key Features

  • Simple Sign-In: Connect with Google for fast, secure access.
  • AI Chatbot: Tell it what you want in everyday words, e.g. "Swap 10 FLR for SGB."
  • Voice Commands & Read-Aloud: Hands-free DeFi with accessibility in mind.
  • Asset Dashboard: See all your balances in one clean pane.
  • Top-Notch Security: TLS-enabled, dual Google ID verification, and TEE-protected wallets.

Why It's Cool

Quince is built for everyone. Its intuitive design flattens the DeFi learning curve, while its security (thanks to TEE and TLS) keeps your funds safe. Perfect for newbies and pros alike!

Getting Started (user)

Stay tuned for setup instructions as this hackathon project evolves. For now, visit quincefinance.xyz to see it in action!

🎯 Getting Started (dev)

You can deploy Flare AI DeFAI using Docker (recommended) or set up the backend and frontend manually.

Environment Setup

  1. Prepare the Environment File:
    Rename .env.example to .env and update the variables accordingly.

    Tip: Set SIMULATE_ATTESTATION=true for local testing.
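
    For reference, a minimal .env for local development might look like the sketch below. The values are placeholders; the variable names are the ones referenced elsewhere in this README (TEE_IMAGE_REFERENCE and INSTANCE_NAME are only needed for the TEE deployment step):

    GEMINI_API_KEY=<your-gemini-api-key>
    GEMINI_MODEL=<gemini-model-id>
    WEB3_PROVIDER_URL=<your-flare-rpc-url>
    SIMULATE_ATTESTATION=true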

Build using Docker (Recommended)

The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.

  1. Build the Docker Image:

    docker build -t flare-ai-defai .
  2. Run the Docker Container:

    docker run -p 80:80 -it --env-file .env flare-ai-defai
  3. Access the Frontend:
    Open your browser and navigate to http://localhost:80 to interact with the Chat UI.

🛠️ Build Manually

Flare AI DeFAI is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:

Backend Setup

  1. Install Dependencies:
    Use uv to install backend dependencies:

    uv sync --all-extras
  2. Start the Backend:
    The backend runs by default on 0.0.0.0:8080:

    uv run start-backend
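
With the backend up, you can smoke-test the chat route that the frontend calls (see BACKEND_ROUTE in the frontend setup below). The request body here is an assumption; check the route definitions in src/flare_ai_defai/api/routes/ for the actual schema:

    import requests

    # Assumes the backend from `uv run start-backend` is listening on port 8080.
    # The "message" field is a guess at the payload shape -- adjust to match
    # the chat route's schema in src/flare_ai_defai/api/routes/.
    resp = requests.post(
        "http://localhost:8080/api/routes/chat/",
        json={"message": "Show me your remote attestation"},
        timeout=30,
    )
    print(resp.status_code, resp.text)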

Frontend Setup

  1. Install Dependencies:
    In the chat-ui/ directory, install the required packages using npm:

    cd chat-ui/
    npm install
  2. Configure the Frontend:
    Update the backend URL in chat-ui/src/App.js for testing:

    const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";

    Note: Remember to change BACKEND_ROUTE back to 'api/routes/chat/' after testing.

  3. Start the Frontend:

    npm start

📁 Repo Structure

src/flare_ai_defai/
├── ai/                       # AI Provider implementations
│   ├── base.py               # Base AI provider interface
│   ├── gemini.py             # Google Gemini integration
│   └── openrouter.py         # OpenRouter integration
├── api/                      # API layer
│   ├── middleware/           # Request/response middleware
│   └── routes/               # API endpoint definitions
├── attestation/              # TEE attestation
│   ├── vtpm_attestation.py   # vTPM client
│   └── vtpm_validation.py    # Token validation
├── blockchain/               # Blockchain operations
│   ├── explorer.py           # Chain explorer client
│   └── flare.py              # Flare network provider
├── prompts/                  # AI system prompts & templates
│   ├── library.py            # Prompt module library
│   ├── schemas.py            # Schema definitions
│   ├── service.py            # Prompt service module
│   └── templates.py          # Prompt templates
├── exceptions.py             # Custom errors
├── main.py                   # Primary entrypoint
└── settings.py               # Configuration settings
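
The ai/ package uses a pluggable provider pattern: base.py defines the interface that gemini.py and openrouter.py implement, so the chat backend can switch models via configuration. The real interface lives in src/flare_ai_defai/ai/base.py; the sketch below only illustrates the pattern, and its method names are hypothetical:

    from abc import ABC, abstractmethod


    class BaseAIProvider(ABC):
        """Illustrative provider interface; see ai/base.py for the real one."""

        @abstractmethod
        def generate(self, prompt: str) -> str:
            """Return the model's completion for the given prompt."""


    class GeminiProvider(BaseAIProvider):
        """Hypothetical outline of a gemini.py-style implementation."""

        def __init__(self, api_key: str, model: str) -> None:
            self.api_key = api_key  # e.g. from the GEMINI_API_KEY setting
            self.model = model      # e.g. from the GEMINI_MODEL setting

        def generate(self, prompt: str) -> str:
            # The real provider calls the Gemini API here; stubbed in this sketch.
            raise NotImplementedError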

🚀 Deploy on TEE

Deploy on a Confidential Space using AMD SEV.

Prerequisites

Environment Configuration

  1. Set Environment Variables:
    Update your .env file with:

    TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-defai:main  # Replace with your repo build image
    INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
  2. Load Environment Variables:

    source .env

    Reminder: Run the above command in every new shell session or after modifying .env. On Windows, we recommend using Git Bash so that commands like source are available.

  3. Verify the Setup:

    echo $TEE_IMAGE_REFERENCE # Expected output: Your repo build image

Deploying to Confidential Space

Run the following command:

gcloud compute instances create $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-central1-c \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-GEMINI_MODEL=$GEMINI_MODEL,\
tee-env-WEB3_PROVIDER_URL=$WEB3_PROVIDER_URL,\
tee-env-SIMULATE_ATTESTATION=false \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=confidential-sa@verifiable-ai-hackathon.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/confidential-space-debug-250100,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV
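
Confidential Space forwards each tee-env-* metadata key to the workload as an ordinary environment variable, so inside the container the backend reads its settings exactly as it does locally:

    import os

    # tee-env-GEMINI_API_KEY (set via --metadata above) surfaces inside the
    # container as the plain GEMINI_API_KEY environment variable.
    gemini_api_key = os.environ["GEMINI_API_KEY"]
    simulate = os.environ.get("SIMULATE_ATTESTATION", "false").lower() == "true"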

Post-deployment

  1. After deployment, you should see an output similar to:

    NAME          ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
    defai-team1   us-central1-c  n2d-standard-2               10.128.0.18  34.41.127.200  RUNNING
    
  2. It may take a few minutes for Confidential Space to complete startup checks. You can monitor progress via the GCP Console logs. Click on Compute Engine → VM Instances (in the sidebar) → Select your instance → Serial port 1 (console).

    When you see a message like:

    INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
    

    the container is ready. Navigate to the external IP of the instance (visible in the VM Instances page) to access the Chat UI.

🔧 Troubleshooting

If you encounter issues, follow these steps:

  1. Check Logs:

    gcloud compute instances get-serial-port-output $INSTANCE_NAME --project=verifiable-ai-hackathon
  2. Verify API Key(s):
    Ensure that all API keys are set correctly (e.g. GEMINI_API_KEY).

  3. Check Firewall Settings:
    Confirm that your instance is publicly accessible on port 80.

💡 Next Steps

Once your instance is running, access the Chat UI using its public IP address. Here are some example interactions to try:

  • "Create an account for me"
  • "Transfer 10 C2FLR to 0x000000000000000000000000000000000000dEaD"
  • "Show me your remote attestation"

Future Upgrades

  • TLS Communication:
    Implement RA-TLS for encrypted communication.

  • Expanded Flare Ecosystem Support:
    Integrate with additional applications across the Flare ecosystem (see the project ideas below).

Example Use Cases & Project Ideas

Below are several detailed project ideas demonstrating how the template can be used to build autonomous AI agents for Flare's DeFi ecosystem:

NLP interface for Flare ecosystem

Implement a natural language command parser that translates user intent into specific protocol actions (a toy sketch follows the list below), e.g.:

"Swap 100 FLR to USDC and deposit as collateral on Kinetic" β†’
{
  action: ["swap", "deposit"],
  protocols: ["SparkDEX", "Kinetic"],
  amounts: [100],
  tokens: ["FLR", "USDC"]
}
  • Add cross-protocol optimization features:

    • Automatic route splitting across DEXs for better prices
    • Gas optimization by batching multiple transactions
    • Yield optimization by comparing lending rates across protocols
  • Automated token swaps and integrations with Flare ecosystem applications
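
As a starting point, the sketch below implements the parse step as a toy rule-based extractor that reproduces the example output above. It is deliberately simplistic, and the action-to-protocol mapping is a hypothetical placeholder; a production agent would instead have the LLM emit this structure directly via a structured-output prompt:

    import re
    from dataclasses import dataclass, field


    @dataclass
    class Intent:
        action: list[str] = field(default_factory=list)
        protocols: list[str] = field(default_factory=list)
        amounts: list[float] = field(default_factory=list)
        tokens: list[str] = field(default_factory=list)


    # Hypothetical action -> protocol mapping, for illustration only.
    DEFAULT_PROTOCOL = {"swap": "SparkDEX", "deposit": "Kinetic"}


    def parse_command(text: str) -> Intent:
        intent = Intent()
        for verb in ("swap", "deposit", "stake", "transfer"):
            if verb in text.lower():
                intent.action.append(verb)
                if verb in DEFAULT_PROTOCOL:
                    intent.protocols.append(DEFAULT_PROTOCOL[verb])
        intent.amounts = [float(m) for m in re.findall(r"\d+(?:\.\d+)?", text)]
        intent.tokens = re.findall(r"\b[A-Z]{2,6}\b", text)
        return intent


    print(parse_command("Swap 100 FLR to USDC and deposit as collateral on Kinetic"))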

RAG Knowledge

Connect the DeFAI agent with the RAG from flare-ai-rag, trained on relevant Flare ecosystem datasets.
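
One simple integration point is prompt assembly: retrieve the top-k passages from the RAG index and fold them into the prompt before the provider call. A minimal sketch, where the retriever itself is a stand-in for flare-ai-rag (whose actual API may differ):

    def build_rag_prompt(question: str, passages: list[str]) -> str:
        """Fold retrieved context into the prompt sent to the AI provider."""
        context = "\n\n".join(f"[doc {i + 1}] {p}" for i, p in enumerate(passages))
        return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

    # `retrieve` is hypothetical -- substitute the flare-ai-rag retriever:
    # prompt = build_rag_prompt("How do FTSO feeds work?", retrieve("FTSO", k=3))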

Transaction simulation

Use a transaction simulation framework such as Tenderly Simulator to show users the expected outcome of their transaction.
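
Tenderly's simulator is a hosted API; as a lighter-weight stand-in, the same pre-flight check can be approximated locally with web3.py by dry-running the transaction before asking the user to sign. A sketch, assuming WEB3_PROVIDER_URL points at a Flare RPC node and the sender holds sufficient balance:

    import os

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider(os.environ["WEB3_PROVIDER_URL"]))

    tx = {
        "from": "0x0000000000000000000000000000000000000000",  # placeholder: the user's TEE-held wallet
        "to": "0x000000000000000000000000000000000000dEaD",
        "value": w3.to_wei(10, "ether"),
    }

    # eth_call executes against the latest state without broadcasting; a revert
    # raises an exception whose message can be surfaced to the user in the chat.
    try:
        w3.eth.call(tx)
        print("Simulation OK, estimated gas:", w3.eth.estimate_gas(tx))
    except Exception as exc:
        print("Transaction would fail:", exc)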
