Commit a4b5a7f: fix(readme): improve instructions (1 parent: d2a2164)

README.md: 1 file changed, 70 additions, 106 deletions

Flare AI Kit template for AI x DeFi (DeFAI).

## 🚀 Key Features

- **Secure AI Execution**
  Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.

- **Built-in Chat UI**
  Interact with your AI via a TEE-served chat interface.

- **Flare Blockchain and Wallet Integration**
  Perform token operations and generate wallets from within the TEE.

- **Gemini 2.0 Support**
  Utilize Google Gemini’s structured query support for advanced AI functionalities.

<img width="500" alt="Artemis" src="https://github.com/user-attachments/assets/921fbfe2-9d52-496c-9b48-9dfc32a86208" />

## 🏗️ Build & Run Instructions

You can deploy Flare AI DeFAI using Docker (recommended) or set up the backend and frontend manually.

### Environment Setup

1. **Prepare the Environment File:**
   Rename `.env.example` to `.env` and update the variables accordingly.

   > **Tip:** Set `SIMULATE_ATTESTATION=true` for local testing.
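
For reference, a minimal `.env` sketch using only variables that appear elsewhere in this README; the values shown (model name, RPC URL) are illustrative assumptions, not required settings:

```bash
# Gemini (AI) settings
GEMINI_API_KEY=your-gemini-api-key   # from https://aistudio.google.com/app/apikey
GEMINI_MODEL=gemini-1.5-flash

# Flare network RPC endpoint (illustrative: Coston2 testnet)
WEB3_PROVIDER_URL=https://coston2-api.flare.network/ext/C/rpc

# TEE settings
SIMULATE_ATTESTATION=true            # set to false for real deployments
TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-defai:main
INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
```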

### Build using Docker (Recommended)

The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.

1. **Build the Docker Image:**

   ```bash
   docker build -t flare-ai-defai .
   ```

2. **Run the Docker Container:**

   ```bash
   docker run -p 80:80 -it --env-file .env flare-ai-defai
   ```

3. **Access the Frontend:**
   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the Chat UI.

### Build manually

Flare AI DeFAI is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:

#### Backend Setup

1. **Install Dependencies:**
   Use [uv](https://docs.astral.sh/uv/getting-started/installation/) to install backend dependencies:

   ```bash
   uv sync --all-extras
   ```

2. **Start the Backend:**
   The backend runs by default on `0.0.0.0:8080`:

   ```bash
   uv run start-backend
   ```

#### Frontend Setup

1. **Install Dependencies:**
   In the `chat-ui/` directory, install the required packages using [npm](https://nodejs.org/en/download):

   ```bash
   cd chat-ui/
   npm install
   ```

2. **Configure the Frontend:**
   Update the backend URL in `chat-ui/src/App.js` for testing:

   ```js
   const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";
   ```

   > **Note:** Remember to change `BACKEND_ROUTE` back to `'api/routes/chat/'` after testing.

3. **Start the Frontend:**

   ```bash
   npm start
   ```

## 🚀 Deploy on TEE

Deploy Flare AI DeFAI on a Confidential Space instance (using AMD SEV or Intel TDX) to benefit from hardware-backed security.

### Prerequisites

- **Google Cloud Platform Account:**
  Access to the `verifiable-ai-hackathon` project is required.

- **Gemini API Key:**
  Ensure your [Gemini API key](https://aistudio.google.com/app/apikey) is linked to the project.

- **gcloud CLI:**
  Install and authenticate the [gcloud CLI](https://cloud.google.com/sdk/docs/install).

### Environment Configuration

1. **Set Environment Variables:**
   Update your `.env` file with:

   ```bash
   TEE_IMAGE_REFERENCE=ghcr.io/flare-foundation/flare-ai-defai:main  # Replace with your repo build image
   INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
   ```

2. **Load Environment Variables:**

   ```bash
   source .env
   ```

   > **Reminder:** Run the above command in every new shell session.

3. **Verify the Setup:**

   ```bash
   echo $TEE_IMAGE_REFERENCE
   # Expected output: ghcr.io/flare-foundation/flare-ai-defai:main
   ```
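
Before running the `gcloud` deployment command, it helps to confirm that every variable it expands is actually set. A minimal pre-flight sketch (hypothetical helper, not part of this repo):

```shell
# Hypothetical pre-flight check: report any unset or empty variables by name.
check_env() {
  missing=0
  for name in "$@"; do
    # Indirect lookup via eval; assumes simple POSIX variable names.
    if [ -z "$(eval "printf '%s' \"\${$name}\"")" ]; then
      echo "Missing required variable: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: check_env TEE_IMAGE_REFERENCE INSTANCE_NAME GEMINI_API_KEY WEB3_PROVIDER_URL
```

If the check prints nothing and returns 0, the deployment command can safely expand its `tee-env-*` values.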

### Deploying to Confidential Space

For deployment on Confidential Space (AMD SEV):

```bash
gcloud compute instances create $INSTANCE_NAME \
  --project=pacific-smile-435514-h4 \
  --zone=us-central1-c \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-GEMINI_MODEL=gemini-1.5-flash,\
tee-env-WEB3_PROVIDER_URL=$WEB3_PROVIDER_URL,\
tee-env-SIMULATE_ATTESTATION=false \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=magureanuhoria@pacific-smile-435514-h4.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/<CONFIDENTIAL_SPACE_IMAGE>,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV
```
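
The `tee-env-*` entries in the command's `--metadata` flag are how your `.env` values reach the container inside Confidential Space. As an illustration only (a hypothetical helper, not part of this repo), the same metadata string could be assembled from the environment:

```shell
# Build a comma-separated tee-env metadata string from named variables.
tee_env_metadata() {
  out=""
  for name in "$@"; do
    # Indirect lookup via eval; assumes simple POSIX variable names.
    val=$(eval "printf '%s' \"\${$name}\"")
    out="${out}tee-env-${name}=${val},"
  done
  printf '%s\n' "${out%,}"   # drop the trailing comma
}

# Example: tee_env_metadata GEMINI_API_KEY GEMINI_MODEL WEB3_PROVIDER_URL SIMULATE_ATTESTATION
```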

#### Post-deployment

After deployment, you should see output similar to:

```plaintext
NAME         ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
defai-team1  us-central1-c  n2d-standard-2               10.128.0.18  34.41.127.200  RUNNING
```

It may take a few minutes for Confidential Space to complete its startup checks. You can monitor progress in the GCP Console by clicking **Serial port 1 (console)**. When you see a message like:

```plaintext
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
```

the container is ready. Navigate to the instance's external IP to access the Chat UI.
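
Rather than refreshing the serial console by hand, readiness can be polled. A sketch (hypothetical helper, not part of this repo; `fetch_logs` stands in for whatever command prints the serial console output, e.g. `gcloud compute instances get-serial-port-output $INSTANCE_NAME --zone=us-central1-c`):

```shell
# Poll a log source until the Uvicorn ready line appears, then print "ready".
wait_for_ready() {
  fetch_cmd=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if $fetch_cmd | grep -q "Uvicorn running on http://0.0.0.0:8080"; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1  # wait between polls
  done
  return 1
}
```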

## 🔜 Next Steps

Once your instance is running, access the Chat UI using its public IP address. Here are some example interactions to try:

- **"Create an account for me"**
- **"Transfer 10 C2FLR to 0x000000000000000000000000000000000000dEaD"**
- **"Show me your remote attestation"**

## Future Upgrades

- **TLS Communication:**
  Implement RA-TLS for encrypted communication.

- **Expanded Flare Ecosystem Support:**
  - **Token Swaps:** via [SparkDEX](http://sparkdex.ai)
  - **Borrow-Lend:** via [Kinetic](https://linktr.ee/kinetic.market)
  - **Trading Strategies:** via [RainDEX](https://www.rainlang.xyz)

## 🔧 Troubleshooting

If you encounter issues, follow these steps:

1. **Check Logs:**

   ```bash
   gcloud compute instances get-serial-port-output $INSTANCE_NAME --zone=us-central1-c
   ```

2. **Verify API Key:**
   Ensure that the `GEMINI_API_KEY` environment variable is set correctly.

3. **Check Firewall Settings:**
   Confirm that your instance is publicly accessible on port `80`.
