The "AI Network Blockchain Router for Model API" is an intermediary between the AI Network Blockchain and the Model API. It processes requests and routes events originating from the AI Network blockchain, and forwards them to the models.
node >= 18
Clone this repository.
git clone [email protected]:ainize-team/ainetwork-blockchain-router-for-model-api.git
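Then move into the project directory and install dependencies. This assumes the standard npm workflow; adjust if you use another package manager.

cd ainetwork-blockchain-router-for-model-api
npm install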
BLOCKCHAIN_NETWORK= // AI Network network. mainnet = '1', testnet = '0'.
PRIVATE_KEY= // AI Network private key used to send transactions for your AI service.
PORT= // Port number for this server. (optional, default: 3000)
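For example, a minimal testnet configuration might look like the snippet below, assuming the variables are loaded from a .env file (a common setup, though not confirmed here). The private key value is a placeholder you must replace:

BLOCKCHAIN_NETWORK=0
PRIVATE_KEY=<your-ainetwork-private-key>
PORT=3000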
A trigger function in AI Network automatically sends POST requests to a specified URL whenever a specific value in the blockchain database changes. For more details, watch this! 👉 What is AI Network Trigger?
Requests from trigger functions carry a complex payload, for example:
{
  fid: 'function-id',
  function: {
    function_type: 'REST',
    function_url: 'https://function_url.ainetwork.ai/',
    function_id: 'function-id'
  },
  valuePath: [
    'apps',
    'app_name',
    'sub_path',
    '0xaddress...', // Path variable value, matched against functionPath.
    ...
  ],
  functionPath: [
    'apps',
    'app_name',
    'sub_path',
    '$address', // Path variable name. Starts with '$'.
    ...
  ],
  value: <ANY_DATA_TO_WRITE_ON_BLOCKCHAIN>,
  ...
  params: {
    address: '0xaddress...' // Path variable.
  },
  ...
  transaction: {
    tx_body: { ... },
    signature: '0xsignature...',
    ...
  },
  ...
}
The AI Network Blockchain Router for Model API simplifies handling these requests with built-in utilities.
Use the blockchainTriggerFilter middleware to verify that requests originate from a trigger function.
import Middleware from './middlewares/middleware';

const middleware = new Middleware();

app.post(
  ...
  middleware.blockchainTriggerFilter,
  ...
)
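Putting it together, a complete route registration might look like the sketch below. The '/trigger' path and the surrounding server setup are illustrative assumptions, not code from this repository:

import express from 'express';
import Middleware from './middlewares/middleware';
import { inference } from './inference';

const app = express();
app.use(express.json()); // Trigger requests arrive as JSON bodies.

const middleware = new Middleware();

// Hypothetical '/trigger' endpoint: requests that do not come from a
// blockchain trigger function are rejected by the filter and never
// reach the inference handler.
app.post('/trigger', middleware.blockchainTriggerFilter, async (req, res) => {
  const result = await inference(req);
  res.json(result);
});

app.listen(Number(process.env.PORT ?? 3000));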
Easily extract key data using helper functions.
import { extractDataFromModelRequest } from './utils/extractor';

const {
  appName,
  requesterAddress,
  requestData,
  requestKey
} = extractDataFromModelRequest(req);
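Judging from the field names and the trigger payload shown earlier, the extracted values plausibly map as follows. This is an inference from the example, not a documented contract:

// Hypothetical mapping for the sample trigger payload above:
// appName          -> 'app_name'     (from the path under 'apps')
// requesterAddress -> '0xaddress...' (the '$address' path variable in params)
// requestData      -> value          (the data written to the blockchain)
// requestKey       -> a key identifying this particular request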
To integrate your AI service, modify src/inference.ts. This file is where you process the incoming data, format it appropriately, and send requests to your inference service.
import { Request } from 'express';
import { extractDataFromModelRequest } from './utils/extractor';

export const inference = async (req: Request): Promise<any> => {
  const {
    appName,
    requesterAddress,
    requestData,
    requestKey
  } = extractDataFromModelRequest(req);

  ////// Insert your AI Service's Inference Code. //////

  // Return the inference result here.
}
This is a simple example provided to help you understand how to connect an AI service. Modify it according to your specific requirements.
import { Request } from 'express';
import { extractDataFromModelRequest } from './utils/extractor';

export const inference = async (req: Request): Promise<any> => {
  const {
    appName,
    requesterAddress,
    requestData,
    requestKey
  } = extractDataFromModelRequest(req);

  ////// Insert your AI Service's Inference Code. //////
  const inferenceUrl = process.env.INFERENCE_URL as string; // e.g. https://llama_vision.ainize.xyz/chat/completions (sample URL, not working)
  const modelName = process.env.MODEL_NAME as string; // e.g. meta-llama/Llama-3.2-11B-Vision-Instruct
  const apiKey = process.env.API_KEY as string;
  const prompt = requestData.prompt;

  const response = await fetch(inferenceUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    },
    body: JSON.stringify({
      model: modelName,
      messages: [
        {
          role: 'user',
          content: prompt
        }
      ]
    })
  });

  if (!response.ok) {
    throw new Error(`Inference request failed: ${JSON.stringify(await response.json())}`);
  }

  const data = await response.json();

  // Return the generated message as the result.
  return data.choices[0].message.content;
}
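Note that this example appears to target an OpenAI-compatible chat completions endpoint, so it reads INFERENCE_URL, MODEL_NAME, and API_KEY from the environment. If you base your integration on it, add those three variables to your configuration alongside the ones listed above.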