
gemini (new) #400

Open
wants to merge 2 commits into base: dev
128 changes: 128 additions & 0 deletions src/appmixer/ai/gemini/AIAgent/AIAgent.js
@@ -0,0 +1,128 @@
'use strict';

const { GoogleGenerativeAI } = require('@google/generative-ai');

const lib = require('../lib');

const COLLECT_TOOL_OUTPUTS_POLL_TIMEOUT = 60 * 1000; // 60 seconds
const COLLECT_TOOL_OUTPUTS_POLL_INTERVAL = 1 * 1000; // 1 second

module.exports = {

start: async function(context) {

try {
const tools = lib.getConnectedToolStartComponents(context.componentId, context.flowDescriptor);
const functionDeclarations = lib.getFunctionDeclarations(tools);
return context.stateSet('functionDeclarations', functionDeclarations);
} catch (error) {
throw new context.CancelError(error);
}
},

receive: async function(context) {

const { prompt, model, instructions } = context.messages.in.content;
const threadId = context.messages.in.content.threadId;
const correlationId = context.messages.in.correlationId;

const genAI = new GoogleGenerativeAI(context.auth.apiKey);
const params = {
model,
systemInstruction: instructions || 'You are a helpful assistant. If you detect you cannot use any tool, always reply directly as if no tools were given to you.'
};
const functionDeclarations = await context.stateGet('functionDeclarations');
if (functionDeclarations && functionDeclarations.length) {
params.tools = { functionDeclarations };
params.functionCallingConfig = {
mode: 'AUTO' // Options: 'AUTO', 'ANY', 'NONE'
};
}

const client = genAI.getGenerativeModel(params);

const messages = threadId ? await context.stateGet(`history:${threadId}`) || [] : [];
messages.push({ role: 'user', parts: [{ text: prompt }] });
if (threadId) {
await context.stateSet(`history:${threadId}`, messages);
}

while (true) {

await context.log({ step: 'turn', messages });

const result = await client.generateContent({ contents: messages });

let functionCalls = result.response.functionCalls();
if (functionCalls && functionCalls.length) {

messages.push({ role: 'model', parts: functionCalls.map(call => ({ functionCall: call })) });

await context.log({ step: 'function-calls', message: `AI requested ${functionCalls.length} function(s) in parallel`, functionCalls });

const calls = [];
for (const call of functionCalls) {
const componentId = call.name.split('_')[1];
const callId = `${call.name}:${correlationId}`;
calls.push({ componentId, args: call.args, id: callId, name: call.name });
}
Comment on lines +64 to +68, Contributor (💡 Verification agent, 🧩 Analysis chain):

Verify behavior when splitting call.name by an underscore.
If call.name doesn’t contain _, split('_')[1] will be undefined. Consider verifying call.name before splitting to avoid runtime issues.

Could you run a quick usage search for all possible values of call.name? For example:

Check if the code ensures an underscore is always present. If not, add a fallback or error handling.


🏁 Script executed:

#!/bin/bash
rg -A 3 'call\.name'



Action Required: Enhance Robustness for call.name Splitting

The current implementation directly splits call.name using an underscore (call.name.split('_')[1]), which can result in an undefined componentId if the underscore is absent. The usage search confirms that no additional checks or fallbacks are in place around this pattern.

  • Issue: If call.name does not include an underscore, split('_')[1] will be undefined, potentially leading to runtime errors.
  • Recommendation: Add a safeguard to verify that call.name contains an underscore. For instance, refactor the logic as follows:
    const parts = call.name.split('_');
    const componentId = parts.length > 1 ? parts[1] : /* appropriate fallback or error handling */;
  • Action: Please review and implement the necessary check or fallback to avoid issues when the expected delimiter is missing.
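
The recommended safeguard can be sketched as a small helper. The function name and the `<prefix>_<componentId>` naming convention are assumptions for illustration, not part of the PR:

```javascript
// Hypothetical helper: extract the component ID from a tool call name,
// assuming names follow a "<prefix>_<componentId>" convention.
function parseComponentId(callName) {
    const parts = callName.split('_');
    // Fail loudly instead of propagating an undefined componentId.
    if (parts.length < 2 || !parts[1]) {
        throw new Error(`Unexpected tool call name (missing "_" delimiter): ${callName}`);
    }
    return parts[1];
}
```

Throwing here surfaces a malformed tool name at the call site rather than letting `undefined` leak into the tool-call routing below.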


// Send to all tools. Each ai.ToolStart ignores tool calls that are not intended for it.
await context.sendJson({ toolCalls: calls, prompt }, 'tools');

// Output of each tool is expected to be stored in the service state
// under the ID of the tool call. This is done in the ToolStartOutput component.
// Collect outputs of all the required tool calls.
await context.log({ step: 'collect-tools-output', threadId });
const outputs = [];
const pollStart = Date.now();
while (
(outputs.length < calls.length) &&
(Date.now() - pollStart < COLLECT_TOOL_OUTPUTS_POLL_TIMEOUT)
) {
for (const call of calls) {
const result = await context.flow.stateGet(call.id);
if (result) {
outputs.push({ name: call.name, output: result.output });
await context.flow.stateUnset(call.id);
}
}
// Sleep.
await new Promise((resolve) => setTimeout(resolve, COLLECT_TOOL_OUTPUTS_POLL_INTERVAL));
}
await context.log({ step: 'collected-tools-output', threadId, outputs });

// Submit tool outputs to the assistant.
if (outputs && outputs.length) {
await context.log({ step: 'tool-outputs', tools: calls, outputs });
// Send all function results back to the AI.
messages.push(
...outputs.map(({ name, output }) => ({
role: 'user',
parts: [{ functionResponse: {
name,
response: {
name,
content: output
}
} }]
}))
);

} else {
await context.log({ step: 'no-tool-outputs', tools: toolCalls });
Contributor comment (⚠️ Potential issue):

Fix the undeclared toolCalls reference.
The variable toolCalls is not defined in this scope; only calls is. Since the file runs in strict mode, reaching this line throws a ReferenceError.

Apply this diff to reference the correct variable:

- await context.log({ step: 'no-tool-outputs', tools: toolCalls });
+ await context.log({ step: 'no-tool-outputs', tools: calls });

}
} else {
// Final answer, no more function calls.

const answer = result.response.text();
messages.push({ role: 'model', parts: [{ text: answer }] });

if (threadId) {
await context.stateSet(`history:${threadId}`, messages);
}
return context.sendJson({ answer, prompt }, 'out');
}
}
}
};
75 changes: 75 additions & 0 deletions src/appmixer/ai/gemini/AIAgent/component.json
@@ -0,0 +1,75 @@
{
"name": "appmixer.ai.gemini.AIAgent",
"author": "Appmixer <[email protected]>",
"description": "Build an AI agent responding with contextual answers or performing contextual actions.",
"auth": {
"service": "appmixer:ai:gemini"
},
"inPorts": [{
"name": "in",
"schema": {
"type": "object",
"properties": {
"model": { "type": "string" },
"instructions": { "type": "string", "maxLength": 256000 },
"prompt": { "type": "string" },
"threadId": { "type": "string" }
},
"required": ["prompt"]
},
"inspector": {
"inputs": {
"model": {
"type": "text",
"index": 1,
"label": "Model",
"tooltip": "ID of the model to use.",
"defaultValue": "gemini-2.0-flash",
"source": {
"url": "/component/appmixer/ai/gemini/ListModels?outPort=out",
"data": {
"transform": "./ListModels#toSelectOptions"
}
}
},
"instructions": {
"type": "textarea",
"label": "Instructions",
"index": 2,
"tooltip": "The system instructions that the assistant uses. The maximum length is 256,000 characters. For example 'You are a personal math tutor.'."
},
"prompt": {
"label": "Prompt",
"type": "textarea",
"index": 3
},
"threadId": {
"label": "Thread ID",
"type": "text",
"index": 4,
"tooltip": "By setting a thread ID you can keep the context of the conversation."
}
}
}
}],
"outPorts": [{
"name": "out",
"options": [{
"label": "Answer",
"value": "answer",
"schema": { "type": "string" }
}, {
"label": "Prompt",
"value": "prompt",
"schema": { "type": "string" }
}]
}, {
"name": "tools",
"options": [{
"label": "Prompt",
"value": "prompt",
"schema": { "type": "string" }
}]
}],
"icon": "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjQiIGhlaWdodD0iMjQiIHZpZXdCb3g9IjAgMCAyNCAyNCIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPGcgY2xpcC1wYXRoPSJ1cmwoI2NsaXAwXzUyMjFfMzI0OTApIj4KPHBhdGggZD0iTTEyLjE1OTcgNi44Nzk5VjIuMzk5OUg3LjY3OTY5IiBzdHJva2U9IiMzMDMyMzYiIHN0cm9rZS13aWR0aD0iMi4wMTYiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIgc3Ryb2tlLWxpbmVqb2luPSJyb3VuZCIvPgo8cGF0aCBkPSJNMTcuOTIxMiA2Ljg3OTY0SDUuNDQxMTdDNC4yMDQwNSA2Ljg3OTY0IDMuMjAxMTcgNy44ODI1MiAzLjIwMTE3IDkuMTE5NjRWMTguMDc5NkMzLjIwMTE3IDE5LjMxNjggNC4yMDQwNSAyMC4zMTk2IDUuNDQxMTcgMjAuMzE5NkgxNy45MjEyQzE5LjE1ODMgMjAuMzE5NiAyMC4xNjEyIDE5LjMxNjggMjAuMTYxMiAxOC4wNzk2VjkuMTE5NjRDMjAuMTYxMiA3Ljg4MjUyIDE5LjE1ODMgNi44Nzk2NCAxNy45MjEyIDYuODc5NjRaIiBzdHJva2U9IiMzMDMyMzYiIHN0cm9rZS13aWR0aD0iMi4wMTYiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIgc3Ryb2tlLWxpbmVqb2luPSJyb3VuZCIvPgo8cGF0aCBkPSJNMC45NTk5NjEgMTMuNTk5NkgzLjE5OTk2IiBzdHJva2U9IiMzMDMyMzYiIHN0cm9rZS13aWR0aD0iMi4wMTYiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIgc3Ryb2tlLWxpbmVqb2luPSJyb3VuZCIvPgo8cGF0aCBkPSJNMjEuMTIwMSAxMy41OTk2SDIyLjQwMDEiIHN0cm9rZT0iIzMwMzIzNiIgc3Ryb2tlLXdpZHRoPSIyLjAxNiIgc3Ryb2tlLWxpbmVjYXA9InJvdW5kIiBzdHJva2UtbGluZWpvaW49InJvdW5kIi8+CjxwYXRoIGQ9Ik0xNC43ODQyIDEyLjQ3OTdWMTQuNzE5NyIgc3Ryb2tlPSIjMzAzMjM2IiBzdHJva2Utd2lkdGg9IjIuMDE2IiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiLz4KPHBhdGggZD0iTTguNjM5NjUgMTIuNDc5N1YxNC43MTk3IiBzdHJva2U9IiMzMDMyMzYiIHN0cm9rZS13aWR0aD0iMi4wMTYiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIgc3Ryb2tlLWxpbmVqb2luPSJyb3VuZCIvPgo8L2c+CjxkZWZzPgo8Y2xpcFBhdGggaWQ9ImNsaXAwXzUyMjFfMzI0OTAiPgo8cmVjdCB3aWR0aD0iMjQiIGhlaWdodD0iMjMuMDQiIGZpbGw9IndoaXRlIiB0cmFuc2Zvcm09InRyYW5zbGF0ZSgwIDAuNDc5OTgpIi8+CjwvY2xpcFBhdGg+CjwvZGVmcz4KPC9zdmc+Cg=="
}
15 changes: 15 additions & 0 deletions src/appmixer/ai/gemini/AIAgent/icon.svg
17 changes: 17 additions & 0 deletions src/appmixer/ai/gemini/GenerateEmbeddings/GenerateEmbeddings.js
@@ -0,0 +1,17 @@
'use strict';

const lib = require('../lib');

module.exports = {

receive: async function(context) {

const config = {
apiKey: context.auth.apiKey,
baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/'
};

const out = await lib.generateEmbeddings(context, config, context.messages.in.content);
return context.sendJson(out, 'out');
}
Comment on lines +7 to +16, Contributor (🛠️ Refactor suggestion):

Add error handling for API interactions.

The function doesn't handle errors that might occur when calling the API. This could lead to unhandled promise rejections.

 receive: async function(context) {

     const config = {
         apiKey: context.auth.apiKey,
         baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/'
     };

-    const out = await lib.generateEmbeddings(context, config, context.messages.in.content);
-    return context.sendJson(out, 'out');
+    try {
+        const out = await lib.generateEmbeddings(context, config, context.messages.in.content);
+        return context.sendJson(out, 'out');
+    } catch (error) {
+        context.logger.error('Error generating embeddings:', error);
+        throw new Error(`Failed to generate embeddings: ${error.message}`);
+    }
 }

};
89 changes: 89 additions & 0 deletions src/appmixer/ai/gemini/GenerateEmbeddings/component.json
@@ -0,0 +1,89 @@
{
"name": "appmixer.ai.gemini.GenerateEmbeddings",
"author": "Appmixer <[email protected]>",
"description": "Generate embeddings for text data. The text is split into chunks and embedding is returned for each chunk. <br/>The returned embeddings is an array of the form: <code>[{ \"index\": 0, \"text\": \"chunk1\", \"vector\": [1.1, 1.2, 1.3] }]</code>.<br/>TIP: use the <b>JSONata modifier</b> to convert the embeddings array into custom formats. For convenience, the component also returns the first vector in the embeddings array which is useful when querying vector databases to find relevant chunks.",
"auth": {
"service": "appmixer:ai:gemini"
},
"inPorts": [{
"name": "in",
"schema": {
"type": "object",
"properties": {
"text": { "type": "string", "maxLength": 512000 },
"model": { "type": "string" },
"chunkSize": { "type": "integer" },
"chunkOverlap": { "type": "integer" }
}
},
"inspector": {
"inputs": {
"text": {
"type": "textarea",
"label": "Text",
"tooltip": "Enter the text to generate embeddings for. The text will be split into chunks and embeddings will be generated for each chunk. The maximum length is 512,000 characters. If you need more than 512,000 characters, use the 'Generate Embeddings From File' component.",
"index": 1
},
"model": {
"type": "text",
"index": 2,
"label": "Model",
"tooltip": "ID of the model to use.",
"defaultValue": "text-embedding-004",
"source": {
"url": "/component/appmixer/ai/gemini/ListModels?outPort=out",
"data": {
"transform": "./ListModels#toSelectOptions"
}
}
},
"chunkSize": {
"type": "number",
"label": "Chunk Size",
"defaultValue": 500,
"tooltip": "Maximum size of each chunk for text splitting. The default is 500.",
"index": 3
},
"chunkOverlap": {
"type": "number",
"label": "Chunk Overlap",
"defaultValue": 50,
"tooltip": "Overlap between chunks for text splitting to maintain context. The default is 50.",
"index": 4
}
}
}
}],
"outPorts": [{
"name": "out",
"options": [{
"label": "Embeddings",
"value": "embeddings",
"schema": {
"type": "array",
"items": {
"type": "object",
"properties": {
"index": { "type": "string" },
"vector": { "type": "array", "items": { "type": "number" } },
"text": { "type": "string" }
}
},
"examples": [
[{ "index": 0, "text": "chunk1", "vector": [1.1, 1.2, 1.3] }, { "index": 1, "text": "chunk2", "vector": [2.1, 2.2, 2.3] }]
]
}
}, {
"label": "First Vector",
"value": "firstVector",
"schema": {
"type": "array",
"items": { "type": "number" },
"examples": [
[-0.0120379254, -0.0376950279, -0.0133513855, -0.0365983546, -0.0247007012, 0.0158507861, -0.0143460445, 0.00486809108]
]
}
}]
}],
"icon": "data:image/svg+xml;base64,PHN2ZyBmaWxsPSJub25lIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxNiAxNiI+PHBhdGggZD0iTTE2IDguMDE2QTguNTIyIDguNTIyIDAgMDA4LjAxNiAxNmgtLjAzMkE4LjUyMSA4LjUyMSAwIDAwMCA4LjAxNnYtLjAzMkE4LjUyMSA4LjUyMSAwIDAwNy45ODQgMGguMDMyQTguNTIyIDguNTIyIDAgMDAxNiA3Ljk4NHYuMDMyeiIgZmlsbD0idXJsKCNwcmVmaXhfX3BhaW50MF9yYWRpYWxfOTgwXzIwMTQ3KSIvPjxkZWZzPjxyYWRpYWxHcmFkaWVudCBpZD0icHJlZml4X19wYWludDBfcmFkaWFsXzk4MF8yMDE0NyIgY3g9IjAiIGN5PSIwIiByPSIxIiBncmFkaWVudFVuaXRzPSJ1c2VyU3BhY2VPblVzZSIgZ3JhZGllbnRUcmFuc2Zvcm09Im1hdHJpeCgxNi4xMzI2IDUuNDU1MyAtNDMuNzAwNDUgMTI5LjIzMjIgMS41ODggNi41MDMpIj48c3RvcCBvZmZzZXQ9Ii4wNjciIHN0b3AtY29sb3I9IiM5MTY4QzAiLz48c3RvcCBvZmZzZXQ9Ii4zNDMiIHN0b3AtY29sb3I9IiM1Njg0RDEiLz48c3RvcCBvZmZzZXQ9Ii42NzIiIHN0b3AtY29sb3I9IiMxQkExRTMiLz48L3JhZGlhbEdyYWRpZW50PjwvZGVmcz48L3N2Zz4="
}
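
The reshaping tip in the component description above can be sketched in plain JavaScript. The `chunk-<index>` key format below is an illustrative assumption, not part of the component contract:

```javascript
// Reshape the documented embeddings output [{ index, text, vector }]
// into an id -> vector map, e.g. before upserting into a vector database.
// The "chunk-<index>" key format is purely illustrative.
function toVectorMap(embeddings) {
    const map = {};
    for (const { index, vector } of embeddings) {
        map[`chunk-${index}`] = vector;
    }
    return map;
}
```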
@@ -0,0 +1,18 @@
'use strict';

const lib = require('../lib');

module.exports = {

receive: async function(context) {

const config = {
apiKey: context.auth.apiKey,
baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/'
};

await lib.generateEmbeddingsFromFile(context, config, context.messages.in.content, (out) => {
return context.sendJson(out, 'out');
});
Comment on lines +14 to +16, Contributor (🛠️ Refactor suggestion):

Refactor callback usage to align with async/await pattern.

This component uses a callback pattern while the other Gemini components use async/await. Consider refactoring for consistent error handling and flow control.

-    await lib.generateEmbeddingsFromFile(context, config, context.messages.in.content, (out) => {
-        return context.sendJson(out, 'out');
-    });
+    try {
+        const out = await lib.generateEmbeddingsFromFile(context, config, context.messages.in.content);
+        return context.sendJson(out, 'out');
+    } catch (error) {
+        context.logger.error('Error generating embeddings from file:', error);
+        throw new Error(`Failed to generate embeddings from file: ${error.message}`);
+    }

This assumes lib.generateEmbeddingsFromFile can be modified to return a Promise instead of accepting a callback.
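
If changing lib is not desirable, the callback API could be wrapped instead. A minimal sketch, assuming the callback is invoked exactly once with the result (the wrapper name is hypothetical):

```javascript
// Hypothetical wrapper: adapt the callback-style API to a Promise,
// assuming the callback fires exactly once with the output.
function generateEmbeddingsFromFileAsync(lib, context, config, content) {
    return new Promise((resolve, reject) => {
        try {
            // Resolve with whatever the callback receives.
            lib.generateEmbeddingsFromFile(context, config, content, resolve);
        } catch (error) {
            // Surface synchronous failures as a rejection.
            reject(error);
        }
    });
}
```

Note that if the library invokes the callback once per chunk batch rather than once in total, a Promise (which settles only once) would drop all but the first batch, so the streaming behavior should be verified first.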


}
};