
Commit 020deba: Doc update and lint fix
Parent: ace1204

1 file changed, 126 additions, 41 deletions:
docs/modules/ROOT/pages/Components/Chatbot.adoc
@@ -220,64 +220,149 @@ const predefinedSessions = [
 
 === Backend Integration
 
-Here is an example if you already have a backend application taking care of generating the chatbot's responses and you want to integrate it with this `Chatbot` component:
+Here is a complete example of how to set up a locally running demo with backend integration for the `Chatbot` component:
 
-First, we will set a new state `gettingResponse` that will indicate us if we are currently fetching a response from the backend:
+==== Step 1: Configure Vite Proxy
 
-[source, tsx]
+First, update your `vite.config.ts` to add a proxy configuration for your backend API:
+
+[source, typescript]
 ----
-const [gettingResponse, setGettingResponse] = useState(false);
-----
+// vite.config.ts
+import { defineConfig } from 'vite'
+
+export default defineConfig({
+  // ... other config
+  server: {
+    proxy: {
+      '/ask': {
+        target: 'http://localhost:8001',
+        changeOrigin: true,
+        secure: false,
+      },
+    },
+  },
+})
+----
+
+This configuration will proxy any requests to `/ask` to your backend server running on `http://localhost:8001`.
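With the proxy in place, client code only ever talks to the relative `/ask` path. A rough sketch of the request the proxy forwards, assuming the JSON body shape used later in this page (the `buildAskRequest` helper is illustrative, not part of the component):

```typescript
// Hypothetical helper (not part of the Chatbot component): builds the
// fetch options for the /ask endpoint. In dev, the Vite proxy above
// forwards the request to http://localhost:8001/ask.
function buildAskRequest(question: string, sessionId?: string) {
  return {
    method: 'POST' as const,
    headers: { 'Content-Type': 'application/json' },
    // Field names assumed from the backend contract described below.
    body: JSON.stringify({ question, session_id: sessionId }),
  };
}
```

A caller would then simply do `fetch('/ask', buildAskRequest(inputMessage, currentSession?.id))`.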
 
-Then, we will define a new function `fetchResponseFromAPI` that will be responsible for fetching the chatbot's response from the backend based on the user's message:
+==== Step 2: Update handleSubmit Function
+
+Replace the existing `handleSubmit` function in your `Chatbot` component with this implementation that calls your backend API:
 
 [source, tsx]
 ----
-const fetchResponseFromAPI = async () => {
-  setGettingResponse(true);
-  const requestBody = {
-    message: inputMessage,
-    sessionId: currentSession?.id, // Include session context
+const handleSubmit = async (e: { preventDefault: () => void }) => {
+  e.preventDefault();
+  if (!inputMessage.trim() || !currentSession) {
+    return;
+  }
+
+  const date = new Date();
+  const datetime = `${date.toLocaleDateString()} ${date.toLocaleTimeString()}`;
+  const userMessage: ChatMessage = {
+    id: Date.now(),
+    user: 'user',
+    message: inputMessage,
+    datetime: datetime
+  };
+
+  addMessageToCurrentSession(userMessage);
+  setInputMessage('');
+
+  setIsLoading(true);
+
+  try {
+    // Call your backend API through the Vite proxy
+    const response = await fetch('/ask', {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        question: inputMessage,
+        session_id: currentSession?.id,
+      }),
+    });
+
+    if (!response.ok) {
+      throw new Error(`HTTP error! status: ${response.status}`);
+    }
+
+    const data = await response.json();
+
+    const chatbotReply = {
+      response: data.response, // Your API should return { response: string, src: string[] }
+      src: data.src || [], // Sources array from your API
   };
 
-  try {
-    const response = await fetch(`<URI_TO_YOUR_BACKEND_API>`, {
-      method: 'POST',
-      headers: {
-        'accept': 'application/json',
-        'Content-Type': 'application/json',
-      },
-      body: JSON.stringify(requestBody),
-    });
-    const data = await response.json();
-    setGettingResponse(false);
-    return {
-      response: data.content,
-      src: data.sources || [], // Include source references
-    };
-  } catch (error) {
-    console.error("API call failed:", error);
-    return {
-      response: "Sorry, something went wrong.",
-      src: [],
-    };
-  } finally {
-    setGettingResponse(false);
-  }
-};
+    setIsLoading(false);
+    simulateTypingEffect(chatbotReply);
+
+  } catch (error) {
+    console.error("API call failed:", error);
+
+    // Fallback response in case of error
+    const errorReply = {
+      response: 'Sorry, I encountered an error while processing your request. Please try again.',
+      src: [],
+    };
+
+    setIsLoading(false);
+    simulateTypingEffect(errorReply);
+  }
+};
 ----
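The `chatbotReply` construction in the added code assumes the backend always returns both `response` and `src`. A defensive variant could normalize the payload before handing it to `simulateTypingEffect`; the `toChatbotReply` helper below is a sketch, not part of the component:

```typescript
// Hypothetical helper: normalizes an /ask payload into the
// { response, src } shape that simulateTypingEffect expects,
// tolerating a missing or malformed `src` field.
interface ChatbotReply {
  response: string;
  src: string[];
}

function toChatbotReply(data: unknown): ChatbotReply {
  const obj = (data ?? {}) as { response?: unknown; src?: unknown };
  return {
    response: typeof obj.response === 'string'
      ? obj.response
      : 'Sorry, I received an unexpected response from the server.',
    // Keep only string entries so downstream source rendering stays safe.
    src: Array.isArray(obj.src)
      ? obj.src.filter((s): s is string => typeof s === 'string')
      : [],
  };
}
```

In `handleSubmit`, `const chatbotReply = toChatbotReply(data);` would then replace the inline object literal.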
 
-WARNING: Ideally you will want to consider using a framework to manage the states, caching and hooks like `tanstack/react-query` for example as well as adding an authentication and authorization to your backend API calls
+==== Step 3: Enhanced Source Visualization (Optional)
 
-Then all we need to do is to call this function when the user submits a message, retrieve the response, and simulate the typing effect:
-In our `handleSubmit` function:
+If you want to integrate real Neo4j data visualization for the sources, you can update your `RetrievalInformation.tsx` component:
 
 [source, tsx]
 ----
-const chatbotReply = await fetchResponseFromAPI();
-simulateTypingEffect(chatbotReply);
+// RetrievalInformation.tsx
+import { runRAGQuery, setDriver } from '../utils/Driver';
+
+// In your retrieveSources() function:
+const retrieveSources = () => {
+  // Configure your Neo4j connection
+  setDriver('neo4j+s://your-database-url', 'username', 'password');
+
+  // Query Neo4j for actual source data
+  runRAGQuery(props.sources).then((nvlGraph) => {
+    setNodes(nvlGraph.nodes);
+    setRels(nvlGraph.relationships);
+  });
+};
 ----
 
+This replaces the mock data with actual Neo4j query results for source visualization.
+
+==== Backend API Requirements
+
+Your backend API endpoint (`/ask`) should accept a POST request with this structure:
+
+[source, json]
+----
+{
+  "question": "User's question text",
+  "session_id": "optional-session-identifier"
+}
+----
+
+And return a response in this format:
+
+[source, json]
+----
+{
+  "response": "Chatbot's response text",
+  "src": ["source1", "source2", "source3"]
+}
+----
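The request/response contract above can be implemented in any backend stack. As a minimal sketch of the handler logic only (the `handleAsk` name and the canned answer are assumptions, not the real backend running on port 8001):

```typescript
// Shapes taken from the /ask contract documented above.
interface AskRequest {
  question: string;
  session_id?: string;
}

interface AskResponse {
  response: string;
  src: string[];
}

// Hypothetical handler sketch: the real backend would call its RAG
// pipeline here; this version returns a canned echo so the shape of
// the contract is easy to see and test.
function handleAsk(req: AskRequest): AskResponse {
  if (!req.question || !req.question.trim()) {
    return { response: 'Please ask a question.', src: [] };
  }
  return {
    response: `You asked: ${req.question}`,
    src: ['example-source-1'],
  };
}
```

Wiring this into an HTTP server (Express, FastAPI, etc.) is then just a matter of parsing the JSON body into `AskRequest` and serializing the returned `AskResponse`.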
+
+WARNING: For production use, consider implementing proper authentication and authorization for your backend API calls, as well as using state management libraries like `@tanstack/react-query` for better caching and error handling.
+
 === Session-Aware Backend Integration
 
 For more advanced use cases, you can send session context to your backend to maintain conversation history and provide more contextual responses:
