This document explains how to use the fake streaming method to debug streaming response issues without making actual API calls to Gemini.
The fake streaming method (streamGeminiFake) simulates the behavior of the real Gemini streaming API, allowing you to:
- Debug streaming response handling without API costs
- Test different streaming scenarios
- Simulate network delays and chunk timing
- Test error conditions and edge cases
- Verify the frontend streaming implementation
```javascript
import aiService from './services/aiService.js';

// Use the fake streaming method directly
const stream = await aiService.streamGeminiFake(
  "Your test message",
  [],                  // conversation history
  "gemini-1.5-flash",  // model
  "test-session",      // session ID
  [],                  // files
  false                // use MCP tools
);

// Process the stream
for await (const chunk of stream) {
  console.log('Chunk:', chunk.text);
}
```

Alternatively, send a POST request to `/api/chat/stream` with `useFakeStream: true`:
```javascript
const response = await fetch('http://localhost:3001/api/chat/stream', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    message: "Test message",
    model: "gemini-1.5-flash",
    useFakeStream: true
  })
});
```

Run the direct method test:

```shell
cd backend
node test-fake-stream.js
```

Run the API endpoint test:

```shell
cd backend
node test-fake-stream-api.js
```

The fake streaming method does the following:
- Splits the response into 2-4 word chunks
- Adds random delays (50-150ms) between chunks
- Maintains the same data structure as real Gemini responses
- Logs each chunk as it's processed
- Shows chunk index and total count
- Displays timing information
- Includes the original message in the response
- Randomly simulates tool calls (30% chance when MCP tools enabled)
- Includes realistic tool call structure
- Helps test tool call handling in the frontend
- Can be extended to simulate various error conditions:
  - Network timeouts
  - Malformed responses
  - Rate limiting
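The chunk-splitting and delay behavior described above can be sketched as follows. This is an illustrative reconstruction, not the actual `streamGeminiFake` source (which lives in `backend/services/aiService.js`):

```javascript
// Sketch of the fake chunking logic: split a response into 2-4 word
// chunks and yield each one after a random 50-150ms delay, using the
// same chunk shape as the real Gemini stream.
async function* fakeChunks(fullText) {
  const words = fullText.split(/\s+/);
  const chunks = [];
  let i = 0;
  while (i < words.length) {
    const size = Math.floor(Math.random() * 3) + 2; // 2-4 words per chunk
    chunks.push(words.slice(i, i + size).join(' ') + ' ');
    i += size;
  }
  for (let index = 0; index < chunks.length; index++) {
    const delay = Math.random() * 100 + 50; // 50-150ms between chunks
    await new Promise((resolve) => setTimeout(resolve, delay));
    yield {
      text: chunks[index],
      chunkIndex: index,
      totalChunks: chunks.length,
      timestamp: new Date().toISOString(),
    };
  }
}
```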
The fake stream method accepts the same parameters as the real streamGemini method:
- `message`: The user's message
- `conversationHistory`: Previous conversation context
- `model`: AI model name (for logging purposes)
- `sessionId`: Session identifier
- `files`: File attachments (simulated)
- `useMCPTools`: Whether to simulate tool calls
Each chunk follows the same format as real Gemini responses:
```javascript
{
  text: "chunk content",
  chunkIndex: 0,
  totalChunks: 15,
  timestamp: "2024-01-15T10:30:00.000Z"
}
```

Tool call chunks include additional properties:
```javascript
{
  text: "",
  toolCall: {
    name: "fake_tool",
    parameters: { message: "This is a simulated tool call" }
  },
  timestamp: "2024-01-15T10:30:00.000Z"
}
```

- Check if chunks are being received in the correct order
- Verify chunk content is being concatenated properly
- Look for missing or duplicate chunks
- Test with different delay configurations
- Verify the frontend handles variable chunk timing
- Check for race conditions in chunk processing
- Test how the frontend handles malformed chunks
- Verify error recovery mechanisms
- Check timeout handling
- Test tool call detection and processing
- Verify tool call parameters are handled correctly
- Check tool call timing in the stream
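Putting the two chunk shapes together, a minimal consumer that separates text chunks from tool-call chunks might look like this (a sketch; `chunkStream` is any async iterable of chunks, such as the one returned by `streamGeminiFake`):

```javascript
// Consume a stream of fake chunks, accumulating text and collecting
// any simulated tool calls separately.
async function consumeStream(chunkStream) {
  let fullText = '';
  const toolCalls = [];
  for await (const chunk of chunkStream) {
    if (chunk.toolCall) {
      // Tool-call chunks carry an empty text field plus a toolCall object.
      toolCalls.push(chunk.toolCall);
    } else {
      fullText += chunk.text;
    }
  }
  return { fullText, toolCalls };
}
```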
You can modify the fake stream behavior by editing the streamGeminiFake method:

```javascript
// Create chunks of 1-2 words each (smaller chunks)
for (let i = 0; i < words.length; i += 2) {
  const chunkWords = words.slice(i, i + 2);
  chunks.push(chunkWords.join(' ') + ' ');
}
```

```javascript
// Faster streaming (10-50ms delays)
const delay = Math.random() * 40 + 10;
```

```javascript
// Slower streaming (200-500ms delays)
const delay = Math.random() * 300 + 200;
```

```javascript
// Simulate random errors
if (Math.random() < 0.1) { // 10% chance of error
  throw new Error('Simulated network error');
}
```

The fake stream works seamlessly with the existing frontend code. No changes are needed to the Chrome extension or React components - they will process the fake stream exactly like a real Gemini stream.
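If you enable the simulated-error snippet above, the consumer side can exercise its recovery path with a plain try/catch around the iteration (a sketch; the actual retry or fallback policy is up to your frontend):

```javascript
// Read a possibly-failing chunk stream, keeping whatever text arrived
// before the simulated error so partial output can still be rendered.
async function readWithRecovery(chunkStream) {
  let text = '';
  let failed = false;
  try {
    for await (const chunk of chunkStream) {
      text += chunk.text;
    }
  } catch (err) {
    // A simulated network error lands here; keep the partial text.
    failed = true;
  }
  return { text, failed };
}
```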
- Use for Development: Always use fake streams during development to avoid API costs
- Test Edge Cases: Use fake streams to test error conditions and edge cases
- Performance Testing: Adjust delays to test frontend performance with different streaming speeds
- Debug Logging: Enable detailed logging to understand streaming behavior
- Clean Up: Remove or disable fake stream calls before production deployment
If a test script reports:

```
❌ Server is not running. Please start the server first:
cd backend && npm start
```
Make sure you're running the test scripts from the backend directory:

```shell
cd backend
node test-fake-stream.js
```

If the default port (3001) is in use, update the API_BASE_URL in the test scripts.
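For example, to point the tests at port 3002 (the variable name `API_BASE_URL` is taken from the test scripts; adjust if your copy differs):

```javascript
// Point the test scripts at a non-default server port.
const API_BASE_URL = 'http://localhost:3002';
```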
- `backend/services/aiService.js` - Main fake stream implementation
- `backend/routes/chat.js` - API endpoint integration
- `backend/test-fake-stream.js` - Direct method test
- `backend/test-fake-stream-api.js` - API endpoint test