Replies: 3 comments 3 replies
-
I'm running into the same issue here. streamText has since been changed to no longer require an await, which breaks the code: the stream is now closed before it can finish. I have a separate discussion open for it as well: #4471. I hope this gets fixed soon.
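For reference, here's a minimal sketch of the post-change pattern, assuming AI SDK 4.x in a Next.js route handler; the route shape and the OpenAI model are illustrative, not from the original post. In 4.x, streamText returns synchronously, and the stream is consumed as the returned Response body is read:

```ts
// Minimal sketch, assuming AI SDK 4.x in a Next.js route handler.
// The provider and model id are illustrative, not from the original post.
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // In 4.x, streamText is no longer awaited -- it returns immediately,
  // and the stream is consumed as the Response body is read.
  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
  });

  return result.toDataStreamResponse();
}
```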
-
Additional details: I updated the server with a few extra awaits sprinkled in. Yes, at this point I am just throwing things at it and seeing what sticks.
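The updated handler itself didn't survive here, so this is only a hedged guess at what "extra awaits" might look like in a buffered Lambda handler (the Bedrock model id and region are placeholders). Awaiting `result.text` forces the stream to be fully consumed before the handler returns:

```ts
// Hedged sketch only -- the poster's actual updated handler isn't shown.
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';

const bedrock = createAmazonBedrock({ region: 'us-east-1' }); // placeholder region

export const handler = async (event: { body: string }) => {
  const { messages } = JSON.parse(event.body);
  const result = streamText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'), // placeholder model id
    messages,
  });
  // `result.text` is a promise that resolves once the stream has finished,
  // so awaiting it keeps the Lambda alive until generation completes.
  const text = await result.text;
  return { statusCode: 200, body: JSON.stringify({ text }) };
};
```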
Response
-
I spent a lot of time trying to resolve this problem. I updated the libraries and a bunch of other things. It turns out there was nothing wrong in the first place: the problem was AWS API Gateway, which apparently doesn't support streaming. I switched streamText to generateText and it worked fine. I knew this before, but the data wasn't loading in the web-based chat app I built, so I thought there was a problem with the structure of the message. Once I found out it was an API Gateway issue, I made a quick fix to my UI and I got data. Woo hoo!

The catch is that the result comes back as a string and requires a bit of processing to JSONify it, grab the response, and display it. The result has newlines embedded, so you have to deal with that; I just split on newline characters, then joined the data back together with a p tag or whatever you want. The trick is that all the messages come back as an array. I assume messages are appended, so the last one will always be the most recent response, preceded by the user question, all the way back to the beginning of the messages array.

Anyways, hopefully this will help someone else. Avoid the API Gateway if you want to stream.

Tom
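A rough sketch of the client-side processing Tom describes; the payload shape (field names like `messages` and `content`) is an assumption about his response body, not confirmed by the post:

```ts
// Hedged sketch of processing a generateText result that came back
// through API Gateway as a string. The payload shape is an assumption.
type ChatMessage = { role: string; content: string };

function extractReplyHtml(rawBody: string): string {
  // JSONify the raw string body.
  const { messages } = JSON.parse(rawBody) as { messages: ChatMessage[] };
  // Messages are appended, so the last entry is the most recent response.
  const latest = messages[messages.length - 1];
  // Deal with the embedded newlines: split, then rejoin with <p> tags.
  return latest.content
    .split('\n')
    .filter((line) => line.trim() !== '')
    .map((line) => `<p>${line}</p>`)
    .join('');
}
```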
-
Hi All,
I am new at this and have been pulling my hair out. I generated some code via v0.dev, which gave me a simple template using the Vercel AI SDK. I got most of everything working, except that the response from my LLM isn't showing on the screen. I have tweaked things and then pared down to as close as I can get to the reference implementation on the Chatbot doc page at https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot.
When I use streamText I get an empty JSON object, {}, and when I logged it from the server, it looks like I got a promise. When I use generateText I get the data I would expect as JSON, but I'm not sure that is what useChat expects.
I had to modify the code ever so slightly. I am using AWS Lambda as the server. The Lambda connects to Bedrock using createAmazonBedrock, and I get the response from the LLM just fine. When I use streamText I return the result with toDataStreamResponse(). When I use generateText, I just return the data.
Can anyone help with what might be going on? What does useChat expect as the response from the server? Is it appropriate for a promise to be returned? Any other ideas?
Server code (AWS Lambda)
As mentioned above, I get the result from the LLM back just fine, so I know that works.
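The code block itself didn't survive here, so this is only a hedged reconstruction of roughly what the generateText variant of such a handler looks like (the model id and region are placeholders):

```ts
// Hedged reconstruction -- the original handler is not shown.
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const bedrock = createAmazonBedrock({ region: 'us-east-1' }); // placeholder region

export const handler = async (event: { body: string }) => {
  const { messages } = JSON.parse(event.body);
  // generateText resolves only after the model finishes, so the handler
  // returns a complete body -- this works behind a buffered API Gateway.
  const { text } = await generateText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'), // placeholder model id
    messages,
  });
  return { statusCode: 200, body: JSON.stringify({ text }) };
};
```

For the streamText variant, the handler would return result.toDataStreamResponse() instead; but as the replies above note, API Gateway buffers responses, so the stream never reaches the browser incrementally.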
Client code (a simple React app)
I get my prompt added to the messages, but the response from the server doesn't get added. As mentioned above, I get an empty JSON '{}' when using streamText, and I get the LLM response as JSON when using generateText. But neither updates the messages.
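For context, a minimal sketch of such a client, assuming the SDK's React bindings and a hypothetical endpoint URL. Note that useChat by default expects the SDK's data-stream protocol (what toDataStreamResponse() emits), not a plain JSON body, which would explain why the generateText response never shows up in messages:

```tsx
// Minimal sketch -- the endpoint URL is hypothetical.
'use client';
import { useChat } from '@ai-sdk/react'; // older versions: 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    // useChat expects the SDK's data-stream protocol (what
    // toDataStreamResponse() emits), not a plain JSON body.
    api: 'https://example.execute-api.us-east-1.amazonaws.com/chat',
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```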
Response with streamText and toDataStreamResponse
Response with generateText
Thoughts? streamText returns almost immediately, while generateText takes time; presumably this is due to the promise. Do I need an await or async somewhere?