
Seeking Workflow Modification for Real-Time Asynchronous Output in Llama Deploy Example #366

Open
@nmhjklnm

Description

In the example provided at Llama Deploy Python Fullstack, the final output of the workflow is non-streaming: the result is only returned once all tokens have been generated.
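If I understand correctly, LlamaIndex workflows can already emit intermediate events while a step is running, via `ctx.write_event_to_stream`, which a caller can consume with `handler.stream_events()`. A minimal sketch of what I mean (the `TokenEvent` class and the hard-coded token list are just illustrative stand-ins for a real LLM stream):

```python
import asyncio

from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class TokenEvent(Event):
    """Illustrative event type carrying a single generated token."""
    token: str


class StreamingWorkflow(Workflow):
    @step
    async def generate(self, ctx: Context, ev: StartEvent) -> StopEvent:
        full_text = ""
        # Stand-in for a real LLM token stream (e.g. an astream call).
        for token in ["Hello", ", ", "world", "!"]:
            ctx.write_event_to_stream(TokenEvent(token=token))
            full_text += token
        return StopEvent(result=full_text)


async def main() -> None:
    handler = StreamingWorkflow(timeout=60).run()
    # Stream events arrive as they are written, before the run finishes.
    async for ev in handler.stream_events():
        if isinstance(ev, TokenEvent):
            print(ev.token, end="", flush=True)
    result = await handler  # the final StopEvent result
    print(f"\nfinal result: {result}")


asyncio.run(main())
```

But as far as I can tell, the deployed example only exposes the final result, not this event stream.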

As a result, I had to create my own FastAPI service.
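For context, my workaround looks roughly like this: a thin FastAPI endpoint that relays the workflow's event stream to the client via `StreamingResponse` (a sketch reusing the illustrative `StreamingWorkflow`/`TokenEvent` from above; the `/chat` route and plain-text media type are arbitrary choices):

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

# StreamingWorkflow and TokenEvent as defined in the sketch above.
app = FastAPI()
workflow = StreamingWorkflow(timeout=60)


@app.get("/chat")
async def chat(query: str) -> StreamingResponse:
    async def token_stream():
        # run() keyword arguments are attached to the StartEvent.
        handler = workflow.run(query=query)
        async for ev in handler.stream_events():
            if isinstance(ev, TokenEvent):
                yield ev.token  # flushed to the client as soon as it arrives
        await handler  # wait for completion so errors are not swallowed

    return StreamingResponse(token_stream(), media_type="text/plain")
```

This works, but it means running and maintaining a separate service alongside llama-deploy.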

Question:

Is there a recommended way to modify the workflow, or llama-deploy itself, to get true streaming (token-by-token) output directly, without having to write my own FastAPI service?

Objective:

My main goal is to deliver the results of the workflow to users as quickly as possible.

Metadata

Labels: question (Further information is requested)
