works with openai and anthropic #71

Conversation

andyllegrand
Collaborator

Used litellm to implement streaming for OpenAI API calls. I could not get this working with Anthropic, but the original Anthropic code is still present and supported.

Streaming can be a little finicky: to keep the formatted text correct, each time a new chunk arrives the previously printed text is erased and replaced with the updated text. Occasionally a line is not erased properly, but I don't think this is a big issue.

@andyllegrand
Collaborator Author

Also, I'm not sure whether the way I delete lines from the terminal will work properly on Windows; please test this before merging.

erkinalp
Contributor

@erkinalp erkinalp left a comment

Review Summary

The streaming implementation is a valuable addition that improves the user experience. Here are the key findings and recommendations:

Windows Compatibility ⚠️

The terminal line clearing implementation needs cross-platform handling:

import sys

def clear_lines(num_lines):
    for _ in range(num_lines):
        sys.stdout.write('\033[F')  # move cursor up one line
        sys.stdout.write('\033[K')  # clear to end of line
    sys.stdout.flush()

Recommendation: Use colorama for Windows compatibility.
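A minimal sketch of the colorama approach, assuming colorama is installed; the fallback of skipping the clear entirely when it is absent is an illustrative choice, not part of the PR:

```python
import sys

try:
    import colorama  # third-party; translates ANSI escapes to Win32 calls on Windows
    colorama.init()
    ANSI_OK = True
except ImportError:
    ANSI_OK = False  # without colorama, skip clearing rather than print escape garbage

def clear_lines(num_lines):
    """Erase the last num_lines lines of terminal output, cross-platform."""
    if not ANSI_OK:
        return
    for _ in range(num_lines):
        sys.stdout.write('\033[F')  # cursor up one line
        sys.stdout.write('\033[K')  # clear to end of line
    sys.stdout.flush()
```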

Memory Management 🔄

The current implementation accumulates chunks without any bound:

full_res = []
async for chunk in response:
    full_res.append(chunk)

Recommendations:

  • Add chunk limits
  • Implement cleanup on errors
  • Add timeout handling
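The three points above could be combined along these lines; the limit and timeout values are illustrative assumptions, not values from the PR:

```python
import asyncio

MAX_CHUNKS = 10_000     # assumed cap on streamed chunks
CHUNK_TIMEOUT_S = 30.0  # assumed per-chunk timeout in seconds

async def collect_stream(response):
    """Collect streamed chunks with a size bound and a per-chunk timeout."""
    full_res = []
    stream = response.__aiter__()
    while len(full_res) < MAX_CHUNKS:
        try:
            chunk = await asyncio.wait_for(stream.__anext__(), CHUNK_TIMEOUT_S)
        except StopAsyncIteration:
            break  # stream finished normally
        except asyncio.TimeoutError:
            break  # stalled stream: stop and return what we have so far
        full_res.append(chunk)
    return full_res
```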

Response Printer 📝

Suggested improvements:

  • Buffer size limits
  • Token state tracking
  • Error recovery
  • Line length constraints
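A sketch of the buffer-size-limit point, assuming a simple printer class; the class name, method, and limit are all illustrative:

```python
MAX_BUFFER_CHARS = 64_000  # assumed bound on buffered response text

class ResponsePrinter:
    """Minimal sketch of a printer with a bounded buffer; names are illustrative."""

    def __init__(self):
        self.buffer = ''

    def feed(self, text):
        """Append new chunk text, keeping only the most recent characters."""
        self.buffer += text
        if len(self.buffer) > MAX_BUFFER_CHARS:
            self.buffer = self.buffer[-MAX_BUFFER_CHARS:]  # drop oldest text
        return self.buffer
```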

Progress Indication 🔄

Unify progress indication:

  • Add connection feedback
  • Consistent status updates
  • Maintain critical operation spinners

Anthropic Integration 🔧

Current implementation:

res_content = ''
for content_item in response.content:
    if isinstance(content_item, TextBlock):
        res_content += content_item.text

Recommendations:

  • More efficient content handling
  • Investigate streaming options
  • Consider simulated streaming
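The "more efficient content handling" point could look like this single-pass join, which avoids repeated string concatenation; `text_type` stands in for the SDK's `TextBlock` class:

```python
def join_text_blocks(content, text_type):
    """Concatenate .text from content blocks of the given type in one pass.

    A hedged sketch: text_type stands in for Anthropic's TextBlock, and
    ''.join over a generator replaces the += accumulation shown above.
    """
    return ''.join(item.text for item in content if isinstance(item, text_type))
```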

The changes are well-structured and improve the user experience. With the suggested enhancements for cross-platform support, memory management, and consistent progress indication, it will be even more robust.

Approving with suggestions for improvements.


_This review was conducted by Devin and revised by Erkin Alp._
