
Remove inference batching #111

Merged · 23 commits merged into main on Mar 17, 2025
Conversation

@michaelharrisonmai (Collaborator) commented Mar 15, 2025

Removed batching, reworked the dataloader / ThreadPoolExecutor / writer logic in inference.py, and refactored the _run methods.

The current flow is: when run is called, load partial results if they exist, then kick off a ThreadPoolExecutor that calls _run_single for each element of the dataloader and appends each result to the output file (note that appending rather than rewriting the file is new -- let me know if that is not preferred for any reason). _run_single combines the previous _run and _run_par: it checks for a previous result, applies rate limiting, and finally calls the model's generate().
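
For reference, here is a minimal sketch of that flow. It is only an illustration under assumptions: `run`, `_run_single`, the dataloader, and the model's `generate()` come from the description above, while `output_path`, the JSONL append format, the `id`/`prompt` fields, and the optional `rate_limiter` helper are made up for the example and do not reflect the actual inference.py code.

```python
import json
from concurrent.futures import ThreadPoolExecutor


class InferenceRunner:
    def __init__(self, model, dataloader, output_path, rate_limiter=None, max_workers=8):
        self.model = model
        self.dataloader = dataloader      # yields individual examples (no batching)
        self.output_path = output_path    # results are appended here as JSONL
        self.rate_limiter = rate_limiter  # assumed helper exposing acquire()
        self.max_workers = max_workers
        self._previous = {}               # example id -> result from a partial run

    def run(self):
        # Load partial results if the output file already exists.
        try:
            with open(self.output_path) as f:
                for line in f:
                    rec = json.loads(line)
                    self._previous[rec["id"]] = rec
        except FileNotFoundError:
            pass

        # Kick off the ThreadPoolExecutor: one _run_single call per dataloader
        # element, appending each new result to the file as it completes.
        with ThreadPoolExecutor(max_workers=self.max_workers) as pool:
            futures = [pool.submit(self._run_single, ex) for ex in self.dataloader]
            with open(self.output_path, "a") as f:
                for fut in futures:
                    result = fut.result()
                    if result is not None:
                        f.write(json.dumps(result) + "\n")

    def _run_single(self, example):
        # Combination of the old _run and _run_par: check for a previous result,
        # apply rate limiting, then call the model's generate().
        if example["id"] in self._previous:
            return None  # already completed in a previous partial run; skip
        if self.rate_limiter is not None:
            self.rate_limiter.acquire()
        output = self.model.generate(example["prompt"])
        return {"id": example["id"], "output": output}
```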

@michaelharrisonmai merged commit 04cc86c into main on Mar 17, 2025
5 of 6 checks passed
