Remove O(prompt_len) prompt copies #35
Merged
HaibaraAiChan merged 23 commits into ai-decentralized:main from JiuChen0:main on Nov 21, 2025
Conversation
- Add --batch_size CLI argument for parallel sequence processing
- Add conditional CUDA stream creation for CPU-only mode
- Add device-aware ExecutionEnv and Policy resource distribution
- Fix MPS compatibility on macOS
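The "conditional CUDA stream creation" bullet can be sketched as follows. This is a minimal, dependency-free stand-in for the pattern: `FakeCudaStream` and `maybe_create_stream` are hypothetical names (real code would use `torch.cuda.Stream()` guarded by `torch.cuda.is_available()`), shown here only to illustrate how CPU-only and MPS paths skip stream creation entirely.

```python
# Hypothetical stand-in for torch.cuda.Stream so the sketch runs without torch.
class FakeCudaStream:
    pass


def maybe_create_stream(device: str, cuda_available: bool):
    """Create a stream only for CUDA devices.

    CPU and MPS paths get None, and callers fall back to the default
    (synchronous) execution path instead of issuing stream operations.
    """
    if device.startswith("cuda") and cuda_available:
        return FakeCudaStream()  # real code: torch.cuda.Stream()
    return None
```

The point of the guard is that constructing a CUDA stream on a machine without CUDA raises at import/run time, so the stream object must be optional throughout the call sites.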
HaibaraAiChan approved these changes on Nov 21, 2025
JiuChen0 added a commit to JiuChen0/BloomBee that referenced this pull request on Mar 22, 2026
* Add batch inference support and CPU compatibility
  - Add --batch_size CLI argument for parallel sequence processing
  - Add conditional CUDA stream creation for CPU-only mode
  - Add device-aware ExecutionEnv and Policy resource distribution
  - Fix MPS compatibility on macOS
* Fix hardcoded model loading and support batch size
* Resolve dependency conflicts
* docs: refine README setup and usage sections for clarity and correctness
* Add batch-size-related updates
* Delete debug output
* Delete .id files
* Fix max token size problem
* Add prompt
* Reduce /dev/shm peak usage during warmup/prefill stage
* Delete dead code
* chore: comment out unused compare_tensors function
* Delete bitsandbytes quant
* Support FlexGen 4-bit quant
* Clean debug output for server id
* Add effective throughput
* Clean up unnecessary files
* Fix the error of start compute time
* Use rolling buffer to avoid O(prompt_len) copy on each forward
* Fix the debug I/O issue

Co-authored-by: Danny Willow Liu <dannywillowliu@uchicago.edu>
Co-authored-by: root <root@investorairig80.maas>
Remove redundant debug output

`prepare_inputs_for_generation` prints whenever `inputs_embeds` is used, polluting stdout and adding sync overhead. This PR removes the print or switches it to a logger.

Eliminate O(prompt_len) prompt copies per step

`OptimizedLlamaDecoderLayer.forward` rebuilds `output_ids` and copies the full prompt on every forward call. This PR switches to a rolling buffer that only appends the new token, avoiding unnecessary host→device copies.
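The rolling-buffer idea above can be sketched in a few lines. This is a hypothetical, dependency-free illustration (`RollingOutputBuffer` and its methods are invented names, and a Python list stands in for the real token tensor): allocate space for the prompt plus all future tokens once, then each decode step writes a single slot instead of rebuilding and copying the entire `output_ids` prefix.

```python
class RollingOutputBuffer:
    """Sketch of a rolling output buffer for token generation.

    Preallocates room for prompt + max_new_tokens once, so each generation
    step is an O(1) in-place append rather than an O(prompt_len) rebuild
    and copy of the full output sequence.
    """

    def __init__(self, prompt_ids, max_new_tokens):
        # One allocation up front; zeros pad the not-yet-generated tail.
        self._buf = list(prompt_ids) + [0] * max_new_tokens
        self._len = len(prompt_ids)

    def append(self, token_id):
        # O(1) per step: write the new token into the next free slot.
        self._buf[self._len] = token_id
        self._len += 1

    def ids(self):
        # Valid prefix only (a list copy here; a tensor slice in practice,
        # which avoids the host->device copy entirely).
        return self._buf[: self._len]
```

For a prompt of N tokens generating T steps, the old rebuild-per-step approach moves O(N·T) elements overall, while the rolling buffer moves O(N + T).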