[Renderer] Remove InputPreprocessor (#38688)

DarkLight1337 wants to merge 4 commits into vllm-project:main.
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Code Review
This pull request removes the InputPreprocessor and associated logic, transitioning the engine to strictly accept pre-processed EngineInput instead of raw PromptType or EngineCoreRequest. It also removes tokenization_kwargs from several core method signatures across protocol.py, async_llm.py, and input_processor.py. Review feedback identifies that the encode method signatures in both the protocol and AsyncLLM were missed during this refactor and still include the deprecated PromptType. Additionally, an internal call within _add_streaming_input_request needs to be updated to provide a valid EngineInput dictionary to avoid runtime errors.
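To illustrate the contract change the review describes, here is a minimal sketch of an engine entry point that accepts only a pre-processed `EngineInput` dict rather than a raw prompt. The field names and function signature are illustrative assumptions, not vLLM's actual definitions.

```python
from typing import TypedDict


class EngineInput(TypedDict):
    # Hypothetical shape: after the refactor the engine expects token IDs
    # up front instead of tokenizing a raw PromptType itself.
    prompt_token_ids: list[int]


def add_request(request_id: str, engine_input: EngineInput) -> None:
    # The engine no longer runs an InputPreprocessor; it validates that
    # the caller already supplied a well-formed EngineInput dict.
    if "prompt_token_ids" not in engine_input:
        raise ValueError("engine_input must be a pre-processed EngineInput")


# Callers tokenize first, then hand the engine the finished dict:
add_request("req-0", {"prompt_token_ids": [1, 2, 3]})
```

Under this scheme, a call site like `_add_streaming_input_request` must build a complete `EngineInput` dict before calling in, which is the runtime error the review flags.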
Hi @DarkLight1337, the pre-commit checks have failed. Please run:

uv pip install pre-commit>=4.5.1
pre-commit install
pre-commit run --all-files

Then, commit the changes and push to your branch.
Purpose
Now that #28631 has been merged, we can remove support for unprocessed inputs to the AsyncLLM.

Test Plan

Test Result
Essential Elements of an Effective PR Description Checklist

Update supported_models.md and examples for a new model.