The NPU output length is set to 2048 by default. #966


Open: wants to merge 5 commits into base: master
Conversation

zhaohb (Contributor) commented May 14, 2025

Upgrade to GenAI version 2025.2.0.0.dev20250513 for the latest features and improvements.

@zhaohb zhaohb requested a review from a team as a code owner May 14, 2025 08:36
@zhaohb zhaohb requested review from andrei-kochin and FionaZZ92 May 14, 2025 08:47
@@ -22,6 +22,19 @@ package genai
typedef int (*callback_function)(const char*, void*);

extern int goCallbackBridge(char* input, void* ptr);

static ov_status_e ov_genai_llm_pipeline_create_npu_output_2048(const char* models_path,
gblong1 commented on the diff:

This should not be hardcoded to 2048. It should be updated to take variables max_prompt_len and min_response_len so that a larger range of sizes can be supported.
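A minimal sketch of what a configurable variant could look like on the Go side of the binding (the `NPUConfig` type and `Properties` method here are hypothetical, not the binding's actual API; `MAX_PROMPT_LEN` and `MIN_RESPONSE_LEN` are the property names OpenVINO GenAI's NPU backend reads at load time):

```go
package main

import "fmt"

// NPUConfig is a hypothetical option struct: instead of hardcoding 2048 in
// the C wrapper, the Go layer carries caller-chosen limits.
type NPUConfig struct {
	MaxPromptLen   int
	MinResponseLen int
}

// Properties renders the config as the key/value pairs that would be handed
// to the pipeline at creation time.
func (c NPUConfig) Properties() map[string]string {
	return map[string]string{
		"MAX_PROMPT_LEN":   fmt.Sprint(c.MaxPromptLen),
		"MIN_RESPONSE_LEN": fmt.Sprint(c.MinResponseLen),
	}
}

func main() {
	// A model like DeepSeek could ask for larger limits than 2048.
	cfg := NPUConfig{MaxPromptLen: 4096, MinResponseLen: 2048}
	fmt.Println(cfg.Properties()["MAX_PROMPT_LEN"], cfg.Properties()["MIN_RESPONSE_LEN"]) // prints "4096 2048"
}
```

With this shape, callers that are fine with the current behavior could pass the 2048 default, while larger models pick bigger values.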

zhaohb (Contributor, Author) replied:

Hi @gblong1
To my knowledge, the default output length for the NPU is 1024 and is not dynamically adjustable; see the following link for reference:
genai npu default output len
The default was set to 2048 because, during testing, some responses were observed to be truncated, which is not the intended behavior. This adjustment ensures that responses remain complete and aligned with expectations.

Thank you.

gblong1 replied:

I agree that it should be made larger; however, it should not be hardcoded. It should be configurable, so that it doesn't have to be re-hardcoded to a bigger number later, and so that models which need even more than 2048 (like DeepSeek) will also work well.

zhaohb (Contributor, Author) replied:

Yes, I agree that a dynamically adjustable output length would be ideal. However, the NPU behaves differently from the CPU/GPU: while the CPU/GPU output length can be set at generation time, the NPU requires the output length to be specified when the model is loaded. Once the model is loaded, the output length cannot be modified without reloading the model.
At this stage, making the NPU's output length dynamically adjustable would require reloading the model and involve significant changes. Therefore, I believe setting the output length to 2048 is a practical solution that should cover most common use cases.
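The load-time constraint described above can be sketched in stubbed Go (all names here are hypothetical stand-ins, not the binding's real API): because the limit is consumed only when the pipeline is created, "changing" it necessarily means destroying and recreating the pipeline, i.e. paying the model-load cost again.

```go
package main

import "fmt"

// pipeline stands in for the cgo-backed LLM pipeline; loadCount tracks how
// many times the (expensive) model load has happened.
type pipeline struct {
	maxResponseLen int
}

var loadCount int

// createPipeline is a stub for the real creation call: the length limit is
// consumed here, at load time, and cannot be changed afterwards.
func createPipeline(maxResponseLen int) *pipeline {
	loadCount++
	return &pipeline{maxResponseLen: maxResponseLen}
}

// withResponseLen returns a pipeline honoring the requested limit; on NPU
// this necessarily means a full reload rather than an in-place update.
func withResponseLen(p *pipeline, n int) *pipeline {
	if p.maxResponseLen == n {
		return p
	}
	return createPipeline(n)
}

func main() {
	p := createPipeline(2048)
	p = withResponseLen(p, 4096)             // triggers a second model load
	fmt.Println(p.maxResponseLen, loadCount) // prints "4096 2"
}
```

This is why a fixed-but-generous default is the cheap option here, while true runtime adjustability would need reload plumbing in the binding.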

@zhaohb zhaohb requested a review from gblong1 May 15, 2025 09:13