ESM2 changes to work with vLLM #1473
Open
gagank1 wants to merge 22 commits into main from gkaushik/esm2-vllm
+2,681 −112
Changes from all commits (22 commits)
e415bb9 draft pr - esm2 working with vllm (gagank1)
3f9f00e remove converted checkpoint from git tracking (gagank1)
99e8087 testing (gagank1)
7383a7c merge main (gagank1)
af0c43c cleanup (gagank1)
e2b3fcd files from other branch (gagank1)
c34c09b remove zombie files (gagank1)
a67df14 addressed feedback (gagank1)
8e8a87c remove unnecessary diff (gagank1)
36cdbb2 fix recipes (gagank1)
bb3f9e0 cleaned up exported_checkpoint fixture (gagank1)
ab9ba4c Merge remote-tracking branch 'origin/main' into gkaushik/esm2-vllm (gagank1)
6779320 fix vllm install - need to build from source for custom torch version (gagank1)
6280742 guard vllm tests with skipif (gagank1)
aca08b2 add tolerance to test (gagank1)
ff4664d fix transformers version after installing vllm (gagank1)
a3d4a9a cleanup (gagank1)
c31c3e8 fix ci (gagank1)
5275b40 docs (gagank1)
ef612a3 add back removed comment (gagank1)
60043cd restructure into its own recipe (gagank1)
c2d2351 remove whitespace (gagank1)
@@ -73,6 +73,7 @@ def __init__(
         max_seq_length: Optional[int] = None,
         padded_vocab_size: Optional[int] = 64,
         attn_mask_type: str = "padding",
+        add_pooling_layer: bool = False,
         layer_precision: list[str | None] | None = None,
         **kwargs,
     ):
@@ -103,6 +104,9 @@ def __init__(
             padded_vocab_size: The padded vocabulary size to support FP8. If not provided, defaults
                 to vocab_size. Must be greater than or equal to vocab_size.
             attn_mask_type: The type of attention mask to use.
+            add_pooling_layer: Whether the base model should include a pooling layer.
+                Defaults to ``False`` because exported checkpoints do not contain pooler
+                weights. Set to ``True`` only if you have a checkpoint with pooler weights.
             layer_precision: Per-layer quantization precision, a list of length ``num_hidden_layers``
                 where each element is ``"fp8"``, ``"fp4"``, or ``None`` (BF16 fallback). ``None``
                 (the default) means no quantization is configured.
@@ -117,6 +121,7 @@ def __init__(
         self.micro_batch_size = micro_batch_size
         self.max_seq_length = max_seq_length
         self.attn_mask_type = attn_mask_type
+        self.add_pooling_layer = add_pooling_layer
         self.layer_precision = layer_precision

         # Set padded_vocab_size with default fallback to vocab_size
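For orientation, a minimal usage sketch of the new config knob (the class names come from this diff; the import path and everything else are assumptions):

```python
# Minimal sketch (assumed import path); NVEsmConfig/NVEsmModel names come from this PR.
from esm_te_model import NVEsmConfig, NVEsmModel  # hypothetical module name

config = NVEsmConfig()      # add_pooling_layer defaults to False: exported checkpoints lack pooler weights
model = NVEsmModel(config)  # add_pooling_layer=None -> the model reads config.add_pooling_layer

config_pooler = NVEsmConfig(add_pooling_layer=True)  # only if the checkpoint actually has pooler weights
model_pooler = NVEsmModel(config_pooler)
```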
@@ -289,7 +294,7 @@ class NVEsmPreTrainedModel(EsmPreTrainedModel):
     """An abstract class to handle weights initialization and pretrained model loading."""

     config_class = NVEsmConfig
-    base_model_prefix = "esm"
+    base_model_prefix = "model"
     supports_gradient_checkpointing = False
     accepts_loss_kwargs = False
     _no_split_modules = (
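For context on why this one-line change matters: in Hugging Face Transformers, `base_model_prefix` is what `self.base_model` resolves through, roughly as in this simplified sketch (not this repository's code):

```python
# Simplified sketch of Hugging Face's base_model resolution (not code from this PR).
class PreTrainedModelSketch:
    base_model_prefix = "model"  # this PR renames the prefix from "esm" to "model"

    @property
    def base_model(self):
        # Wrapper classes such as NVEsmForMaskedLM expose `self.model`;
        # the bare NVEsmModel has no such attribute and resolves to itself.
        return getattr(self, self.base_model_prefix, self)
```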
@@ -305,11 +310,11 @@ def init_empty_weights(self):
             if hasattr(module, "reset_parameters"):
                 module.reset_parameters()

-        # The esm.embeddings layer is the only non-TE layer in this model we need to deal with. We use
+        # The embeddings layer is the only non-TE layer in this model we need to deal with. We use
         # `model._init_weights` rather than `reset_parameters` to ensure we honor the original config standard
-        # deviation.
-        self.esm.embeddings.word_embeddings.to_empty(device="cuda")
-        self.esm.embeddings.apply(self._init_weights)
+        # deviation. self.base_model resolves to self.model for wrapper classes or self for NVEsmModel.
+        self.base_model.embeddings.word_embeddings.to_empty(device="cuda")
+        self.base_model.embeddings.apply(self._init_weights)

         # Meta-device init seems to break weight tying, so we re-tie the weights here.
         self.tie_weights()
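The `to_empty` plus re-init calls above follow the usual meta-device workflow; a generic PyTorch illustration of that pattern (assumes a CUDA device; not code from this PR):

```python
# Generic illustration of the meta-device init pattern (assumes a CUDA device is available).
import torch
import torch.nn as nn

with torch.device("meta"):
    word_embeddings = nn.Embedding(64, 320)  # allocated without real storage

word_embeddings = word_embeddings.to_empty(device="cuda")    # materialize uninitialized memory on GPU
nn.init.normal_(word_embeddings.weight, mean=0.0, std=0.02)  # then apply the config's init std
```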
@@ -334,14 +339,16 @@ def _init_weights(self, module):
             super()._init_weights(module)

     def state_dict(self, *args, **kwargs):
-        """Override state_dict to filter out TransformerEngine's _extra_state keys.
+        """Override state_dict to filter out non-loadable keys.

-        TransformerEngine layers add _extra_state attributes that are not compatible with HuggingFace v5 model loading.
-        These are filtered out to ensure checkpoints can be loaded with from_pretrained().
+        Filters out:
+        - ``_extra_state`` keys: TransformerEngine-specific, not loadable by HuggingFace v5.
+        - ``.inv_freq`` buffers: Computed at init time by RotaryPositionEmbedding, not needed
+          in the checkpoint and not loadable by vLLM's AutoWeightsLoader (which only iterates
+          over ``named_parameters``, not ``named_buffers``).
         """
         state_dict = super().state_dict(*args, **kwargs)
-        # Filter out _extra_state keys which are TransformerEngine-specific and not loadable
-        return {k: v for k, v in state_dict.items() if not k.endswith("_extra_state")}
+        return {k: v for k, v in state_dict.items() if not k.endswith("_extra_state") and not k.endswith(".inv_freq")}
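A quick way to sanity-check the new filtering behavior (an illustrative helper, not a test from this PR):

```python
# Illustrative check (not from this PR): an exported state_dict should contain neither
# TransformerEngine "_extra_state" entries nor rotary ".inv_freq" buffers, since vLLM's
# AutoWeightsLoader only walks named_parameters and cannot consume them.
def assert_vllm_loadable_keys(state_dict: dict) -> None:
    bad = [k for k in state_dict if k.endswith("_extra_state") or k.endswith(".inv_freq")]
    assert not bad, f"non-loadable keys leaked into the checkpoint: {bad}"

# assert_vllm_loadable_keys(model.state_dict())  # `model` being an NVEsmForMaskedLM instance
```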
@@ -350,16 +357,20 @@ class NVEsmModel(NVEsmPreTrainedModel):
     This model uses NVDIA's TransformerEngine to optimize attention layer training and inference.
     """

-    def __init__(self, config: NVEsmConfig, add_pooling_layer: bool = True):
+    def __init__(self, config: NVEsmConfig, add_pooling_layer: Optional[bool] = None):
         """Initialize a NVEsmModel.

         Args:
             config (NVEsmConfig): The configuration of the model.
-            add_pooling_layer (bool): Whether to add a pooling layer.
+            add_pooling_layer (bool): Whether to add a pooling layer. If ``None``,
+                reads ``config.add_pooling_layer`` (defaults to ``True``).
         """
         super().__init__(config)
         self.config = config

+        if add_pooling_layer is None:
+            add_pooling_layer = getattr(config, "add_pooling_layer", True)
+
         # Ensure pad_token_id is set properly, defaulting to 0 if not specified
         if not hasattr(config, "pad_token_id") or config.pad_token_id is None:
             config.pad_token_id = 0
@@ -449,7 +460,9 @@ def forward(
 class NVEsmForMaskedLM(NVEsmPreTrainedModel):
     """NVEsmForMaskedLM is a TransformerEngine-optimized ESM model for masked language modeling."""

-    _tied_weights_keys: ClassVar[dict[str, str]] = {"lm_head.decoder.weight": "esm.embeddings.word_embeddings.weight"}
+    _tied_weights_keys: ClassVar[dict[str, str]] = {
+        "lm_head.decoder.weight": "model.embeddings.word_embeddings.weight"
+    }
     _do_not_quantize = ("lm_head.dense", "lm_head.decoder")  # Flag for testing that these layers are not quantized.
Collaborator (review comment on this hunk): you're deleting
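`_tied_weights_keys` tells Transformers which parameters share storage, so the tied source key has to follow the `esm` to `model` rename. A generic illustration of what the tie means (not this repository's code):

```python
# Generic weight-tying illustration (not code from this PR).
import torch.nn as nn

word_embeddings = nn.Embedding(64, 320)
decoder = nn.Linear(320, 64, bias=False)
decoder.weight = word_embeddings.weight  # one tensor, exposed under two state_dict keys

# After this PR the tied source key is "model.embeddings.word_embeddings.weight"
# rather than "esm.embeddings.word_embeddings.weight".
```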
@@ -466,7 +479,7 @@ def __init__(self, config: NVEsmConfig):
                 "bi-directional self-attention."
             )

-        self.esm = NVEsmModel(config, add_pooling_layer=False)
+        self.model = NVEsmModel(config, add_pooling_layer=False)
         self.lm_head = NVEsmLMHead(config)

         self.post_init()
@@ -501,7 +514,7 @@ def forward(
         Returns:
             MaskedLMOutput: The output of the model.
         """
-        outputs = self.esm(
+        outputs = self.model(
             input_ids,
             attention_mask=attention_mask,
             position_ids=position_ids,
@@ -719,7 +732,7 @@ def __init__(self, config):
         super().__init__(config)
         self.num_labels = config.num_labels

-        self.esm = NVEsmModel(config, add_pooling_layer=False)
+        self.model = NVEsmModel(config, add_pooling_layer=False)
         self.dropout = nn.Dropout(config.hidden_dropout_prob)
         self.classifier = transformer_engine.pytorch.Linear(
             config.hidden_size,
@@ -745,7 +758,7 @@ def forward(
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
         """
-        outputs = self.esm(
+        outputs = self.model(
             input_ids,
             attention_mask=attention_mask,
             position_ids=position_ids,
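With the base-model attribute renamed and the checkpoint keys cleaned up, the exported model is intended to load in vLLM. A rough usage sketch (the checkpoint path is a placeholder, and the exact vLLM entry point depends on the vLLM version and the export recipe in this PR):

```python
# Rough sketch (assumed usage; path and arguments are placeholders, not from this PR).
from vllm import LLM

llm = LLM(model="path/to/exported_esm2_checkpoint")         # hypothetical exported HF-style checkpoint
outputs = llm.encode(["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"])  # pooling-style inference on a protein sequence
```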
Review comment: [not blocking]: Okay, made some changes to convert_esm_hf_to_te