/content/BLIP/models/med.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
176
177 # Take the dot product between "query" and "key" to get the raw attention scores.
--> 178 attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
179
180 if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
RuntimeError: The size of tensor a (3) must match the size of tensor b (9) at non-singleton dimension 0
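The error is raised in the cross-attention dot product: the query tensor (built from the text stream) and the key tensor (built from `encoder_hidden_states`, i.e. the image features) disagree on the batch dimension (3 vs 9), so `torch.matmul` cannot broadcast them. Below is a minimal sketch, with hypothetical shapes and a workaround that is an assumption rather than the repository's official fix, showing why the shapes must agree and how repeating one side along dim 0 resolves it:

```python
import torch

# Hypothetical attention shapes: (batch, num_heads, seq_len, head_dim)
num_heads, head_dim, txt_len, img_len = 12, 64, 20, 577

query_layer = torch.randn(3, num_heads, txt_len, head_dim)  # 3 text inputs
key_layer = torch.randn(9, num_heads, img_len, head_dim)    # 9 image feature sets

# This line reproduces the RuntimeError: batch sizes 3 vs 9 at dimension 0.
# attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))

# Assumed workaround: repeat the smaller batch so both sides line up,
# e.g. each of the 3 texts attends over 3 image feature sets.
query_layer = query_layer.repeat_interleave(3, dim=0)        # -> (9, 12, 20, 64)
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
print(attention_scores.shape)                                # torch.Size([9, 12, 20, 577])
```

In practice the mismatch usually originates at the caller, e.g. passing a different number of text inputs than image embeddings (or forgetting to tile image embeddings for beam search), so the cleaner fix is typically to `repeat_interleave` the image embeddings and their attention mask before they reach the encoder, rather than patching `med.py` itself.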