[Torch FX] Map Namespace Names to Metatype #3237
anzr299 wants to merge 34 commits into openvinotoolkit:develop
Conversation
remove unnecessary print
remove unnecessary FX metatype Registry
pre-commit fix
@anzr299, please fix the pre-commit:
A test case has caught an issue with an embedding model here: ynimmaga/executorch#26
… metatypes. Update reference graphs
| """ | ||
| super().__init__(name) | ||
| self._op_name_to_op_meta_dict: Dict[str, Type[OperatorMetatype]] = {} | ||
| self._func_name_to_op_meta_dict: Dict[str, Type[OperatorMetatype]] = {} |
input_port_id = dist_node.all_input_nodes.index(source_node)
return input_port_id, output_port_id, tensor_shape
return input_port_id, output_port_id, tensor_shape, tensor_dtype
Need to update return annotation
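For reference, a minimal sketch of what the widened annotation could look like once the dtype is part of the return value; the function and argument names below are placeholders, not the actual NNCF signature:

```python
from typing import Tuple

import torch


# Placeholder signature; only the return annotation matters here.
def get_edge_params(source_node, dist_node) -> Tuple[int, int, Tuple[int, ...], torch.dtype]:
    ...
```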
PT_OPERATOR_METATYPES = OperatorMetatypeRegistry("operator_metatypes")
FX_OPERATOR_METATYPES = OperatorMetatypeRegistry("operator_metatypes")
PT2_OPERATOR_METATYPES = OperatorMetatypeRegistry("operator_metatypes")
No need to add an extra registry; use only one, but use the full operation name as op_name.
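For illustration, a self-contained toy sketch of that suggestion (it assumes nothing about the real NNCF registry API): a single registry whose keys are full operation names.

```python
from typing import Dict, Optional, Type


class OperatorMetatype:
    """Minimal stand-in for the real metatype base class."""

    name: str = ""


class OperatorMetatypeRegistry:
    """Toy registry keyed by the full operation name, e.g. torch.nn.functional.conv2d."""

    def __init__(self, name: str):
        self._name = name
        self._op_name_to_op_meta_dict: Dict[str, Type[OperatorMetatype]] = {}

    def register(self, op_name: str):
        def wrap(cls: Type[OperatorMetatype]) -> Type[OperatorMetatype]:
            self._op_name_to_op_meta_dict[op_name] = cls
            return cls

        return wrap

    def get(self, op_name: str) -> Optional[Type[OperatorMetatype]]:
        return self._op_name_to_op_meta_dict.get(op_name)


PT_OPERATOR_METATYPES = OperatorMetatypeRegistry("operator_metatypes")


@PT_OPERATOR_METATYPES.register("torch.nn.functional.conv2d")
class Conv2dMetatype(OperatorMetatype):
    name = "Conv2DOp"


assert PT_OPERATOR_METATYPES.get("torch.nn.functional.conv2d") is Conv2dMetatype
```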
@property
def func_namespace(self) -> str:
    if self.func.__qualname__.split(".")[0] == "TensorBase":
        return f"torch.tensor.{str(self.func.__name__)}"
no need to use str within f-string
oh, I missed that when changing from an older implementation. Thank you very much!
return self.func.__name__
@property
def func_namespace(self) -> str:
Change the func_name property instead; there should be only one way to get the name.
if self.func.__qualname__.split(".")[0] == "TensorBase":
    return f"torch.tensor.{str(self.func.__name__)}"
elif self.func.__qualname__ == self.func.__name__:
    return f"torch.nn.functional.{str(self.func.__name__)}"
This does not work correctly for functions from torchvision like deform_conv2d, or for symmetric_quantize in NNCF.
So it is better to use f"{func.__module__}.{func.__name__}" as op_name for functions and func.__qualname__ for methods.
Also, using NamespaceTarget in the definition of metatypes is relevant only for patched functions; it should not be used for FX and TorchFunctionMode tracing.
Sorry, I did not understand the suggestion "f"{func.__module__}.{func.__name__}" as op_name for functions and func.__qualname__ for methods". Do you suggest using these directly? Because, for example, torch.tensor.to does not have a __module__, but __qualname__ returns TensorBase.to.
Yes.
torch.tensor.to is a method and cannot be called as torch.tensor.to, and methods do not contain __module__.
It can be used only on an instance of Tensor: torch.tensor([1]).to.__qualname__.
So I suggested using __qualname__ for it.
Okay, so should I modify the metatype definition to be tensorbase.to instead of torch.tensor.to so that it can match the correct metatype? Or do I create a fake tensor and call Tensor([1]).to.__qualname__ on it?
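To make the suggestion above concrete, here is a hedged sketch of an op_name helper along those lines; the dot-in-__qualname__ check is an assumption of this sketch, and the exact qualified name of Tensor methods (Tensor.to vs. TensorBase.to) depends on the PyTorch version.

```python
import torch
import torch.nn.functional as F


def get_op_name(func) -> str:
    """Sketch: __module__ + __name__ for free functions, __qualname__ for methods."""
    qualname = func.__qualname__
    if "." in qualname:
        # Methods such as torch.Tensor.to carry the owning class in __qualname__
        # and, per the discussion above, have no usable __module__.
        return qualname
    # Free functions: torch.nn.functional.*, torchvision's deform_conv2d,
    # NNCF's symmetric_quantize, etc.
    return f"{func.__module__}.{func.__name__}"


# Illustrative only; the exact strings vary across PyTorch versions.
print(get_op_name(F.embedding))      # e.g. "torch.nn.functional.embedding"
print(get_op_name(torch.Tensor.to))  # e.g. "Tensor.to" / "TensorBase.to"
```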
review fixes
Changes
Created a new dictionary in the operator metatypes registry to maintain a mapping from namespace and function name to metatype.
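For illustration only, the described lookup might look roughly like this sketch (placeholder names, not the actual registry code):

```python
from typing import Dict, Tuple, Type


class OperatorMetatype:
    """Placeholder for the real metatype base class."""


# Placeholder mapping: (namespace, function name) -> metatype.
_NAMESPACE_AND_FUNC_TO_METATYPE: Dict[Tuple[str, str], Type[OperatorMetatype]] = {}


def add_mapping(namespace: str, func_name: str, metatype: Type[OperatorMetatype]) -> None:
    _NAMESPACE_AND_FUNC_TO_METATYPE[(namespace, func_name)] = metatype


def get_metatype(namespace: str, func_name: str) -> Type[OperatorMetatype]:
    # Fall back to the base class when nothing matches, mirroring an
    # "unknown metatype"-style default.
    return _NAMESPACE_AND_FUNC_TO_METATYPE.get((namespace, func_name), OperatorMetatype)
```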
Reason for changes
For more accurate retrieval of operation metatypes.
Tests
The original graph tests and metatypes were used to check the correctness of the change.