[WIP]Add initial support for transpiler #3063
base: main
Conversation
flake8 found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
Force-pushed from bbf8004 to 74abf84
| """ | ||
| Operation registry - shared across all frontends. | ||
| """ | ||
| from typing import Dict, Type |
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'Dict' is not used.
Import of 'Type' is not used.
Copilot Autofix
AI about 22 hours ago
To fix the issue, remove the unused imports from the import statement on line 4 in forge/forge/transpiler/core/registry.py. Specifically, delete Dict and Type from the from typing import ... statement. Since neither Dict nor Type are used in this file, no further action is required. Ensure that only the line with the unused imports is removed or amended, leaving the remaining code unchanged.
```diff
@@ -1,7 +1,6 @@
 """
 Operation registry - shared across all frontends.
 """
-from typing import Dict, Type
 from ..ir.nodes import TIRNode, get_op_registry as _get_op_registry


 # Re-export for convenience
```
```python
Operation registry - shared across all frontends.
"""
from typing import Dict, Type
from ..ir.nodes import TIRNode, get_op_registry as _get_op_registry
```
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'TIRNode' is not used.
Copilot Autofix
AI about 22 hours ago
To fix this issue, we should remove the unused import of TIRNode from the import statement on line 5 of forge/forge/transpiler/core/registry.py. This can be accomplished by editing the import line to only import get_op_registry as _get_op_registry from ..ir.nodes, thereby eliminating the unnecessary dependency and reducing clutter in the code. No other changes need to be made, as there are no references to TIRNode elsewhere in the provided code.
```diff
@@ -2,7 +2,7 @@
 Operation registry - shared across all frontends.
 """
 from typing import Dict, Type
-from ..ir.nodes import TIRNode, get_op_registry as _get_op_registry
+from ..ir.nodes import get_op_registry as _get_op_registry


 # Re-export for convenience
 get_op_registry = _get_op_registry
```
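The registry's public surface here is just a re-export. For readers unfamiliar with the pattern, a minimal sketch of what a `register_op`/`get_op_registry` pair typically looks like — the names mirror the imports in this PR, but the body below is an assumption for illustration, not the actual `..ir.nodes` implementation:

```python
# Sketch of an op-registry pattern; illustrative only, the PR's actual
# register_op/get_op_registry may differ.
_OP_REGISTRY = {}

def register_op(op_type):
    """Class decorator mapping an ONNX op type to its TIR node class."""
    def wrap(cls):
        _OP_REGISTRY[op_type] = cls
        return cls
    return wrap

def get_op_registry():
    """Return a copy so callers cannot mutate the shared registry."""
    return dict(_OP_REGISTRY)

@register_op("MaxPool")
class MaxPoolNode:
    pass

print("MaxPool" in get_op_registry())  # True
```

Each frontend can then look up the node class for an incoming op type instead of hard-coding a dispatch table.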
| """ | ||
| ONNX-specific utility functions. | ||
| """ | ||
| import onnx |
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'onnx' is not used.
Copilot Autofix
AI about 22 hours ago
The best way to fix the problem is to simply remove the unused import onnx statement from line 4 in forge/forge/transpiler/frontends/onnx/converters/utils.py. This cleans up the code by removing unnecessary dependencies, as all necessary symbols from onnx are already imported via from onnx import ModelProto. No other changes to code or dependencies are required.
```diff
@@ -1,7 +1,6 @@
 """
 ONNX-specific utility functions.
 """
-import onnx
 from onnx import ModelProto
 from typing import List
```
```python
from .engine import ONNXToForgeTranspiler
from .codegen import generate_forge_module
from .graph import TIRGraph
```
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'TIRGraph' is not used.
Copilot Autofix
AI about 22 hours ago
To fix the problem, simply delete the line from .graph import TIRGraph from forge/forge/transpiler/test_verify.py, as this import is not used anywhere in the provided code. No additional changes, imports, or definitions are required elsewhere in the file. Double-check that no other usage or definition of TIRGraph exists in the given snippet before removal.
```diff
@@ -10,7 +10,6 @@

 from .engine import ONNXToForgeTranspiler
 from .codegen import generate_forge_module
-from .graph import TIRGraph


 def verify_model():
```
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main    #3063   +/-   ##
=======================================
  Coverage   63.55%   63.55%
=======================================
  Files         156      156
  Lines       11908    11908
=======================================
  Hits         7568     7568
  Misses       4340     4340
```

☔ View full report in Codecov by Sentry.
Force-pushed from 74abf84 to 8d274e1
```python
from onnx import numpy_helper, shape_inference
import torch
import logging
from typing import Dict, List, Any, Optional, Tuple
```
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'Dict' is not used.
Import of 'List' is not used.
Import of 'Any' is not used.
Import of 'Tuple' is not used.
Copilot Autofix
AI about 15 hours ago
To fix this issue, simply remove Optional from the from typing import ... import statement on line 8 in forge/forge/transpiler/frontends/onnx/engine.py. Leave the rest of the imports unchanged, and do not remove the entire line if other types are still used elsewhere in the file. This change ensures that the code does not include unnecessary dependencies, reduces potential confusion, and follows best practices for code hygiene. No new functionality or external dependencies are required for this fix.
```diff
@@ -5,7 +5,7 @@
 from onnx import numpy_helper, shape_inference
 import torch
 import logging
-from typing import Dict, List, Any, Optional, Tuple
+from typing import Dict, List, Any, Tuple
 from typing import List as ListType


 from ...ir.types import TensorInfo, onnx_dtype_to_torch_dtype
```
```python
import torch
import logging
from typing import Dict, List, Any, Optional, Tuple
from typing import List as ListType
```
Check notice
Code scanning / CodeQL
Unused import Note
Copilot Autofix
AI about 15 hours ago
To fix this problem, we should remove the unused import statement, specifically the line from typing import List as ListType. This will clean up the unnecessary dependency, improving code readability and slightly reducing module load time. No other sections of the presented code are dependent on ListType being present, so removal is safe and cannot affect existing functionality.
Locate the from typing import List as ListType line (line 9) in forge/forge/transpiler/frontends/onnx/engine.py and delete it. No further changes are necessary.
```diff
@@ -6,7 +6,6 @@
 import torch
 import logging
 from typing import Dict, List, Any, Optional, Tuple
-from typing import List as ListType


 from ...ir.types import TensorInfo, onnx_dtype_to_torch_dtype
 from ...ir.nodes import TIRNode
```
```python
from typing import List as ListType


from ...ir.types import TensorInfo, onnx_dtype_to_torch_dtype
from ...ir.nodes import TIRNode
```
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'TIRNode' is not used.
Copilot Autofix
AI about 15 hours ago
The best way to fix this problem is to simply delete the unused import statement: from ...ir.nodes import TIRNode from line 12 of forge/forge/transpiler/frontends/onnx/engine.py. This removes unnecessary code and avoids confusion about where dependencies exist. No further changes are required since no code in the shown snippet references TIRNode.
```diff
@@ -9,7 +9,6 @@
 from typing import List as ListType


 from ...ir.types import TensorInfo, onnx_dtype_to_torch_dtype
-from ...ir.nodes import TIRNode
 from ...ir.operations.generic import GenericNode
 from ...ir.operations.arithmetic import AddNode, SubNode, MulNode, DivNode, MatMulNode
 from ...ir.operations.conv import Conv1dNode, Conv2dNode, Conv3dNode
```
```python
# Determine conv dimension
if isinstance(kernel_shape, int):
    kernel_dims = 1
    kernel_size = kernel_shape
```
Check notice
Code scanning / CodeQL
Unused local variable Note
Copilot Autofix
AI about 15 hours ago
The best way to fix this problem is to delete the assignment to the unused variable kernel_size, taking care not to remove any right-hand-side expressions that might have side effects (in this case, there are none; the assignment is straightforward). Specifically, lines 227 and 230 handle assignment to kernel_size, but since it is never used, both assignments should be removed, and we should restructure the conditional so that all references to kernel_size are omitted. The surrounding logic relies only on kernel_dims, which is used, so the fix amounts to removing the assignment lines for kernel_size (lines 227 and 230), and not introducing any variable in its place.
No imports, definitions, or method changes are needed—simply remove the assignments for the unused variable.
```diff
@@ -224,10 +224,8 @@
 # Determine conv dimension
 if isinstance(kernel_shape, int):
     kernel_dims = 1
-    kernel_size = kernel_shape
 else:
     kernel_dims = len(kernel_shape)
-    kernel_size = kernel_shape[0] if len(kernel_shape) == 1 else kernel_shape


 # Handle AUTO_PAD
 auto_pad = attrs.get('auto_pad', 'NOTSET')
```
```python
    kernel_size = kernel_shape
else:
    kernel_dims = len(kernel_shape)
    kernel_size = kernel_shape[0] if len(kernel_shape) == 1 else kernel_shape
```
Check notice
Code scanning / CodeQL
Unused local variable Note
Copilot Autofix
AI about 15 hours ago
To fix the issue, simply remove the assignment to kernel_size since it is not being used anywhere in the provided code. Deleting this assignment avoids confusion, clarifies intent, and eliminates the warning flagged by the static analysis tool. Since kernel_size does not appear to have any side effects on the right-hand side (it's just a value extracted from kernel_shape), this change is safe and has no effect on functionality.
Edit only line 230 in forge/forge/transpiler/frontends/onnx/engine.py to remove the assignment to kernel_size, leaving assignments to kernel_dims untouched. No other code or imports need to be altered.
```diff
@@ -227,7 +227,6 @@
     kernel_size = kernel_shape
 else:
     kernel_dims = len(kernel_shape)
-    kernel_size = kernel_shape[0] if len(kernel_shape) == 1 else kernel_shape


 # Handle AUTO_PAD
 auto_pad = attrs.get('auto_pad', 'NOTSET')
```
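After both autofixes, only `kernel_dims` survives this branch. A standalone sketch of the remaining logic (the function wrapper is illustrative, not the PR's code):

```python
def conv_dims(kernel_shape):
    """Spatial dimensionality implied by an ONNX kernel_shape attribute."""
    # An int kernel_shape means a 1-D conv; otherwise the length of the
    # kernel_shape list/tuple gives the dimensionality.
    if isinstance(kernel_shape, int):
        return 1
    return len(kernel_shape)

print(conv_dims(3))          # 1
print(conv_dims([3, 3]))     # 2
print(conv_dims([2, 2, 2]))  # 3
```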
```python
from ..nodes import TIRNode, register_op
from ..types import TensorInfo
from ...frontends.onnx.converters.autopad import AutoPad
```
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'AutoPad' is not used.
Copilot Autofix
AI about 15 hours ago
To fix the issue, we should delete the import statement: from ...frontends.onnx.converters.autopad import AutoPad on line 10 of forge/forge/transpiler/ir/operations/pooling.py. There is no need to add any replacement or substitute code, as no references to AutoPad exist in the shown code. This change preserves the current functionality while cleaning up an unnecessary import.
```diff
@@ -7,7 +7,6 @@

 from ..nodes import TIRNode, register_op
 from ..types import TensorInfo
-from ...frontends.onnx.converters.autopad import AutoPad


 @register_op("MaxPool")
```
```python
# Determine function name based on kernel_size
if isinstance(kernel_size, int):
    ndim = 1
```
Check notice
Code scanning / CodeQL
Unused local variable Note
Copilot Autofix
AI about 15 hours ago
To fix this issue, simply remove the statement assigning ndim = 1 on line 34 in the MaxPoolNode.create static method. This change does not affect functionality because the value is not used in any subsequent code or logic for this branch. Care should be taken not to remove any assignment with side effects, but here the right hand side is a constant and has no side effects. Only this line needs to be deleted.
```diff
@@ -31,7 +31,6 @@

 # Determine function name based on kernel_size
 if isinstance(kernel_size, int):
-    ndim = 1
     func_name = "forge.op.MaxPool1d"
 elif isinstance(kernel_size, (list, tuple)):
     ndim = len(kernel_size)
```
```python
# Determine function name based on kernel_size
if isinstance(kernel_size, int):
    ndim = 1
```
Check notice
Code scanning / CodeQL
Unused local variable Note
Copilot Autofix
AI about 15 hours ago
To fix the problem, remove the unused assignment to the variable ndim on line 109 within the create method of the AveragePoolNode class. Since its assignment has no side effects and is not referenced elsewhere, delete just the line containing ndim = 1 and leave the rest of the method unchanged. No additional methods, imports, or definitions are required. Only edit forge/forge/transpiler/ir/operations/pooling.py at line 109.
```diff
@@ -106,7 +106,6 @@

 # Determine function name based on kernel_size
 if isinstance(kernel_size, int):
-    ndim = 1
     func_name = "forge.op.AvgPool1d"
 elif isinstance(kernel_size, (list, tuple)):
     ndim = len(kernel_size)
```
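Both pooling nodes share the same dispatch shape: pick the `forge.op.*Pool{N}d` function from the kernel's dimensionality. A standalone sketch of that pattern (`pool_func_name` is a hypothetical helper, not the PR's API; the `forge.op.MaxPool1d`/`forge.op.AvgPool1d` names come from the diffs above):

```python
def pool_func_name(kernel_size, op="MaxPool"):
    """Pick a forge.op pooling function name from kernel dimensionality."""
    if isinstance(kernel_size, int):
        ndim = 1
    elif isinstance(kernel_size, (list, tuple)):
        ndim = len(kernel_size)
    else:
        raise TypeError(f"unsupported kernel_size: {kernel_size!r}")
    return f"forge.op.{op}{ndim}d"

print(pool_func_name(3))                  # forge.op.MaxPool1d
print(pool_func_name([2, 2], "AvgPool"))  # forge.op.AvgPool2d
```

With the dispatch factored out this way, `ndim` is only computed where it is actually consumed, which is exactly what the autofixes restore.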
```python
Reduction operations: ReduceSum, ReduceMean, ReduceMax
"""
import torch
from typing import Dict, List, Optional, Union, Tuple
```
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'Optional' is not used.
Copilot Autofix
AI about 15 hours ago
The best way to fix the problem is to remove the unused import of Optional from the import statement on line 5 in forge/forge/transpiler/ir/operations/reduction.py. We should not remove the entire typing import, but only delete Optional to prevent unnecessary dependencies, while leaving the other imports (Dict, List, Union, Tuple) intact because they are used in type annotations throughout the file. No further changes are required.
```diff
@@ -2,7 +2,7 @@
 Reduction operations: ReduceSum, ReduceMean, ReduceMax
 """
 import torch
-from typing import Dict, List, Optional, Union, Tuple
+from typing import Dict, List, Union, Tuple


 from ..nodes import TIRNode, register_op
 from ..types import TensorInfo
```
| """ | ||
| Type definitions for the transpiler IR. | ||
| """ | ||
| import onnx |
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'onnx' is not used.
Copilot Autofix
AI about 15 hours ago
To fix the problem, simply remove the unused import onnx statement on line 4 of forge/forge/transpiler/ir/types.py. This eliminates the redundant dependency and conforms to Python best practices, making the code cleaner and easier to maintain. No further changes are required, since all necessary references to ONNX are via the explicit import from onnx import TensorProto.
```diff
@@ -1,7 +1,6 @@
 """
 Type definitions for the transpiler IR.
 """
-import onnx
 from onnx import TensorProto
 import torch
 from typing import Optional, Tuple
```
Summary

This PR introduces a new ONNX Transpiler Framework that converts ONNX ModelProto inputs into a fully executable Transpiler Intermediate Representation (TIR) and generates a Forge Module. The transpiler provides a modular flow covering ONNX parsing, intermediate graph construction, PyTorch-based computation, attribute transformation, and detailed per-node debugging support. This establishes the foundation for replacing the existing TVM-based workflow with a complete in-house transpilation pipeline.
Key Features
1. ONNX Frontend & Input Handling

Accepts ModelProto as input.

2. TIR Graph Construction

The graph is constructed from TIRNode objects. Each TIRNode stores operator name, inputs/outputs, attributes, and execution details.

3. TIRNode Operation Pipeline
Each ONNX operation goes through the following pipeline:
Framework Attrs → Torch Attrs Conversion

Converts ONNX operator attributes (e.g., kernel_shape, pads, strides) into PyTorch-compatible formats.

Torch Attrs → Forge Attrs Conversion

Normalizes PyTorch attributes into Forge-specific parameters used for final lowering.

PyTorch-Based Computation

Utilizes torch.* operators for numerical computation and validation.

Forge CodeGen Emission

Generates Forge operation metadata including op name, function, inputs, outputs, and attributes.
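The Framework Attrs → Torch Attrs step can be pictured with a small sketch. The function name and the exact mapping below are assumptions for illustration, not the PR's converter code; it covers only the symmetric-padding 2-D Conv case:

```python
def onnx_conv_attrs_to_torch(attrs):
    """Map ONNX Conv attributes to PyTorch-style conv kwargs (2-D case)."""
    # ONNX pads are ordered [h_begin, w_begin, h_end, w_end].
    pads = attrs.get("pads", [0, 0, 0, 0])
    return {
        "kernel_size": tuple(attrs["kernel_shape"]),
        "stride": tuple(attrs.get("strides", [1, 1])),
        # torch conv ops take symmetric (padH, padW); asymmetric ONNX pads
        # would need an explicit pre-pad step instead.
        "padding": (pads[0], pads[1]),
    }

print(onnx_conv_attrs_to_torch({"kernel_shape": [3, 3], "strides": [2, 2]}))
# {'kernel_size': (3, 3), 'stride': (2, 2), 'padding': (0, 0)}
```

The Torch Attrs → Forge Attrs step would then renormalize these kwargs into whatever parameter names the Forge ops expect.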
4. Executable TIR Graph
5. Debug Mode & Validation
Workflow Overview