This repository was archived by the owner on Oct 23, 2023. It is now read-only.

Issue with inferring shapes in example model #65

@jmitrevs

Description

If I create an ONNX file with this sample script and an input.txt file:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# Define simple MLP architecture
class MLP(nn.Module):

    def __init__(self):
        super(MLP, self).__init__()
        # Two layer MLP, ingesting a single frame of BLM data
        self.layer1 = nn.Linear(259, 128)
        self.layer2 = nn.Linear(128, 259*2)

    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = torch.sigmoid(self.layer2(x))
        return x

# Inference function: load fixed weights, run a single forward pass, and export to ONNX
def run_inference() -> None:

    # Instantiate the MLP model
    model = MLP()
    # Fix random seed
    np.random.seed(0)

    # Generate weight tensors
    w1 = torch.tensor(np.random.normal(loc=0, scale=0.1, size=(128, 259)).astype(np.single))
    b1 = torch.tensor(np.random.normal(loc=0, scale=0.1, size=128).astype(np.single))

    w2 = torch.tensor(np.random.normal(loc=0, scale=0.1, size=(259*2, 128)).astype(np.single))
    b2 = torch.tensor(np.random.normal(loc=0, scale=0.1, size=259*2).astype(np.single))

    # Single inference step
    with torch.no_grad():

        # Load the fixed weights
        model.layer1.weight = nn.parameter.Parameter(w1)
        model.layer1.bias = nn.parameter.Parameter(b1)

        model.layer2.weight = nn.parameter.Parameter(w2)
        model.layer2.bias = nn.parameter.Parameter(b2)

        # Load the input data and add a batch dimension
        input_data = torch.from_numpy(np.loadtxt('input.txt', dtype=np.single)).unsqueeze(0)

        # Inference
        out = model(input_data)

        # Save in ONNX format
        torch.onnx.export(model,  # model being run
                          input_data,  # model input (or a tuple for multiple inputs)
                          "MLP.onnx")

if __name__ == '__main__':
    run_inference()

(the produced ONNX file is available at: https://drive.google.com/file/d/1wt6ub3cChvPD-XM4-7keuTy5dC5wdVZk/view?usp=sharing)
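The script also expects an input.txt containing a single frame of 259 float values. The original file is not attached here; a minimal sketch to generate a placeholder input of the right shape (random values, purely for reproduction):

import numpy as np

# Write 259 random single-precision values, one per line,
# matching the 259 input features of layer1 in the MLP above
np.random.seed(1)
np.savetxt('input.txt', np.random.normal(loc=0, scale=1, size=259).astype(np.single))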

it seems that the infer_shapes step from the cleanup fails to annotate all tensor shapes, so executing the cleaned model raises an exception:

(fastml) mac-137349:validation jmitrevs$ qonnx-cleanup MLP.onnx 
(fastml) mac-137349:validation jmitrevs$ qonnx-exec MLP_clean.onnx 
Traceback (most recent call last):
  File "/Users/jmitrevs/fastml/bin/qonnx-exec", line 33, in <module>
    sys.exit(load_entry_point('qonnx', 'console_scripts', 'qonnx-exec')())
  File "/Users/jmitrevs/work/qonnx/src/qonnx/util/exec_qonnx.py", line 43, in main
    clize.run(exec_qonnx)
  File "/Users/jmitrevs/fastml/lib/python3.9/site-packages/sigtools/modifiers.py", line 158, in __call__
    return self.func(*args, **kwargs)
  File "/Users/jmitrevs/fastml/lib/python3.9/site-packages/clize/runner.py", line 363, in run
    ret = cli(*args)
  File "/Users/jmitrevs/fastml/lib/python3.9/site-packages/clize/runner.py", line 220, in __call__
    return func(*posargs, **kwargs)
  File "/Users/jmitrevs/work/qonnx/src/qonnx/util/exec_qonnx.py", line 35, in exec_qonnx
    odict = execute_onnx(model, idict)
  File "/Users/jmitrevs/work/finn-base/src/finn/core/onnx_exec.py", line 147, in execute_onnx
    raise Exception("Found unspecified tensor shapes, try infer_shapes")
Exception: Found unspecified tensor shapes, try infer_shapes

The problem is that model.get_tensor_shape('Gemm_0_param0') returns [] rather than the expected weight shape. I do not understand this behavior.
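For reference, this is roughly how the shape annotation can be inspected; a minimal sketch assuming the cleaned MLP_clean.onnx from above and the qonnx ModelWrapper / InferShapes API (Gemm_0_param0 is assumed to be the first Gemm's weight initializer, which should match layer1's (128, 259) weight):

from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.infer_shapes import InferShapes

# Load the cleaned model and check the shape annotation of the first Gemm weight
model = ModelWrapper('MLP_clean.onnx')
print(model.get_tensor_shape('Gemm_0_param0'))  # returns []

# For comparison, re-run shape inference explicitly and check again
model = model.transform(InferShapes())
print(model.get_tensor_shape('Gemm_0_param0'))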
