[BUG] nnfusion jit data type mismatch with float16 precision #446

Open
LeiWang1999 opened this issue Jul 7, 2022 · 0 comments
Labels
bug Something isn't working

Comments

@LeiWang1999 (Contributor)

🐛 Bug: nnfusion jit data type mismatch with float16 precision

Using the nnfusion Python interface to compile and build a resnet50.float16.onnx model, I ran into several issues.

First, nnfusion jit currently doesn't support the float16 data type in dtypes.py:

str2type = {
    "float":
    TypeObject._make(["float32", ctypes.c_float, torch.float32,
                      numpy.float32]),
    "float32":
    TypeObject._make(["float32", ctypes.c_float, torch.float32,
                      numpy.float32]),
    "double":
    TypeObject._make(
        ["float64", ctypes.c_double, torch.float64, numpy.float64]),
    "float64":
    TypeObject._make(["float64", ctypes.c_double, torch.float64,
                      numpy.float64]),
    "int8":
    TypeObject._make(["int8", ctypes.c_int8, torch.int8, numpy.int8]),
    "int16":
    TypeObject._make(["int16", ctypes.c_int16, torch.int16, numpy.int16]),
    "int32":
    TypeObject._make(["int32", ctypes.c_int32, torch.int32, numpy.int32]),
    "int64":
    TypeObject._make(["int64", ctypes.c_int64, torch.int64, numpy.int64]),
    "uint8":
    TypeObject._make(["uint8", ctypes.c_uint8, torch.uint8, numpy.uint8]),
    "uint16":
    TypeObject._make(["uint8", ctypes.c_uint16, None, numpy.uint16]),
    "uint32":
    TypeObject._make(["uint8", ctypes.c_uint32, None, numpy.uint32]),
    "uint64":
    TypeObject._make(["uint8", ctypes.c_uint64, None, numpy.uint64]),
}

I appended a float16 entry to this map to work around the problem, but then the following issue came up:

Traceback (most recent call last):
  File "resnet.py", line 53, in <module>
    nnf_execute()
  File "resnet.py", line 48, in nnf_execute
    executor({input_name: data_desc}, {output_name: nnf_out_desc})
  File "/workspace/v-leiwang3/nnfusion/src/python/nnfusion/executor.py", line 181, in __call__
    self.feed_data(*args, **kwargs)
  File "/workspace/v-leiwang3/nnfusion/src/python/nnfusion/executor.py", line 203, in feed_data
    raise Exception(
Exception: Shape or type mismatch for NNFusion model input input, expect [(64, 3, 224, 224), half], feed [(64, 3, 224, 224), float16]
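For reference, the entry I added looks roughly like this (a sketch, using the same field order as the existing entries: type-name string, ctypes type, torch dtype, numpy dtype; ctypes has no half-precision type, so that slot is left as None):

    # sketch of the added mapping; ctypes slot is None because ctypes has no half type
    "float16":
    TypeObject._make(["float16", None, torch.float16, numpy.float16]),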

This was caused by the para_info.json generated by nnfusion codegen: the input and output data types there are given as half, while nnfusion jit uses the name float16, hence the mismatch exception. A possible workaround is sketched after the excerpt below.

{
    "input": {
        "input": {
            "id": "((int64_t*)(inputs[0]))",
            "name": "Parameter_351_0",
            "shape": [
                1,
                512
            ]
        }
    },
    "output": {
        "output": {
            "id": "((half*)(outputs[0]))",
            "name": "Result_1912_0",
            "shape": [
                1,
                100
            ]
        }
    }
}
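One possible workaround (untested, and it assumes the executor compares the type-name string from para_info.json against the first field of the matching TypeObject, which is what the exception message suggests) would be to also register the codegen name half and use it as the type string, e.g.:

    # assumption: the first field is the name feed_data compares against para_info.json
    "half":
    TypeObject._make(["half", None, torch.float16, numpy.float16]),
    "float16":
    TypeObject._make(["half", None, torch.float16, numpy.float16]),

A cleaner fix is probably to normalize half/float16 to one canonical name on the codegen side or in executor.feed_data, but I have not checked which place makes more sense.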