
LiBai MT5 unsupported ops #87

Closed

CPFLAME opened this issue Sep 13, 2022 · 6 comments

Comments

CPFLAME (Contributor) commented Sep 13, 2022

Unsupported ops: Counter({'broadcast_matmul': 89, 'scalar_div': 4, 'gather': 4, 'elementwise_minimum': 3, 'fill_': 2, 'scalar_logical_less': 2, 'where': 2, 'scalar_logical_greater': 1})
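For context, a tally like the one above can be produced with a `collections.Counter` over the job's ops, counting every `op_type_name` that has no registered ONNX converter. A minimal sketch of that idea (the handler set and op list below are made up for illustration, not oneflow_onnx's real registry):

```python
from collections import Counter

# Hypothetical set of op types that have ONNX converters registered.
ONNX_HANDLERS = {"matmul", "add", "relu"}

def count_unsupported(op_types):
    """Count how often each op type without a converter appears."""
    return Counter(t for t in op_types if t not in ONNX_HANDLERS)

ops = ["broadcast_matmul", "matmul", "gather", "gather", "fill_"]
print(count_unsupported(ops))  # e.g. Counter({'gather': 2, ...})
```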
doombeaker (Contributor) commented Sep 15, 2022

I'm using the ops needed here as practice. To avoid duplicating anyone else's work, I'll list the PRs that have already been opened here:

daquexian (Contributor) commented:

Is fill_ an in-place op? And how is it represented at the job level?

doombeaker (Contributor) commented:

> Is fill_ an in-place op?

I checked; it is an in-place op.

> And how is it represented at the job level?

That one I don't understand yet... could someone passing by help out?

BBuf (Contributor) commented Sep 15, 2022

Print the job and take a look.

doombeaker (Contributor) commented Sep 15, 2022

> Print the job and take a look.

I put together a simple example:

import oneflow as flow
from oneflow_onnx.oneflow2onnx.util import convert_to_onnx_and_check


class MathOps(flow.nn.Module):
    def __init__(self) -> None:
        super(MathOps, self).__init__()

    def forward(self, x: flow.Tensor) -> flow.Tensor:
        # In-place fill: overwrites every element of x with 10.0
        x.fill_(10.0)
        return x


math_ops = MathOps()


class MathOpGraph(flow.nn.Graph):
    def __init__(self):
        super().__init__()
        self.m = math_ops

    def build(self, x):
        out = self.m(x)
        return out


def test_math_ops():
    math_ops_graph = MathOpGraph()
    math_ops_graph._compile(flow.randn(1, 3, 224, 224))
    convert_to_onnx_and_check(math_ops_graph, onnx_model_path="/tmp")


test_math_ops()

The job it prints:

net {
  op {
    name: "_MathOpGraph_0_input.0.0_2"
    device_tag: "cpu"
    scope_symbol_id: 12
    input_conf {
      out: "out"
      blob_conf {
        shape {
          dim: 1
          dim: 3
          dim: 224
          dim: 224
        }
        data_type: kFloat
        is_dynamic: false
        nd_sbp {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
    }
  }
  op {
    name: "m-fill_-0"
    device_tag: "cpu"
    scope_symbol_id: 16
    loc: "Python Stack[-2]: \'build\' at \'onnx_mapping.py\': line 40; Python Stack[-1]: \'forward\' at \'onnx_mapping.py\': line 26;  ... more"
    user_conf {
      op_type_name: "fill_"
      input {
        key: "in"
        value {
          s: "_MathOpGraph_0_input.0.0_2/out"
        }
      }
      output {
        key: "out"
        value {
          s: "m-fill_-0/out_0"
        }
      }
      attr {
        key: "floating_value"
        value {
          at_double: 10.0
        }
      }
      attr {
        key: "integral_value"
        value {
          at_int64: 0
        }
      }
      attr {
        key: "is_floating_value"
        value {
          at_bool: true
        }
      }
      input_order: "in"
      output_order: "out"
    }
  }
  op {
    name: "_MathOpGraph_0_output.0.0_2"
    device_tag: "cpu"
    scope_symbol_id: 12
    output_conf {
      in: "m-fill_-0/out_0"
      out: "out"
      blob_conf {
        shape {
          dim: 1
          dim: 3
          dim: 224
          dim: 224
        }
        data_type: kFloat
        is_dynamic: false
        nd_sbp {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
    }
  }
}
placement {
  placement_group {
    op_set {
      op_name: "_MathOpGraph_0_input.0.0_2"
      op_name: "m-fill_-0"
      op_name: "_MathOpGraph_0_output.0.0_2"
    }
    parallel_conf {
      device_name: "@0:0"
      device_tag: "cpu"
      hierarchy {
        dim: 1
      }
    }
  }
  blob_placement_group {
    lbi {
      op_name: "_MathOpGraph_0_input.0.0_2"
      blob_name: "out"
    }
    lbi {
      op_name: "m-fill_-0"
      blob_name: "out_0"
    }
    lbi {
      op_name: "_MathOpGraph_0_output.0.0_2"
      blob_name: "out"
    }
    parallel_conf {
      device_name: "@0:0"
      device_tag: "cpu"
      hierarchy {
        dim: 1
      }
    }
  }
}
job_conf {
  job_name: "MathOpGraph_0"
  predict_conf {
  }
}
job_parallel_view_conf {
  op_name2sbp_signature_conf {
    key: "_MathOpGraph_0_input.0.0_2"
    value {
      bn_in_op2sbp_parallel {
        key: "out"
        value {
          broadcast_parallel {
          }
        }
      }
    }
  }
  op_name2sbp_signature_conf {
    key: "_MathOpGraph_0_output.0.0_2"
    value {
      bn_in_op2sbp_parallel {
        key: "in"
        value {
          broadcast_parallel {
          }
        }
      }
      bn_in_op2sbp_parallel {
        key: "out"
        value {
          broadcast_parallel {
          }
        }
      }
    }
  }
  op_name2sbp_signature_conf {
    key: "m-fill_-0"
    value {
      bn_in_op2sbp_parallel {
        key: "in_0"
        value {
          broadcast_parallel {
          }
        }
      }
      bn_in_op2sbp_parallel {
        key: "out_0"
        value {
          broadcast_parallel {
          }
        }
      }
    }
  }
  op_name2nd_sbp_signature_conf {
    key: "_MathOpGraph_0_input.0.0_2"
    value {
      bn_in_op2nd_sbp {
        key: "out"
        value {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
    }
  }
  op_name2nd_sbp_signature_conf {
    key: "_MathOpGraph_0_output.0.0_2"
    value {
      bn_in_op2nd_sbp {
        key: "in"
        value {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
      bn_in_op2nd_sbp {
        key: "out"
        value {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
    }
  }
  op_name2nd_sbp_signature_conf {
    key: "m-fill_-0"
    value {
      bn_in_op2nd_sbp {
        key: "in_0"
        value {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
      bn_in_op2nd_sbp {
        key: "out_0"
        value {
          sbp_parallel {
            broadcast_parallel {
            }
          }
        }
      }
    }
  }
}
helper {
  lbn2logical_blob_desc {
    key: "_MathOpGraph_0_input.0.0_2/out"
    value {
      shape {
        dim: 1
        dim: 3
        dim: 224
        dim: 224
      }
      stride {
        dim: 150528
        dim: 50176
        dim: 224
        dim: 1
      }
      data_type: kFloat
      is_dynamic: false
    }
  }
  lbn2logical_blob_desc {
    key: "_MathOpGraph_0_output.0.0_2/out"
    value {
      shape {
        dim: 1
        dim: 3
        dim: 224
        dim: 224
      }
      stride {
        dim: 150528
        dim: 50176
        dim: 224
        dim: 1
      }
      data_type: kFloat
      is_dynamic: false
    }
  }
  lbn2logical_blob_desc {
    key: "m-fill_-0/out_0"
    value {
      shape {
        dim: 1
        dim: 3
        dim: 224
        dim: 224
      }
      stride {
        dim: 150528
        dim: 50176
        dim: 224
        dim: 1
      }
      data_type: kFloat
      is_dynamic: false
    }
  }
  op_name2arg_signature {
    key: "_MathOpGraph_0_input.0.0_2"
    value {
      bn_in_op2lbi {
        key: "out"
        value {
          op_name: "_MathOpGraph_0_input.0.0_2"
          blob_name: "out"
        }
      }
    }
  }
  op_name2arg_signature {
    key: "_MathOpGraph_0_output.0.0_2"
    value {
      bn_in_op2lbi {
        key: "in"
        value {
          op_name: "m-fill_-0"
          blob_name: "out_0"
        }
      }
      bn_in_op2lbi {
        key: "out"
        value {
          op_name: "_MathOpGraph_0_output.0.0_2"
          blob_name: "out"
        }
      }
    }
  }
  op_name2arg_signature {
    key: "m-fill_-0"
    value {
      bn_in_op2lbi {
        key: "in_0"
        value {
          op_name: "_MathOpGraph_0_input.0.0_2"
          blob_name: "out"
        }
      }
      bn_in_op2lbi {
        key: "out_0"
        value {
          op_name: "m-fill_-0"
          blob_name: "out_0"
        }
      }
    }
  }
}
module_name2module_conf {
  key: "MathOpGraph_0"
  value {
    name: "MathOpGraph_0"
    ops: "_MathOpGraph_0_input.0.0_2"
    ops: "_MathOpGraph_0_output.0.0_2"
  }
}
module_name2module_conf {
  key: "m"
  value {
    name: "m"
    ops: "m-fill_-0"
  }
}
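(Side note: the dump above answers the earlier question. Even though fill_ mutates its tensor in eager mode, at the job level it is lowered to an ordinary node with a distinct input edge and a fresh output edge: "_MathOpGraph_0_input.0.0_2/out" in, "m-fill_-0/out_0" out. A toy sketch of that lowering idea, with all names and structures hypothetical rather than oneflow internals:)

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A toy graph node: pure op with named input/output edges."""
    name: str
    op_type: str
    inputs: list
    outputs: list
    attrs: dict = field(default_factory=dict)

def lower_inplace_fill(graph, tensor_edge, value):
    """Rewrite `tensor.fill_(value)` as a pure node; later uses of the
    tensor should be redirected to the node's fresh output edge."""
    node_name = f"fill_{len(graph)}"
    out_edge = f"{node_name}/out_0"
    graph.append(Node(
        name=node_name,
        op_type="fill_",
        inputs=[tensor_edge],
        outputs=[out_edge],
        attrs={"floating_value": value, "is_floating_value": True},
    ))
    return out_edge  # downstream ops consume this edge, not the input

graph = []
new_edge = lower_inplace_fill(graph, "input/out", 10.0)
```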

BBuf (Contributor) commented Sep 22, 2022

@CPFLAME All of the T5 ops are now supported.

BBuf closed this as completed Sep 22, 2022