Relay Operator Registration (invoked from pytorch.py)

1. Relay Operator Registration (C++ layer)

(a) Operator attribute registration

Path: src/relay/op/nn/nn.cc

RELAY_REGISTER_OP("hardswish")
  .set_num_inputs(1)
  .add_argument("data", "Tensor", "Input tensor.")
  .set_support_level(3)
  .add_type_rel("Identity", IdentityRel);
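
After rebuilding TVM with this registration compiled in, the op is discoverable from Python by name. A minimal sanity check (nothing here is specific to hardswish beyond the name):

import tvm

# Op.get raises if RELAY_REGISTER_OP("hardswish") was not compiled into the build
op = tvm.ir.Op.get("hardswish")
print(op.name, op.num_inputs)   # -> hardswish 1
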
(b) Call node construction

Path: src/relay/op/nn/activation.cc

TVM_REGISTER_GLOBAL("relay.op._make.hardswish")
  .set_body_typed([](Expr data) {
    static const Op& op = Op::Get("hardswish");
    return Call(op, {data}, Attrs(), {});
  });
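
This global is what the Python-side _make.hardswish resolves to over the FFI. A quick check, assuming the name above and a rebuilt runtime:

import tvm
from tvm import relay

# get_global_func raises if the global above was never registered
make_fn = tvm.get_global_func("relay.op._make.hardswish")
call = make_fn(relay.var("x", shape=(1, 4), dtype="float32"))
print(call.op.name)   # -> hardswish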

2. TOPI Compute Implementation (C++ layer)

(c) TOPI registration entry point

Path: src/topi/elemwise.cc

TVM_REGISTER_GLOBAL("topi.hardswish")
  .set_body([](TVMArgs args, TVMRetValue* rv) {
    *rv = hardswish(args[0]);
  });
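
Like the Relay constructor, the TOPI entry is reachable from Python through the FFI. A minimal sketch under the same naming assumption:

import tvm
from tvm import te

f = tvm.get_global_func("topi.hardswish")   # raises if the global is missing
x = te.placeholder((1, 16), name="x", dtype="float32")
y = f(x)                                    # te.Tensor produced by the C++ kernel
print(y.shape, y.dtype)
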
(d) Math kernel implementation

Path: include/tvm/topi/nn.h

inline Tensor hardswish(const Tensor& x, std::string name = "T_hardswish") {
  auto three = make_const(x->dtype, 3);
  auto six = make_const(x->dtype, 6);
  return compute(
    x->shape,
    [&](const Array<Var>& i) {
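      // hardswish(x) = x * min(max(x + 3, 0), 6) / 6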
      return x(i) * max(min(x(i) + three, six), 0) / six;
    },
    name, kElementWise
  );
}
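
Before committing to the C++ header, the same math can be prototyped and verified numerically in Python with te.compute. The sketch below mirrors the kernel above and checks it against NumPy; it does not depend on any of the registrations in this post:

import numpy as np
import tvm
from tvm import te

def hardswish_sketch(x):
    # te.compute equivalent of the C++ kernel: x * min(max(x + 3, 0), 6) / 6
    three = tvm.tir.const(3.0, x.dtype)
    six = tvm.tir.const(6.0, x.dtype)
    zero = tvm.tir.const(0.0, x.dtype)
    return te.compute(
        x.shape,
        lambda *i: x(*i) * te.max(te.min(x(*i) + three, six), zero) / six,
        name="T_hardswish",
    )

x = te.placeholder((1, 64), name="x", dtype="float32")
y = hardswish_sketch(x)
s = te.create_schedule(y.op)
fn = tvm.build(s, [x, y], target="llvm")

a = np.random.uniform(-6, 6, size=(1, 64)).astype("float32")
b = tvm.nd.empty((1, 64), dtype="float32")
fn(tvm.nd.array(a), b)
np.testing.assert_allclose(b.numpy(), a * np.clip(a + 3, 0, 6) / 6, rtol=1e-5)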

3. Python Interface Layer

(e) Relay Python API

Path: python/tvm/relay/op/nn/_nn.py

def hardswish(data):
    return _make.hardswish(data)
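
With the wrapper in place, the op behaves like any other Relay expression. A short sketch (relay.op.nn.hardswish is assumed to be the wrapper above, re-exported from the relay op namespace):

import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
y = relay.op.nn.hardswish(x)             # hypothetical: the wrapper defined above
mod = tvm.IRModule.from_expr(relay.Function([x], y))
mod = relay.transform.InferType()(mod)   # the "Identity" type relation keeps the input type
print(mod["main"])
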
(f) TOPI generic interface

Path: python/tvm/topi/nn.py

from . import cpp  # FFI bridge to the C++ "topi.*" globals

@tvm.target.generic_func
def hardswish(x):
    return cpp.hardswish(x)
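
tvm.target.generic_func makes this entry point overridable per target, which is exactly what the NPU registration in section 5(k) hooks into. The dispatch mechanism itself can be seen with a tiny self-contained example (unrelated to hardswish):

import tvm

@tvm.target.generic_func
def demo(a):
    return a + 1          # fallback used when no target-specific version matches

@demo.register("cuda")
def demo_cuda(a):
    return a + 2          # picked when the current target advertises the "cuda" key

with tvm.target.Target("cuda"):
    print(demo(1))        # -> 3, dispatched to the cuda override
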

4. Compute and Schedule Registration

(g) Compute registration

Path: python/tvm/relay/op/strategy/generic.py

@register_compute("hardswish")
def hardswish_compute(attrs, inputs, out_type):
    return [topi.hardswish(inputs[0])]

(h) Schedule strategy

Path: python/tvm/relay/op/op.py

register_broadcast_schedule("hardswish")
register_shape_func("hardswish", False, elemwise_shape_func)
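
Once compute, schedule, and shape function are all registered, the op can be compiled and executed end to end. A rough smoke test, assuming every registration above is in place and relay.op.nn.hardswish is the wrapper from section 3(e):

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

x = relay.var("x", shape=(1, 16), dtype="float32")
y = relay.op.nn.hardswish(x)   # hypothetical wrapper from section 3(e)
mod = tvm.IRModule.from_expr(relay.Function([x], y))

lib = relay.build(mod, target="llvm")
rt = graph_executor.GraphModule(lib["default"](tvm.cpu()))

data = np.random.uniform(-6, 6, size=(1, 16)).astype("float32")
rt.set_input("x", data)
rt.run()
out = rt.get_output(0).numpy()
np.testing.assert_allclose(out, data * np.clip(data + 3, 0, 6) / 6, rtol=1e-5)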

5. Hardware-Specific Implementation

(i) NPU support declaration

Path: src/relay/backend/contrib/npu/src/op_map.cc

const std::vector<std::string> _NPU_OP = {
  ...,
  "hardswish"  // 添加算子名
};
(j) NPU kernel implementation

Path: python/tvm/relay/backend/contrib/npu/ops.py

def custom_hardswish(x):
    x1 = custom_add(x, te.extern_scalar_value(3.0))
    x2 = custom_relu(x1)
    return npu_hardswish(x2, ...)

(k) NPU strategy registration

Path: python/tvm/relay/op/strategy/npu.py

@hardswish.register("npu")
def hardswish_npu(x):
    return npu_api.custom_hardswish(x)

6. Frontend Framework Integration

(l) PyTorch converter

Path: python/tvm/relay/frontend/pytorch.py

def _hardswish():
    def _impl(inputs, input_types):
        return _op.hardswish(inputs[0])
    return _impl
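
The converter only takes effect once it is added to the frontend's convert map under the TorchScript op name (e.g. an "aten::hardswish": _hardswish() entry). After that, a traced torch.nn.Hardswish should import cleanly; a hedged end-to-end sketch:

import torch
import tvm
from tvm import relay

model = torch.nn.Hardswish().eval()
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# assumes the "aten::hardswish" -> _hardswish() entry has been added to the
# convert map in python/tvm/relay/frontend/pytorch.py
mod, params = relay.frontend.from_pytorch(traced, [("x", (1, 3, 224, 224))])
print(mod["main"])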

Summary of Key File Paths

| Module                  | Key paths                                          |
|-------------------------|----------------------------------------------------|
| Relay core registration | src/relay/op/nn/{nn.cc, activation.cc}             |
| TOPI compute            | {include,src}/topi/{nn.h, elemwise.cc}             |
| Python interface        | python/tvm/{relay/op/nn/_nn.py, topi/nn.py}        |
| Strategy registration   | python/tvm/relay/op/strategy/{generic.py, npu.py}  |
| Hardware backend        | src/relay/backend/contrib/npu/                     |
| Frontend integration    | python/tvm/relay/frontend/pytorch.py               |

Development Flow Overview

Relay registration → TOPI implementation → Python interface → Hardware backend → Frontend framework

With this clear separation of paths, TVM achieves:

  1. Modular development: code for each layer is physically isolated
  2. Extensibility: supporting new hardware only requires adding an implementation in the corresponding directory
  3. Maintainability: related functionality lives in one place
