The PyLayer Mechanism in PaddlePaddle's Dynamic Graph

1. Main Usage

Below is the usage example from the official documentation:

import paddle
from paddle.autograd import PyLayer

# Inherit from PyLayer
class cus_tanh(PyLayer):
    @staticmethod
    def forward(ctx, x, func1, func2=paddle.square):
        # ctx is a context object that stores some objects for backward.
        ctx.func = func2
        y = func1(x)
        # Pass tensors to backward.
        ctx.save_for_backward(y)
        return y

    @staticmethod
    # forward has only one output, so there is only one gradient in the input of backward.
    def backward(ctx, dy):
        # Get the tensors passed by forward.
        y, = ctx.saved_tensor()
        grad = dy * (1 - ctx.func(y))
        # forward has only one input, so only one gradient tensor is returned.
        return grad

data = paddle.randn([2, 3], dtype="float64")
data.stop_gradient = False
z = cus_tanh.apply(data, func1=paddle.tanh)
z.mean().backward()

print(data.grad)
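
As a quick sanity check (a sketch assuming the example above has just run), the custom gradient should match what autograd produces for paddle.tanh directly, since y = tanh(x) and d/dx tanh(x) = 1 - tanh(x)^2:

# Compare against the built-in tanh gradient.
data2 = data.detach().clone()
data2.stop_gradient = False
paddle.tanh(data2).mean().backward()
print(paddle.allclose(data.grad, data2.grad))  # expect True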

PyLayer usage must follow certain conventions (a concrete sketch follows this list):

  • The subclass must contain static forward and backward functions, and their first argument must be a PyLayerContext.
  • If a return value of backward corresponds to a forward input Tensor that requires gradients, that return value must be a Tensor.
  • The number of Tensors input to backward must equal the number of Tensors output by forward.
  • If you need forward's input Tensors inside backward, save them in forward via PyLayerContext's save_for_backward method, then retrieve them in backward.
  • The number of Tensors output by backward must equal the number of Tensors input to forward.
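To make these rules concrete, here is a minimal two-input sketch (ScaledAdd is a hypothetical example, not a framework API): forward has one output, so backward receives exactly one gradient, and forward has two input Tensors, so backward returns exactly two gradients in the same order.

import paddle
from paddle.autograd import PyLayer

class ScaledAdd(PyLayer):
    @staticmethod
    def forward(ctx, x, y, alpha=2.0):
        # Non-Tensor state can be stashed directly on the context.
        ctx.alpha = alpha
        return x + alpha * y  # one output

    @staticmethod
    def backward(ctx, dz):
        # One forward output -> one incoming gradient.
        # Two forward input Tensors -> two returned gradients, in order.
        return dz, dz * ctx.alpha

x = paddle.randn([3])
x.stop_gradient = False
y = paddle.randn([3])
y.stop_gradient = False
ScaledAdd.apply(x, y).sum().backward()
print(x.grad, y.grad)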

2. Execution Mechanism

2.1 End-to-End Execution Flow

Take the following example:

import paddle
from paddle.autograd import PyLayer

class Tanh(PyLayer):
    @staticmethod
    def forward(ctx, x):
        print("in forward")
        return x+x

    @staticmethod
    def backward(ctx, grad):
        print("in backwarad")
        return grad


x = paddle.ones([1], dtype="float64")
x.stop_gradient = False
out = Tanh.apply(x)[0]
print("after apply")
out.backward()
print("after backward")
print(x.grad)

Running the following command reveals the execution flow: GLOG_vmodule=eager_py_layer=6,py_layer_node=6 python test_pylayer.py. The log is as follows:

[eager_py_layer.cc:132] Begin run PyLayer apply...
[eager_py_layer.cc:144] PyLayer construct PyLayerContext finish...
[eager_py_layer.cc:247] PyLayer forward args is ready, begin call user's forward function...
in forward
[eager_py_layer.cc:376] PyLayer forward function finish...
[eager_py_layer.cc:442] PyLayer construct backward node finish...
after apply
[py_layer_node.cc:38] Running Eager Backward Node: GradNodePyLayer_Tanh_backward
[py_layer_node.cc:98] PyLayer backward args is ready, begin call user's backward function...
in backward
[py_layer_node.cc:116] PyLayer backward function finish...
[py_layer_node.h:46] Do nothing here now
after backward

Tensor(shape=[1], dtype=float64, place=Place(gpu:0), stop_gradient=False,
       [1.])

2.2 Code Breakdown

2.2.1 The PyLayer Class

PyLayer inherits from core.eager.PyLayer and exposes two interfaces that must be overridden:

class EagerPyLayer(
        with_mateclass(EagerPyLayerMeta, core.eager.PyLayer,
                       EagerPyLayerContext)):

    @staticmethod
    def forward(ctx, *args, **kwargs):
         raise NotImplementedError(
            "You must implement the forward function for PyLayer.")

    @staticmethod
    def backward(ctx, *args):
         raise NotImplementedError(
            "You must implement the backward function for PyLayer.")

2.2.2 core.eager.PyLayer

The C++ side defines a PyLayerObject struct:

typedef struct {
  PyObject_HEAD PyObject* container;
  bool container_be_packed;
  std::shared_ptr<egr::UnPackHookBase> unpack_hook;
  PyObject* non_differentiable;
  PyObject* not_inplace_tensors;
  bool materialize_grads;
  std::vector<bool> forward_input_tensor_is_duplicable;
  std::vector<bool> forward_output_tensor_is_duplicable;
  std::weak_ptr<egr::GradNodePyLayer> grad_node;
} PyLayerObject;
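Roughly speaking, container holds the Tensors saved via save_for_backward, and grad_node weakly references the GradNodePyLayer created during apply; the remaining fields record per-Tensor flags (non-differentiable outputs, inplace handling, duplicability) used when wiring up the backward graph. This summary is inferred from the surrounding code and is not exhaustive.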

The BindEagerPyLayer() function binds the key attributes and methods:

void BindEagerPyLayer(PyObject* module) {
  auto heap_type = reinterpret_cast<PyHeapTypeObject*>(
      PyType_Type.tp_alloc(&PyType_Type, 0));
  heap_type->ht_name = ToPyObject("PyLayer");
  heap_type->ht_qualname = ToPyObject("PyLayer");
  auto type = &heap_type->ht_type;
  type->tp_name = "PyLayer";
  type->tp_basicsize = sizeof(PyLayerObject);
  type->tp_dealloc = (destructor)PyLayerDealloc;
  type->tp_methods = pylayer_methods;    // <----- core methods
  type->tp_getset = pylayer_properties;  // <----- core properties
  type->tp_new = (newfunc)PyLayerNew;
  // omitted
}

Only two methods are bound on the backend: one is name(), the other is apply(self, *args, **kwargs); the latter carries PyLayer's core logic (sketched in pseudocode after this list):

  • First, the user-supplied args and kwargs are parsed.
  • Then the user's forward function is called: outputs = PyObject_Call(forward_fn, forward_args, kwargs);
  • A backward grad_node is created: GradNodePyLayer.
  • When the user calls loss.backward(), GradNodePyLayer::operator() eventually executes,
    • which calls the user's backward function from the C++ side: auto outputs = PyObject_CallObject(backward_fn, backward_args);
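This flow can be condensed into Python-style pseudocode (illustrative names only, not the actual C++ implementation):

# Rough sketch of what apply() does on the C++ side.
def pylayer_apply(cls, *args, **kwargs):
    ctx = PyLayerContext()                        # construct the context
    outputs = cls.forward(ctx, *args, **kwargs)   # call the user's forward
    if any_input_requires_grad(args):
        # Build the backward node and wire it into the autograd graph
        # so that loss.backward() can later reach the user's backward.
        grad_node = GradNodePyLayer(ctx, num_fwd_outputs, num_fwd_inputs)
        connect_to_autograd_graph(grad_node, args, outputs)
    return outputs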

The backward node GradNodePyLayer inherits from GradNodeBase; its core members are:

class GradNodePyLayer : public GradNodeBase {
 public:
  GradNodePyLayer(PyObject* ctx,
                  size_t bwd_in_slot_num,
                  size_t bwd_out_slot_num)
      : GradNodeBase(bwd_in_slot_num, bwd_out_slot_num) {
    ctx_ = ctx;
    Py_INCREF(ctx_);
  }
  
  private:
  PyObject* ctx_{nullptr};    // <----- records the Python-side PyLayer object, used to obtain the backward function pointer
  std::vector<std::vector<phi::DenseTensorMeta>> forward_outputs_meta_;
  std::vector<std::vector<paddle::platform::Place>> forward_outputs_place_;
};
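
When the autograd engine reaches this node, it conceptually does the following (again a Python-style sketch with illustrative names, not the actual C++ implementation):

# Rough sketch of GradNodePyLayer::operator().
def grad_node_pylayer_call(ctx_, incoming_grads):
    backward_fn = get_backward_fn(ctx_)    # resolved through the recorded ctx_
    outputs = backward_fn(ctx_, *incoming_grads)
    # Normalize: one gradient per forward input Tensor.
    return outputs if isinstance(outputs, tuple) else (outputs,)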

3. Static Graph

Under static graph mode, there is the py_func_op operator implemented four years ago (the framework also contains a PyLayerOp, covered below). Its OpMaker is defined as follows:

class PyFuncOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  void Make() override {
    AddInput("X", "Inputs of py_func op.").AsDuplicable();
    AddOutput("Out", "Outputs of py_func op").AsDuplicable();
    AddAttr<int>(kForwardPythonCallableId,
                 "Index of registered forward Python function.")
        .SetDefault(0);
    AddAttr<int>(kBackwardPythonCallableId,
                 "Index of registered backward Python function.")
        .SetDefault(-1);
    AddAttr<std::vector<std::string>>(kPyFuncBackwardSkipVars,
                                      "Unused forward in/out in backward op")
        .SetDefault(std::vector<std::string>());
    AddComment(R"DOC("PyFunc Op")DOC");
  }
};

3.1 Main Usage

Below is a usage example from the unit tests:

def simple_fc_net(img, label, use_py_func_op):
    hidden = img
    for idx in range(4):
        hidden = fluid.layers.fc(
            hidden,
            size=200,
            bias_attr=fluid.ParamAttr(initializer=fluid.initializer.Constant(
                value=1.0)))
        if not use_py_func_op:
            hidden = fluid.layers.tanh(hidden)
        else:
            new_hidden = fluid.default_main_program().current_block(
            ).create_var(name='hidden_{}'.format(idx),
                         dtype='float32',
                         shape=hidden.shape)
            hidden = fluid.layers.py_func(func=tanh,    # <------ forward function
                                          x=hidden,
                                          out=new_hidden,
                                          backward_func=tanh_grad,  # <------ backward function
                                          skip_vars_in_backward_input=hidden)

    prediction = fluid.layers.fc(hidden, size=10, act='softmax')
    if not use_py_func_op:
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
    else:
        loss = fluid.default_main_program().current_block().create_var(
            name='loss', dtype='float32', shape=[-1, 1])
        loss = fluid.layers.py_func(func=cross_entropy,       # <------ forward function
                                    x=[prediction, label],
                                    out=loss,
                                    backward_func=cross_entropy_grad,   # <------ backward function
                                    skip_vars_in_backward_input=loss)

        dummy_var = fluid.default_main_program().current_block().create_var(
            name='test_tmp_var', dtype='float32', shape=[1])
        fluid.layers.py_func(func=dummy_func_with_no_input,
                             x=None,
                             out=dummy_var)
        loss += dummy_var
        fluid.layers.py_func(func=dummy_func_with_no_output, x=loss, out=None)

        loss_out = fluid.default_main_program().current_block().create_var(
            dtype='float32', shape=[-1, 1])
        dummy_var_out = fluid.default_main_program().current_block().create_var(
            dtype='float32', shape=[1])
        fluid.layers.py_func(func=dummy_func_with_multi_input_output,
                             x=(loss, dummy_var),
                             out=(loss_out, dummy_var_out))
        assert loss == loss_out and dummy_var == dummy_var_out, \
            "py_func failed with multi input and output"

        fluid.layers.py_func(func=dummy_func_with_multi_input_output,
                             x=[loss, dummy_var],
                             out=[loss_out, dummy_var_out])
        assert loss == loss_out and dummy_var == dummy_var_out, \
            "py_func failed with multi input and output"

    loss = paddle.mean(loss)
    return loss

Unit testing confirms that training with py_func_op under static graph mode works, and the model file can be exported. Loading it back, however, fails with an error saying the function definition cannot be found:

    InvalidArgumentError: Invalid python callable id 0, which should be less than 0.
      [Hint: Expected i < g_py_callables.size(), but received i:0 >= g_py_callables.size():0.] (at /workspace/paddle-fork/paddle/fluid/operators/py_func_op.cc:52)
      [operator < py_func > error]

The printed program shows the inserted py_func operators:

 {Out=['hidden_0']} = py_func(inputs={X=['fc_0.tmp_1']}, backward_callable_id = 1, backward_skip_vars = ['fc_0.tmp_1'], forward_callable_id = 0, op_device = , op_namescope = /, op_role = 0, op_role_var = [], with_quant_attr = False)
 {Out=['hidden_1']} = py_func(inputs={X=['fc_1.tmp_1']}, backward_callable_id = 3, backward_skip_vars = ['fc_1.tmp_1'], forward_callable_id = 2, op_device = , op_namescope = /, op_role = 0, op_role_var = [], with_quant_attr = False)
 {Out=['test_tmp_var']} = py_func(inputs={X=[]}, backward_callable_id = -1, backward_skip_vars = [], forward_callable_id = 10, op_device = , op_namescope = /, op_role = 0, op_role_var = [], with_quant_attr = False)
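
Note how each py_func call consumes consecutive registry ids (forward_callable_id = 0 with backward_callable_id = 1, then 2 and 3, and so on), while an op created without a backward_func gets backward_callable_id = -1.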

3.2 Execution Mechanism

fluid.layers.nn.py_func is defined as follows (abridged; helper construction and argument normalization are omitted):

def py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None):
    # Register the forward function
    fwd_func_id = PyFuncRegistry(func).id
    # Register the backward function, if any
    bwd_func_id = PyFuncRegistry(backward_func).id if backward_func is not None else -1
    
    helper.append_op(type='py_func',
                     inputs={'X': x},
                     outputs={'Out': out_list},
                     attrs={
                         'forward_callable_id': fwd_func_id,
                         'backward_callable_id': bwd_func_id,
                         'backward_skip_vars': list(backward_skip_vars)
                     })
    return out

As shown, the frontend API uses PyFuncRegistry to register the forward and backward functions, generating a unique id for each and passing it to the py_func operator. Its implementation:

class PyFuncRegistry(object):
    _register_funcs = []   # <--- records all registered functions

    def __init__(self, func):
        self._func = func
        self._id = core._append_python_callable_object_and_return_id(self)  # <--- interacts with the C++ side
        PyFuncRegistry._register_funcs.append(self)

core._append_python_callable_object_and_return_id is bound via pybind to the AppendPythonCallableObjectAndReturnId function, whose sole job is to record a PyFuncRegistry object (which internally holds the associated forward or backward function):

static std::vector<py::object> g_py_callables;  // <---- global static variable

size_t AppendPythonCallableObjectAndReturnId(const py::object &py_obj) {
  g_py_callables.emplace_back(py_obj);
  return g_py_callables.size() - 1;
}

As noted earlier, loading an offline-exported model reports that the corresponding function cannot be found. Fundamentally this is because g_py_callables is never serialized: when the py_func operator executes, it first fetches the corresponding PyObject:

// Return py::object* instead of py::object
// Returning py::object would cause reference count increasing
// but without GIL, reference count in Python may not be safe
static py::object *GetPythonCallableObject(size_t i) {
  PADDLE_ENFORCE_LT(
      i,
      g_py_callables.size(),
      platform::errors::InvalidArgument(
          "Invalid python callable id %d, which should be less than %d.",
          i,
          g_py_callables.size()));
  return &g_py_callables[i];
}
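
The failure mode is easy to reproduce with a few lines of plain Python (illustrative names, not the framework API): the ids index a process-local list, so in a freshly started process every lookup fails.

_g_py_callables = []  # process-local registry; never serialized

def append_python_callable(obj):
    _g_py_callables.append(obj)
    return len(_g_py_callables) - 1

def get_python_callable(i):
    # A fresh process starts with an empty registry, so any saved id
    # raises here, mirroring the InvalidArgumentError above.
    if i >= len(_g_py_callables):
        raise IndexError(f"Invalid python callable id {i}, which should "
                         f"be less than {len(_g_py_callables)}.")
    return _g_py_callables[i]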

From the framework's py_func implementation, two characteristics stand out:

  1. It is a kernel-less operator: it inherits directly from OperatorBase, much like the control-flow operators.
  2. It needs an extra utility class to manage registered functions, and these registrations currently cannot be serialized.

4. The PyLayer Operator

Besides the static-graph py_func above, a py_layer_op was added to the framework about a year ago to support user-defined Python-side ops in dynamic graph mode; see the related PR for background.
Its operator definition:

class PyLayerOpMaker : public framework::OpProtoAndCheckerMaker {
 public:
  void Make() override {
    AddInput("X", "Inputs of PyLayer op.").AsDuplicable();
    AddOutput("Out", "Outputs of PyLayer op").AsDuplicable();
    AddComment(R"DOC("PyLayer Op")DOC");
  }
};

void PyLayerGradOpMaker<paddle::imperative::OpBase>::Apply(
    GradOpPtr<paddle::imperative::OpBase> grad_op) const {
  grad_op->SetType("py_layer");
  auto &inner_op = grad_op->InnerOp();
  auto py_layer_op_const = dynamic_cast<const PyLayerOp *>(&inner_op);

  if (py_layer_op_const) {
    auto py_layer_op = const_cast<PyLayerOp *>(py_layer_op_const);
    py_layer_op->SetPyLayerContext(py_context_);

  } else {
    PADDLE_THROW(platform::errors::Fatal(
        "PyLayerGradOpMaker can't cast %s to PyLayerOp*.",
        typeid(&inner_op).name()));
  }

  auto fwd_out_grads = this->OutputGrad("Out");
  using return_type = decltype(fwd_out_grads);
  return_type bwd_ins;

  bwd_ins.insert(bwd_ins.begin(), fwd_out_grads.begin(), fwd_out_grads.end());

  auto bwd_outs = this->InputGrad("X", false);

  grad_op->SetInput("X", bwd_ins);
  grad_op->SetOutput("Out", bwd_outs);
}

Its kernel implementation:

template <typename DeviceContext, typename T>
class PyLayerOpKernel : public framework::OpKernel<T> {
 public:
  void Compute(const framework::ExecutionContext &ctx) const override {
    auto &op_ = ctx.GetOp();
    auto const_pylayer_op = dynamic_cast<const PyLayerOp *>(&op_);
    if (const_pylayer_op) {
      auto pylayer_op = const_cast<PyLayerOp *>(const_pylayer_op);

      // Release context after executing the compute
      auto py_layer_context = pylayer_op->ReleasePyLayerContext();
      py::object bk_ctx(py::handle(py_layer_context->GetMutableCtx()), true);
      auto &input_vars = ctx.MultiInputVar("X");
      auto output_vars = ctx.MultiOutputVar("Out");
      RunPyObject(&bk_ctx, input_vars, &output_vars);

    } else {
      PADDLE_THROW(platform::errors::Fatal(
          "PyLayerOpKernel can't cast %s to PyLayer*.", typeid(&op_).name()));
    }
  }
};

Note in particular that PyLayerOpKernel here is responsible only for executing the backward pass, never the forward pass (see the sketch after this list):

  • In dynamic graph mode, the forward is triggered through the Apply() function, implemented via pybind; see the definitions in py_layer_fwd.h.
  • After the forward finishes, a CreateGradOpNode function is called to create a py_layer operator that is responsible for the backward.
    • RunPyObject always fetches auto py_function = py_object->attr("backward"); which is why PyLayerOpKernel only ever executes the backward.
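
Putting the pieces together, the legacy py_layer flow looks roughly like this (Python-style pseudocode with illustrative names, not the actual implementation):

def py_layer_op_apply(cls, *args):
    ctx = cls()                              # context later handed to the grad op
    outputs = cls.forward(ctx, *args)        # forward runs directly in Python via pybind
    # Record a py_layer op in the graph; when backward runs, its kernel
    # calls RunPyObject, which always resolves ctx's "backward" attribute.
    create_grad_op_node("py_layer", ctx, args, outputs)
    return outputs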