Keras provides a backend function `batch_dot` for computing dot products of batched multi-dimensional tensors. The official docstring reads:
def batch_dot(x, y, axes=None):
"""Batchwise dot product.
`batch_dot` is used to compute dot product of `x` and `y` when
`x` and `y` are data in batches, i.e. in a shape of
`(batch_size, :)`.
`batch_dot` results in a tensor or variable with less dimensions
than the input. If the number of dimensions is reduced to 1,
we use `expand_dims` to make sure that ndim is at least 2.
(In other words: the function computes per-batch dot products of `x` and `y`; the two inputs must have the same batch_size. The output has fewer dimensions than the inputs, and if the result would drop to a single dimension, `expand_dims` is applied so that ndim is at least 2.)
# Arguments
x: Keras tensor or variable with `ndim >= 2`.
y: Keras tensor or variable with `ndim >= 2`.
axes: int or tuple(int, int). Target dimensions to be reduced.
In principle the axis index starts from 0 (the first position of the shape), but since the batch dimension is skipped, counting effectively starts from 1. If an integer, it names the same position in both inputs' shapes; if a tuple or list, the two entries name a position in each input separately.
Note: whichever form `axes` takes, the sizes at the two selected positions must be equal.
# Returns
A tensor with shape equal to the concatenation of `x`'s shape
(less the dimension that was summed over) and `y`'s shape
(less the batch dimension and the dimension that was summed over).
If the final rank is 1, we reshape it to `(batch_size, 1)`.
"""
The example below illustrates this:
>>> x_batch = K.ones(shape=(32, 20, 1))
>>> y_batch = K.ones(shape=(32, 30, 20))
>>> xy_batch_dot = K.batch_dot(x_batch, y_batch, axes=(1, 2))
>>> K.int_shape(xy_batch_dot)
(32, 1, 30)
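The same shape can be reproduced without Keras. Below is a hedged numpy sketch in which `batch_dot_np` is a hypothetical helper emulating `K.batch_dot(x, y, axes=(1, 2))` with `np.einsum` (summing over axis 1 of `x` and axis 2 of `y`):

```python
import numpy as np

# Hypothetical helper emulating K.batch_dot(x, y, axes=(1, 2)) with einsum.
# For x of shape (batch, a, b) and y of shape (batch, c, a), summing over
# the shared size-a axis gives a result of shape (batch, b, c).
def batch_dot_np(x, y):
    return np.einsum('bij,bki->bjk', x, y)

x_batch = np.ones((32, 20, 1))
y_batch = np.ones((32, 30, 20))
out = batch_dot_np(x_batch, y_batch)
print(out.shape)  # (32, 1, 30)
```

Each output entry is a sum of 20 products of ones, so the result is filled with the value 20.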
My understanding is that the function ultimately performs an ordinary matrix multiplication. Since the two input matrices are not directly compatible for that, it first does something like a transpose: the target axis of the left matrix is moved to the last position, and the target axis of the right matrix is moved to the first position (after the batch dimension).

In the example above, dropping batch_size, the shapes are (20, 1) and (30, 20). With axes=(1, 2) they become a (1, 20) by (20, 30) matrix multiplication, whose result is (1, 30); adding batch_size back gives (32, 1, 30).
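This transpose-then-matmul interpretation can be checked directly in numpy (a sketch assuming `np.moveaxis` and batched `np.matmul`, not the actual Keras implementation):

```python
import numpy as np

x = np.ones((32, 20, 1))   # target axis 1 (size 20)
y = np.ones((32, 30, 20))  # target axis 2 (size 20)

# Move x's target axis to the end: (32, 20, 1) -> (32, 1, 20)
x_t = np.moveaxis(x, 1, -1)
# Move y's target axis just after the batch axis: (32, 30, 20) -> (32, 20, 30)
y_t = np.moveaxis(y, 2, 1)

# Batched matrix multiplication: (32, 1, 20) @ (32, 20, 30) -> (32, 1, 30)
out = np.matmul(x_t, y_t)
print(out.shape)  # (32, 1, 30)
```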