tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)
Computes a 2-D convolution given 4-D input and filter tensors.
The input tensor has shape [batch, in_height, in_width, in_channels].
The filter/kernel tensor has shape [filter_height, filter_width, in_channels, out_channels].
The operation proceeds as follows:
1. Flattens the 4-D filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
2. Extracts image patches from the input's receptive fields to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
3. For each patch, right-multiplies the filter matrix and the image patch vector.
Must have strides[0] = strides[3] = 1. For the most common case, where the horizontal and vertical strides are the same, strides = [1, stride, stride, 1].
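The three steps above can be sketched in NumPy (a minimal illustration assuming 'VALID' padding and a single stride value; `conv2d_sketch` is a hypothetical helper written for this note, not part of TensorFlow):

```python
import numpy as np

def conv2d_sketch(inp, filt, stride=1):
    # inp:  [batch, in_height, in_width, in_channels]
    # filt: [filter_height, filter_width, in_channels, out_channels]
    b, ih, iw, ic = inp.shape
    fh, fw, _, oc = filt.shape
    oh = (ih - fh) // stride + 1
    ow = (iw - fw) // stride + 1
    # Step 1: flatten the filter to [fh*fw*ic, oc]
    w = filt.reshape(fh * fw * ic, oc)
    # Step 2: extract image patches into a virtual tensor [b, oh, ow, fh*fw*ic]
    patches = np.empty((b, oh, ow, fh * fw * ic))
    for i in range(oh):
        for j in range(ow):
            y0, x0 = i * stride, j * stride
            patches[:, i, j, :] = inp[:, y0:y0 + fh, x0:x0 + fw, :].reshape(b, -1)
    # Step 3: right-multiply each patch vector by the filter matrix
    return patches @ w

x = np.random.rand(2, 5, 5, 3)
k = np.random.rand(3, 3, 3, 4)
y = conv2d_sketch(x, k)
print(y.shape)  # (2, 3, 3, 4)
```

Note the output shape follows the 'VALID' formula out_height = (in_height - filter_height) // stride + 1.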
tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)
Performs max pooling on the input.
- value: a 4-D tensor of shape [batch, height, width, channels] with type tf.float32.
- ksize: the size of the pooling window for each dimension of the input tensor.
- strides: the stride of the sliding window for each dimension of the input tensor.
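To illustrate how ksize and strides interact, here is a minimal NumPy sketch of max pooling (assuming 'VALID' padding and a square window; `max_pool_sketch` is a hypothetical helper, not the TensorFlow implementation):

```python
import numpy as np

def max_pool_sketch(value, ksize=2, stride=2):
    # value: [batch, height, width, channels]; mimics tf.nn.max_pool with
    # ksize=[1, ksize, ksize, 1], strides=[1, stride, stride, 1], 'VALID' padding.
    b, h, w, c = value.shape
    oh = (h - ksize) // stride + 1
    ow = (w - ksize) // stride + 1
    out = np.empty((b, oh, ow, c))
    for i in range(oh):
        for j in range(ow):
            y0, x0 = i * stride, j * stride
            # take the maximum over the ksize x ksize window, per channel
            out[:, i, j, :] = value[:, y0:y0 + ksize, x0:x0 + ksize, :].max(axis=(1, 2))
    return out

v = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)
print(max_pool_sketch(v)[0, :, :, 0])  # each 2x2 block reduced to its maximum
```

With a 2x2 window and stride 2, each non-overlapping 2x2 block of the 4x4 input contributes its maximum, giving a 2x2 output.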