Convolutional Neural Networks: Quiz 1

Question 1
What do you think applying this filter to a grayscale image will do?
[Image: the filter in question]

Detect horizontal edges

Detect 45 degree edges

Detect vertical edges

Detect image contrast

Explanation: As an example, draw an image like the one below, 128×128 in size.
[Image: test image]
Convolving it with the filter above gives the following result.
[Image: convolution result]
The output shows that the filter is a vertical edge detector.
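Since the quiz's filter image is missing here, the idea can be reproduced with the classic vertical-edge kernel from lecture (an assumed stand-in for the pictured filter):

```python
import numpy as np

# Assumed stand-in for the quiz's (missing) filter image:
# the classic vertical-edge kernel from lecture.
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])

def conv2d_valid(image, k):
    """'Valid' 2-D cross-correlation: no padding, stride 1."""
    h, w = image.shape
    fh, fw = k.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * k)
    return out

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 10.0

response = conv2d_valid(img, kernel)
# The response is nonzero only in the columns straddling the edge,
# which is why this kernel acts as a vertical edge detector.
```

A horizontal edge (e.g. a dark top half over a bright bottom half) would produce zero response everywhere with this kernel, since each row of the window is constant.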


Question 2
Suppose your input is a 300 by 300 color (RGB) image, and you are not using a convolutional network. If the first hidden layer has 100 neurons, each one fully connected to the input, how many parameters does this hidden layer have (including the bias parameters)?

9,000,001

9,000,100

27,000,001

27,000,100

Explanation: W has shape [n_l, n_{l-1}] = [100, 300*300*3], giving 27,000,000 weights; b has shape [n_l, 1], giving 100 biases. In total: 27,000,100.
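The count can be checked with a short snippet (a sketch of the arithmetic above, not code from the original post):

```python
# Dense layer parameter count: W is [n_l, n_{l-1}], b is [n_l, 1].
n_in = 300 * 300 * 3   # flattened RGB input
n_out = 100            # hidden neurons
params = n_out * n_in + n_out
print(params)  # 27000100
```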


Question 3
Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with 100 filters that are each 5x5. How many parameters does this hidden layer have (including the bias parameters)?

2501

2600

7500

7600

Explanation: Each filter spans all 3 input channels, so it has 5*5*3 = 75 weights plus 1 bias, i.e., 76 parameters. With 100 filters, the total is (5*5*3 + 1) * 100 = 7600. Note that, unlike the dense layer in Question 2, the count does not depend on the 300×300 input size.
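The same count as a snippet (a sketch of the arithmetic, with the channel dimension made explicit):

```python
f = 5          # filter height/width
n_c_prev = 3   # input channels (RGB)
n_filters = 100
# Each filter: f*f*n_c_prev weights + 1 bias.
params = (f * f * n_c_prev + 1) * n_filters
print(params)  # 7600
```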


Question 4
You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7x7, using a stride of 2 and no padding. What is the output volume?

16x16x32

29x29x32

29x29x16

16x16x16
Explanation: (n - f)/s + 1 = (63 - 7)/2 + 1 = 29; the output depth equals the number of filters, 32, so the output volume is 29x29x32.
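The output-size formula can be wrapped in a tiny helper (a hypothetical utility, not from the original post):

```python
def conv_output_shape(n, f, n_filters, stride=1, pad=0):
    """Spatial size: floor((n + 2p - f)/s) + 1; depth = number of filters."""
    size = (n + 2 * pad - f) // stride + 1
    return (size, size, n_filters)

shape = conv_output_shape(63, 7, 32, stride=2, pad=0)
print(shape)  # (29, 29, 32)
```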


Question 5
You have an input volume that is 15x15x8, and pad it using “pad=2.” What is the dimension of the resulting volume (after padding)?

19x19x8

17x17x10

17x17x8

19x19x12
Explanation: n + 2p = 15 + 2*2 = 19; padding does not change the depth, so the result is 19x19x8.
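The same padding operation, sketched with NumPy (padding only the spatial axes, not the channels):

```python
import numpy as np

x = np.zeros((15, 15, 8))
# Pad 2 on each side of height and width; leave the channel axis alone.
x_pad = np.pad(x, ((2, 2), (2, 2), (0, 0)))
print(x_pad.shape)  # (19, 19, 8)
```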


Question 6
You have an input volume that is 63x63x16, and convolve it with 32 filters that are each 7x7, and stride of 1. You want to use a “same” convolution. What is the padding?

1

2

3

7

Explanation: A "same" convolution requires (n + 2p - f)/s + 1 = n, i.e., (63 + 2p - 7)/1 + 1 = 63, so p = 3.
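Solving (n + 2p - f) + 1 = n for stride 1 gives p = (f - 1)/2, independent of n, as this snippet checks (a sketch of the algebra above):

```python
# "Same" convolution, stride 1: (n + 2p - f) + 1 = n  =>  p = (f - 1) / 2.
# Works exactly when f is odd, as it is here.
f = 7
p = (f - 1) // 2
print(p)  # 3

# Sanity check against the output-size formula with n = 63:
n = 63
assert (n + 2 * p - f) // 1 + 1 == n
```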


Question 7
You have an input volume that is 32x32x16, and apply max pooling with a stride of 2 and a filter size of 2. What is the output volume?

16x16x16

15x15x16

32x32x8

16x16x8
Explanation: (n - f)/s + 1 = (32 - 2)/2 + 1 = 16; pooling is applied per channel, so the depth is unchanged and the output is 16x16x16.
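The same formula as a helper (hypothetical, analogous to the convolution case but with the channel count preserved):

```python
def pool_output_shape(n, n_c, f, stride):
    """Pooling: spatial size floor((n - f)/s) + 1; channels unchanged."""
    size = (n - f) // stride + 1
    return (size, size, n_c)

shape = pool_output_shape(32, 16, 2, 2)
print(shape)  # (16, 16, 16)
```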


Question 8
Because pooling layers do not have parameters, they do not affect the backpropagation (derivatives) calculation.

True

False
Explanation: False. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for the layers that come before it.
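How the gradient passes through a max-pooling layer can be sketched as follows (an assumed minimal implementation, not from the course): the upstream gradient for each pooling window is routed entirely to the input position that achieved the max.

```python
import numpy as np

def maxpool_backward(x, d_out, f=2, stride=2):
    """Route each window's upstream gradient to that window's argmax.
    Sketch for a single-channel input; assumes stride == f (no overlap)."""
    dx = np.zeros_like(x)
    for i in range(d_out.shape[0]):
        for j in range(d_out.shape[1]):
            window = x[i*stride:i*stride+f, j*stride:j*stride+f]
            mask = (window == window.max())  # 1 at the max, 0 elsewhere
            dx[i*stride:i*stride+f, j*stride:j*stride+f] += mask * d_out[i, j]
    return dx

x = np.array([[1., 2.],
              [4., 3.]])          # one 2x2 window; max is 4
dx = maxpool_backward(x, np.array([[5.]]))
# All of the upstream gradient (5) flows to the position of the max.
```

So even with zero parameters, the layer still has a well-defined backward pass that earlier layers depend on.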


Question 9
In lecture we talked about “parameter sharing” as a benefit of using convolutional networks. Which of the following statements about parameter sharing in ConvNets are true? (Check all that apply.)

It allows gradient descent to set many of the parameters to zero, thus making the connections sparse.

It reduces the total number of parameters, thus reducing overfitting.

It allows a feature detector to be used in multiple locations throughout the whole input image/input volume.

It allows parameters learned for one task to be shared even for a different task (transfer learning).

Explanation: Parameter sharing reduces the total number of parameters, which helps reduce overfitting, and it allows a feature detector to be used in multiple locations throughout the whole input image/input volume. It does not make connections sparse (that is a separate property; see Question 10), and it is unrelated to transfer learning.


Question 10
In lecture we talked about “sparsity of connections” as a benefit of using convolutional layers. What does this mean?

Each layer in a convolutional network is connected only to two other layers

Each filter is connected to every channel in the previous layer.

Each activation in the next layer depends on only a small number of activations from the previous layer.

Regularization causes gradient descent to set many of the parameters to zero.

Explanation: See Question 9. Sparsity of connections means that each activation in the next layer depends on only a small number of activations from the previous layer, namely those inside the filter's receptive field.


Reference: http://blog.csdn.net/koala_tree/article/details/78458067
