2020-5-31 Andrew Ng - NN&DL - Week 2: Neural Network Basics (Quiz)

Reference: https://zhuanlan.zhihu.com/p/31268885

1. What does a neuron compute?

  • A neuron computes an activation function followed by a linear function (z = Wx + b)
  • A neuron computes a linear function (z = Wx + b) followed by an activation function (Correct)
  • A neuron computes a function g that scales the input x linearly (Wx + b)
  • A neuron computes the mean of all features before applying the output to an activation function
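A minimal sketch of this ordering, with a sigmoid activation chosen purely for illustration:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def neuron(x, W, b):
    z = np.dot(W, x) + b    # linear function first: z = Wx + b
    return sigmoid(z)       # activation function second: a = g(z)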

===============================================================

2. Which of these is the “Logistic Loss”?

  • $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = |y^{(i)} - \hat{y}^{(i)}|$
  • $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = \max(0,\, y^{(i)} - \hat{y}^{(i)})$
  • $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = -\big(y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)})\big)$ (Correct)
  • $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = |y^{(i)} - \hat{y}^{(i)}|^2$

See 2.3 Logistic Regression Cost Function. Using the squared error makes the optimization non-convex (gradient descent can get stuck in a local optimum), whereas the loss defined above is convex, so optimization can reach the global optimum.
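A minimal NumPy sketch of this per-example loss (illustrative only; in practice y_hat would be clipped away from 0 and 1 to avoid log(0)):

import numpy as np

def logistic_loss(y_hat, y):
    # cross-entropy loss from the formula above
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(logistic_loss(0.9, 1))   # small loss: confident and correct
print(logistic_loss(0.9, 0))   # large loss: confident and wrong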

===============================================================

3. Suppose img is a (32, 32, 3) array, representing a 32x32 image with 3 color channels: red, green, and blue. How do you reshape this into a column vector?

  • x = img.reshape((1,32 * 32 * 3))
  • x = img.reshape((32 * 32 , 3))
  • x = img.reshape((32 * 32 * 3, 1)) (Correct)
  • x = img.reshape((3,32 * 32 ))

See 2.16 A Note on Python/NumPy Vectors.
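A quick sanity check of the reshape, using a random array in place of a real image:

import numpy as np

img = np.random.randn(32, 32, 3)     # stand-in for a 32x32 RGB image
x = img.reshape((32 * 32 * 3, 1))    # flatten into one column vector
print(x.shape)                       # (3072, 1)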

===============================================================

4. Consider the following two random arrays “a” and “b”:

a = np.random.randn(2, 3) # a.shape = (2, 3)
b = np.random.randn(2, 1) # b.shape = (2, 1)
c = a + b

What will be the shape of “c”?

  • c.shape = (3, 2)
  • c.shape = (2, 1)
  • c.shape = (2, 3) (Correct)
  • The computation cannot happen because the sizes don’t match. It’s going to be “Error”!

See 2.15 Broadcasting in Python.
b (a column vector) is broadcast: it is copied 3 times and added to each column of a.
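Checked directly:

import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b         # b is broadcast across the 3 columns of a
print(c.shape)    # (2, 3)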

===============================================================

5. Consider the following two random arrays “a” and “b”:

a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
c = a * b

What will be the shape of “c”?

  • The computation cannot happen because the sizes don’t match. It’s going to be “Error”! (Correct)
  • c.shape = (4, 3)
  • c.shape = (3, 3)
  • c.shape = (4, 2)

Element-wise multiplication requires the two arrays to have the same shape (or shapes compatible under broadcasting). (4, 3) and (3, 2) are neither identical nor broadcastable, so this raises an error.
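The failure can be observed directly:

import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)
try:
    c = a * b                # element-wise product on incompatible shapes
except ValueError as err:
    print(err)               # operands could not be broadcast together ...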

===============================================================

6. Suppose you have $n_x$ input features per example. Recall that $X = [x^{(1)}, x^{(2)}, \dots, x^{(m)}]$. What is the dimension of X?

  • (m, 1)
  • (m, $n_x$)
  • ($n_x$, m) (Correct)
  • (1, m)

Each x is a column vector with $n_x$ entries; stacking the m of them horizontally gives X the shape ($n_x$, m).
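A small sketch of this horizontal stacking (sizes chosen arbitrarily for illustration):

import numpy as np

n_x, m = 3, 5                                    # illustrative sizes
examples = [np.random.randn(n_x, 1) for _ in range(m)]
X = np.hstack(examples)                          # stack column vectors side by side
print(X.shape)                                   # (3, 5), i.e. (n_x, m)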

===============================================================

7. Recall that np.dot(a, b) performs a matrix multiplication on a and b, whereas a * b performs an element-wise multiplication.

Consider the following two random arrays “a” and “b”:

a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a, b)

What is the shape of c?

  • c.shape = (150, 150)
  • The computation cannot happen because the sizes don’t match. It’s going to be “Error”!
  • c.shape = (12288, 150)
  • c.shape = (12288, 45) (Correct)

Plain matrix multiplication: the inner dimensions (150) match, so (12288, 150) times (150, 45) gives (12288, 45).
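Verified directly:

import numpy as np

a = np.random.randn(12288, 150)
b = np.random.randn(150, 45)
c = np.dot(a, b)     # matrix product: inner dimension 150 cancels
print(c.shape)       # (12288, 45)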

===============================================================

8. Consider the following code snippet:

# a.shape = (3,4)
# b.shape = (4,1)

for i in range(3):
  for j in range(4):
    c[i][j] = a[i][j] + b[j]

How do you vectorize this?

  • c = a + b.T (Correct)
  • c = a.T + b
  • c = a + b
  • c = a.T + b.T

c has shape (3, 4). a needs no transpose; b has shape (4, 1), so it must be transposed to (1, 4) and then broadcast down the 3 rows of a, as shown below.
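The vectorized version, checked against the original loop:

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

c = a + b.T                     # b.T has shape (1, 4), broadcast over the 3 rows of a

c_loop = np.zeros((3, 4))       # loop version for comparison
for i in range(3):
    for j in range(4):
        c_loop[i][j] = a[i][j] + b[j]

print(np.allclose(c, c_loop))   # True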

===============================================================

9. Consider the following code:

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b

What will c be?

  • This will invoke broadcasting, so b is copied three times to become (3, 3), and * is an element-wise product so c.shape = (3, 3). (Correct)
  • This will invoke broadcasting, so b is copied three times to become (3,3), and * invokes a matrix multiplication operation of 3x3 matrices so c will be (3, 3).
  • This will multiply a 3x3 matrix a with a 3x1 vector, thus resulting in a 3x1 vector. That is, c.shape = (3, 1).
  • It will lead to an error since you cannot use “*” to operate on these two matrices. You need to instead use np.dot(a,b).

This is element-wise multiplication. Broadcasting copies b's single column three times to shape (3, 3) before multiplying.
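The broadcast is equivalent to tiling b explicitly:

import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b                                        # b broadcasts to (3, 3)
print(np.allclose(c, a * np.tile(b, (1, 3))))    # True: same as copying b 3 times
print(c.shape)                                   # (3, 3)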

===============================================================

10. Consider the following computation graph.

[Computation graph: u = a * b, v = a * c, w = b + c, J = u + v - w]

What is the output J?

  • J = (c - 1) * (b + a)
  • J = (a - 1) * (b + c) (Correct)
  • J = a * b + b * c + a * c
  • J = (b - 1) * (c + a)

The derivation is as follows:

J = u + v - w
  = a * b + a * c - (b + c)
  = a * (b + c) - (b + c)
  = (a - 1) * (b + c)
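
A quick numeric spot check of the simplification, with arbitrary sample values:

a, b, c = 3.0, 2.0, 4.0
u, v, w = a * b, a * c, b + c
J = u + v - w
print(J, (a - 1) * (b + c))    # 12.0 12.0 -- both forms agree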