Study notes for Backpropagation

Backpropagation can be used to train a linear or nonlinear classifier as well as a nonlinear regression model (i.e. for numeric prediction). Multilayer feed-forward networks, given enough hidden units and enough training samples, can closely approximate any function.
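
Below is a minimal sketch of how such a multilayer feed-forward network computes its output; the one-hidden-layer shape, sigmoid activations, and random weights are illustrative assumptions, not something specified in these notes.

```python
import numpy as np

def sigmoid(z):
    # Logistic activation: squashes any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Hidden layer: weighted sum of the inputs followed by a nonlinearity
    h = sigmoid(W1 @ x + b1)
    # Output layer: weighted sum of the hidden units, again squashed to (0, 1)
    return sigmoid(W2 @ h + b2)

rng = np.random.default_rng(0)
x = rng.random(3)                                   # 3 input units
W1, b1 = rng.standard_normal((5, 3)), np.zeros(5)   # 5 hidden units
W2, b2 = rng.standard_normal((1, 5)), np.zeros(1)   # 1 output unit
print(forward(x, W1, b1, W2, b2))
```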

1. Advantages

  1. High tolerance of noisy data and the ability to classify patterns on which the network has not been trained.
  2. They can be used when you have little knowledge of the relationships between attributes and classes.
  3. Well suited for continuous-valued inputs and outputs.
  4. Neural network algorithms are inherently parallel; parallelization techniques can be used to speed up the computation.

2. Disadvantages

  1. Long training time;
  2. They require a number of parameters that are typically best determined empirically, such as the network topology or "structure";
  3. Poor interpretability of the symbolic meaning behind the learned weights and of the "hidden units". Some solutions have been proposed for this issue, including extracting rules from trained networks and sensitivity analysis.

3. Defining a network topology

  1. Normalizing the input values measured in the training tuples to [0, 1] for each attribute will help speed up the learning phase (see the min-max scaling sketch after this list).
  2. For two classes, a single output unit may be used, where the value 1 represents one class and the value 0 represents the other; output values greater than or equal to 0.5 are then taken as the positive class and values below 0.5 as the negative class. If there are more than two classes, one output unit per class is used, and the output node with the highest value determines the predicted class label (see the decoding sketch after this list).
  3. There are no clear rules as to the "best" number of hidden-layer units. Some heuristics suggest, for example, 2.5 times the number of input units, or the square of the number of input units; Andrew Ng suggests that, usually, the more hidden units the better. In practice, it is a trial-and-error process.
  4. The learning rate helps avoid getting stuck at a local minimum in decision space (i.e. where the weights appear to converge but are not the optimum solution) and encourages finding the global minimum. If the learning rate is too small, learning occurs at a very slow pace; if it is too large, oscillation between inadequate solutions may occur. A rule of thumb is to set the learning rate to 1/t, where t is the number of iterations through the training set so far (see the update sketch after this list).
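
A minimal sketch of the [0, 1] normalization from item 1, assuming simple min-max scaling with the minima and maxima taken from the training tuples (the example values are made up):

```python
import numpy as np

def fit_min_max(X_train):
    # Record the per-attribute minimum and maximum observed in the training tuples
    return X_train.min(axis=0), X_train.max(axis=0)

def min_max_scale(X, lo, hi):
    # Map each attribute into [0, 1]; the epsilon guards against constant columns
    return (X - lo) / np.maximum(hi - lo, 1e-12)

X_train = np.array([[2.0, 30.0], [4.0, 50.0], [6.0, 90.0]])
lo, hi = fit_min_max(X_train)
print(min_max_scale(X_train, lo, hi))
```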
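
A sketch of the output decoding described in item 2: a 0.5 threshold on a single output unit for two classes, and the highest-valued unit when there is one output unit per class (the example outputs are made up):

```python
import numpy as np

def decode_binary(output):
    # Single output unit: values >= 0.5 are read as the positive class (encoded 1)
    return 1 if output >= 0.5 else 0

def decode_multiclass(outputs):
    # One output unit per class: the unit with the highest value gives the label
    return int(np.argmax(outputs))

print(decode_binary(0.73))                  # -> 1
print(decode_multiclass([0.1, 0.8, 0.3]))   # -> 1 (the second class)
```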
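
A sketch of the 1/t rule of thumb from item 4, applied to a plain gradient-descent update; the one-weight quadratic loss is only a stand-in so that the shrinking step size is visible:

```python
def learning_rate(t):
    # Rule of thumb: eta = 1 / t, where t counts iterations through the training set
    return 1.0 / t

w = 5.0  # a single weight, minimizing the stand-in loss (w - 2)^2
for t in range(1, 21):
    grad = 2.0 * (w - 2.0)          # gradient of the stand-in loss
    w -= learning_rate(t) * grad    # step shrinks as training progresses
print(w)  # approaches the minimizer 2.0
```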

4. Further Reading

  1. Rachel_zhang's lecture notes
  2. Backpropagation math