Regression prediction of continuous variables with a neural network (Python)

While writing a paper recently I used a method called optimal combination forecasting based on a neural network. The main idea is as follows: first build a library of forecasting models consisting of a regression model, a grey prediction model, and a BP neural network model; then combine the outputs of these three single models with a BP neural network to form the combined forecast. (I followed this article: 路玉龍, 韓靖, 餘思婧, 張鴻雁. BP神經網絡組合預測在城市生活垃圾產量預測中應用 [Application of BP neural network combination forecasting to predicting municipal solid waste output].)
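The combination step itself is straightforward to express in code: the fitted values produced by the three single models become the inputs of the combining BP network, and the observed series is the label. Below is a minimal sketch of that idea; it assumes the BPNeuralNetwork class implemented later in this post, and all array names and numbers are made up for illustration:

# Hypothetical fitted values from the three single models, plus the observed series.
regression_fit = [1.02, 1.10, 1.19, 1.31]  # regression model
grey_fit = [1.00, 1.13, 1.21, 1.28]        # grey prediction model
bp_fit = [1.04, 1.09, 1.20, 1.30]          # single BP neural network
observed = [1.01, 1.12, 1.20, 1.29]        # actual values (labels)

# One training case per time step: the three single-model predictions.
cases = [list(t) for t in zip(regression_fit, grey_fit, bp_fit)]
labels = [[y] for y in observed]

combiner = BPNeuralNetwork()  # class defined below
combiner.setup(3, 5, 1)       # 3 inputs (one per single model), 5 hidden nodes, 1 output
combiner.train(cases, labels, 10000, 0.05, 0.1)
print(combiner.predict([1.35, 1.33, 1.36]))  # combined forecast for a new period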

My goal

I needed a BP neural network for continuous-valued prediction. There are many Python implementations of BP neural networks online, but most of them are built for classification, so I had to work through the underlying theory and modify the code myself.
Below is the BP-neural-network classification implementation I referenced (my continuous-prediction code was adapted from the link below, with thanks to the author):
https://www.cnblogs.com/Finley/p/5946000.html

Modifications

(1) Do not apply an activation in the last layer; output the weighted sum directly. In other words, treat the output activation as f(x) = x.
(2) Change the loss function to MSE. (The short derivation after this list shows why (1) reduces the output-layer delta to the raw error.)
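For completeness, here is the one-line derivation behind these two changes (standard backprop algebra, written out here for reference). With MSE loss E = (1/2) Σ_o (y_o − ŷ_o)² and an identity output activation ŷ_o = f(z_o) = z_o:

    δ_o = −∂E/∂z_o = (y_o − ŷ_o) · f′(z_o) = y_o − ŷ_o,   since f′(z) = 1.

With a sigmoid output, the extra factor sigmoid_derivative(ŷ_o) would remain; that factor is exactly what the commented-out line in back_propagate below drops.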

Code

The parts I changed are enclosed between pairs of #----- comment lines.

import math
import random

random.seed(0)

def rand(a, b):
    # random float in the interval [a, b)
    return (b - a) * random.random() + a

def make_matrix(m, n, fill=0.0):
    # build an m x n matrix filled with `fill`
    mat = []
    for i in range(m):
        mat.append([fill] * n)
    return mat

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # derivative written in terms of the sigmoid *output*: s'(z) = s(z) * (1 - s(z))
    return x * (1 - x)

class BPNeuralNetwork:
    def __init__(self):
        self.input_n = 0
        self.hidden_n = 0
        self.output_n = 0
        self.input_cells = []
        self.hidden_cells = []
        self.output_cells = []
        self.input_weights = []
        self.output_weights = []
        self.input_correction = []
        self.output_correction = []

    def setup(self, ni, nh, no):
        self.input_n = ni + 1  # +1 for a bias node in the input layer
        self.hidden_n = nh
        self.output_n = no
        # init cells
        self.input_cells = [1.0] * self.input_n
        self.hidden_cells = [1.0] * self.hidden_n
        self.output_cells = [1.0] * self.output_n
        # init weights
        self.input_weights = make_matrix(self.input_n, self.hidden_n)
        self.output_weights = make_matrix(self.hidden_n, self.output_n)
        # randomly initialize weights
        for i in range(self.input_n):
            for h in range(self.hidden_n):
                self.input_weights[i][h] = rand(-0.2, 0.2)
        for h in range(self.hidden_n):
            for o in range(self.output_n):
                self.output_weights[h][o] = rand(-2.0, 2.0)
        # init correction matrix
        self.input_correction = make_matrix(self.input_n, self.hidden_n)
        self.output_correction = make_matrix(self.hidden_n, self.output_n)

    def predict(self, inputs):
        # activate input layer
        for i in range(self.input_n - 1):
            self.input_cells[i] = inputs[i]  # input layer simply passes the values through
        # activate hidden layer
        for j in range(self.hidden_n):
            total = 0.0
            for i in range(self.input_n):
                total += self.input_cells[i] * self.input_weights[i][j]  # hidden layer net input
            self.hidden_cells[j] = sigmoid(total)  # hidden layer output
        # activate output layer
        for k in range(self.output_n):
            total = 0.0
            for j in range(self.hidden_n):
                total += self.hidden_cells[j] * self.output_weights[j][k]
            #-----------------------------------------------
            # self.output_cells[k] = sigmoid(total)
            self.output_cells[k] = total  # output activation is f(x) = x
            #-----------------------------------------------
        return self.output_cells[:]

    def back_propagate(self, case, label, learn, correct):
        # case: input vector; label: target vector;
        # learn: learning rate; correct: momentum (correction) rate
        # feed forward
        self.predict(case)
        # get output layer error
        output_deltas = [0.0] * self.output_n
        for o in range(self.output_n):
            error = label[o] - self.output_cells[o]
            #-----------------------------------------------
            # output_deltas[o] = sigmoid_derivative(self.output_cells[o]) * error
            output_deltas[o] = error  # with f(x) = x, f'(x) = 1, so the delta is just the error
            #-----------------------------------------------
        # get hidden layer error
        hidden_deltas = [0.0] * self.hidden_n
        for h in range(self.hidden_n):
            error = 0.0
            for o in range(self.output_n):
                error += output_deltas[o] * self.output_weights[h][o]
            hidden_deltas[h] = sigmoid_derivative(self.hidden_cells[h]) * error

        # update output weights: gradient step plus `correct` times the
        # previous step's gradient (a simple momentum term)
        for h in range(self.hidden_n):
            for o in range(self.output_n):
                change = output_deltas[o] * self.hidden_cells[h]
                self.output_weights[h][o] += learn * change + correct * self.output_correction[h][o]
                self.output_correction[h][o] = change

        # update input weights
        for i in range(self.input_n):
            for h in range(self.hidden_n):
                change = hidden_deltas[h] * self.input_cells[i]
                self.input_weights[i][h] += learn * change + correct * self.input_correction[i][h]
                self.input_correction[i][h] = change
        # get global error
        error = 0.0
        for o in range(len(label)):
            error += 0.5 * (label[o] - self.output_cells[o]) ** 2
        return error

    def train(self, cases, labels, limit=10000, learn=0.05, correct=0.1):
        # limit: max number of epochs; learn: learning rate; correct: momentum rate
        for j in range(limit):
            error = 0.0
            for i in range(len(cases)):
                label = labels[i]
                case = cases[i]
                error += self.back_propagate(case, label, learn, correct)

    def test(self):
        cases = [
            [1, 2],
            [4, 4.5],
            [1, 0],
            [1, 1],
        ]
        labels = [[3], [9], [1], [2]]
        self.setup(2, 5, 1)
        self.train(cases, labels, 10000, 0.05, 0.1)
        for case in cases:
            print(self.predict(case))

if __name__ == '__main__':
    nn = BPNeuralNetwork()
    nn.test()
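As a quick sanity check, the trained toy network can also be queried on an input it has not seen. The point [2, 3] and the expected value below are my own illustration (the toy labels roughly follow x1 + x2), not part of the original post:

nn = BPNeuralNetwork()
nn.setup(2, 5, 1)
nn.train([[1, 2], [4, 4.5], [1, 0], [1, 1]], [[3], [9], [1], [2]], 10000, 0.05, 0.1)
print(nn.predict([2, 3]))  # expect a value near 5 if the fit generalizes

Because the output layer is linear, the prediction is not squashed into (0, 1); this is exactly why change (1) matters for regression targets such as 9.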