Hidden Markov Model Example Implementation -- (2)

4. Example

4.1. Explanation

The transition and emission probabilities between the hidden states and the observed states are shown below:
[Figure: state transition and emission probability diagram]

Find the most likely sequence of hidden states when the observations are normal, cold, dizzy.

Step 1:
[Figure: day-1 initialization]

P("normal"|state): the probability of the observation given the hidden state, i.e. the emission probability from the hidden state to the observed state.

P_start(Healthy) * P("normal"|Healthy) = 0.6 * 0.5 = 0.3
P_start(Fever) * P("normal"|Fever) = 0.4 * 0.1 = 0.04
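The day-1 arithmetic above can be checked with a few lines of Python (the variable names here are illustrative, mirroring the full code in section 4.2):

```python
# Day 1: for each hidden state, multiply its start probability by the
# emission probability of the first observation, "normal".
start_p = {'Healthy': 0.6, 'Fever': 0.4}
emit_normal = {'Healthy': 0.5, 'Fever': 0.1}

V0 = {state: start_p[state] * emit_normal[state] for state in start_p}
```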

Step 2:
[Figure: day-2 recursion]

P(oldState): the result from the previous step
P_trans(oldState->newState): the transition probability from the old hidden state to the new hidden state
P("cold"|newState): the probability of the observation given the new hidden state, i.e. the emission probability

P(oldState) * P_trans(oldState->newState) * P("cold"|newState)
= P(Healthy) * P_trans(Healthy->Healthy) * P("cold"|Healthy) = 0.3 * 0.7 * 0.4 = 0.084

P(Healthy) * P_trans(Healthy->Fever) * P("cold"|Fever) = 0.3 * 0.3 * 0.3 = 0.027

P(Fever) * P_trans(Fever->Fever) * P("cold"|Fever) = 0.04 * 0.6 * 0.3 = 0.0072

P(Fever) * P_trans(Fever->Healthy) * P("cold"|Healthy) = 0.04 * 0.4 * 0.4 = 0.0064

Step 3:
[Figure: day-2 path selection]

Choose the path with the highest probability: for each edge arriving at the Healthy and Fever hidden states, keep only the most probable one.
P(H->H) > P(F->H), so P(H|Day2) = 0.084
P(H->F) > P(F->F), so P(F|Day2) = 0.027
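Steps 2 and 3 together form one recursion step of the Viterbi algorithm: compute all four products, then keep only the best incoming edge per state. As a sketch (names illustrative):

```python
# Day 2: for each new hidden state, take the maximum over the previous
# states of (previous probability * transition * emission of "cold").
V1 = {'Healthy': 0.3, 'Fever': 0.04}  # day-1 results
trans_p = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
           'Fever':   {'Healthy': 0.4, 'Fever': 0.6}}
emit_cold = {'Healthy': 0.4, 'Fever': 0.3}

V2 = {new: max(V1[old] * trans_p[old][new] * emit_cold[new] for old in V1)
      for new in ('Healthy', 'Fever')}
```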

Step 4:
[Figure: day-3 recursion]

P(oldState) * P_trans(oldState->newState) * P("dizzy"|newState)
= P(Healthy) * P_trans(Healthy->Healthy) * P("dizzy"|Healthy) = 0.084 * 0.7 * 0.1 = 0.00588

P(Healthy) * P_trans(Healthy->Fever) * P("dizzy"|Fever) = 0.084 * 0.3 * 0.6 = 0.01512

P(Fever) * P_trans(Fever->Healthy) * P("dizzy"|Healthy) = 0.027 * 0.4 * 0.1 = 0.00108

P(Fever) * P_trans(Fever->Fever) * P("dizzy"|Fever) = 0.027 * 0.6 * 0.6 = 0.00972

Step 5:

[Figure: day-3 path selection]
For each edge arriving at the Healthy and Fever hidden states, keep only the most probable one.

P(H->H) > P(F->H), so P(H|Day3) = 0.00588
P(H->F) > P(F->F), so P(F|Day3) = 0.01512

The final result is:
Start -> Healthy -> Healthy -> Fever

4.2. Code

# coding: utf-8

states = ('Healthy', 'Fever')

observations = ('normal', 'cold', 'dizzy')

start_probability = {'Healthy': 0.6, 'Fever': 0.4}

transition_probability = {
    'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
    'Fever': {'Healthy': 0.4, 'Fever': 0.6},
}

emission_probability = {
    'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
    'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6},
}

def print_dptable(V):
    print('Node probabilities')
    print("      ",end='')
    for i in range(len(V)):
        print("%7d" % i,end='')
    print()
    for y in V[0].keys():
        print("%.5s: " % y,' ',end='')

        for t in range(len(V)):
            print("%.7s" % ("%f" % V[t][y]),' ',end='')
        print()
    print()

def viterbi(obs, states, start_p, trans_p, emit_p):
    """
    Viterbi algorithm
    :param obs: ('normal', 'cold', 'dizzy')
    :param states: ('Healthy', 'Fever')
    :param start_p: {'Healthy': 0.6, 'Fever': 0.4}
    :param trans_p: {
    'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
    'Fever': {'Healthy': 0.4, 'Fever': 0.6},
    }
    :param emit_p: {
    'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
    'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6},
    }
    :return:
    """
    V = [{}]  # node probabilities
    path = {}  # key: current hidden state, value: best hidden-state sequence leading to it

    # initialization: only the first observation
    for y in states:  # ('Healthy', 'Fever')
        V[0][y] = start_p[y] * emit_p[y][obs[0]]
        path[y] = [y]

    # recursion: the remaining observations
    for t in range(1, len(obs)):  # one step per observation
        V.append({})
        newpath = {}
        print('path', path)
        for y in states:  # compute the probability of each hidden state
            # previous node probability * transition probability from the old
            # hidden state to the new one * emission probability; evaluate
            # every previous hidden state and keep the maximum, together with
            # the previous hidden state that achieved it
            (prob, state) = max([(V[t - 1][y0] * trans_p[y0][y] * emit_p[y][obs[t]], y0) for y0 in states])
            # print(prob, state)
            V[t][y] = prob  # node probability for this observation and hidden state
            newpath[y] = path[state] + [y]
            print('newpath', newpath)
            print('newpath', newpath)

        # Don't need to remember the old paths
        path = newpath

    print_dptable(V)
    # pick the hidden state with the largest node probability at the final step
    (prob, state) = max([(V[len(obs) - 1][y], y) for y in states])
    return (prob, path[state])

def example():
    return viterbi(observations,
                   states,
                   start_probability,
                   transition_probability,
                   emission_probability)
if __name__ == '__main__':
    print(example())

path {'Healthy': ['Healthy'], 'Fever': ['Fever']}
newpath {'Healthy': ['Healthy', 'Healthy']}
newpath {'Healthy': ['Healthy', 'Healthy'], 'Fever': ['Healthy', 'Fever']}
path {'Healthy': ['Healthy', 'Healthy'], 'Fever': ['Healthy', 'Fever']}
newpath {'Healthy': ['Healthy', 'Healthy', 'Healthy']}
newpath {'Healthy': ['Healthy', 'Healthy', 'Healthy'], 'Fever': ['Healthy', 'Healthy', 'Fever']}
Node probabilities
              0        1        2
Healt:  0.30000  0.08400  0.00588
Fever:  0.04000  0.02700  0.01512

(0.01512, ['Healthy', 'Healthy', 'Fever'])
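For longer observation sequences the products of probabilities underflow toward 0. A common remedy, sketched below with the same model (viterbi_log is an illustrative name, not part of the original code), is to run the recurrence in log space, turning products into sums:

```python
import math

states = ('Healthy', 'Fever')
observations = ('normal', 'cold', 'dizzy')
start_probability = {'Healthy': 0.6, 'Fever': 0.4}
transition_probability = {
    'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
    'Fever': {'Healthy': 0.4, 'Fever': 0.6},
}
emission_probability = {
    'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
    'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6},
}

def viterbi_log(obs, states, start_p, trans_p, emit_p):
    # Same recurrence as viterbi(), but summing log-probabilities
    # instead of multiplying probabilities, so long observation
    # sequences do not underflow to 0.
    V = [{y: math.log(start_p[y]) + math.log(emit_p[y][obs[0]]) for y in states}]
    path = {y: [y] for y in states}
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}
        for y in states:
            logp, prev = max(
                (V[t - 1][y0] + math.log(trans_p[y0][y]) + math.log(emit_p[y][obs[t]]), y0)
                for y0 in states)
            V[t][y] = logp
            newpath[y] = path[prev] + [y]
        path = newpath
    logp, last = max((V[-1][y], y) for y in states)
    return logp, path[last]

logp, best_path = viterbi_log(observations, states, start_probability,
                              transition_probability, emission_probability)
```

Exponentiating the returned log-probability recovers the same 0.01512 as above.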

5. Chinese Word Segmentation

5.1. Explanation

The following explains the HMM in the context of Chinese word segmentation. The classic description of an HMM is that the model is a five-tuple:

  • StatusSet: the set of state values (hidden states)
  • ObservedSet: the set of observed values (the output character set)
  • TransProbMatrix: the transition probability matrix (between hidden states)
  • EmitProbMatrix: the emission probability matrix (the probability that a hidden state appears as an observed state)
  • InitStatus: the initial state probabilities (of the hidden states)

With these parameters, an HMM solves three kinds of problems:
(1) Given (StatusSet, TransProbMatrix, EmitProbMatrix, InitStatus), compute the probability of an observation sequence. (Forward-Backward algorithm)
(2) Given (ObservedSet, TransProbMatrix, EmitProbMatrix, InitStatus), find the most likely state sequence. (Viterbi algorithm)
(3) Given only (ObservedSet), estimate (TransProbMatrix, EmitProbMatrix, InitStatus). (Baum-Welch algorithm)

The second problem, solving for the state sequence with the Viterbi algorithm, is the most common: it appears in speech recognition, Chinese word segmentation, new-word discovery, and part-of-speech tagging.
To give the five-tuple concrete meaning in Chinese word segmentation:
StatusSet & ObservedSet
The set of state values is (B, M, E, S): {B: begin, M: middle, E: end, S: single}
Each state describes the position of a character within a word: B means the character begins a word, M means it is in the middle of a word, E means it ends a word, and S means it forms a word by itself.

The set of observed values is simply all Chinese characters (e.g. "小明碩士畢業於中國科學院計算所"), possibly including punctuation.
The state values are what we want to solve for: in HMM-based Chinese word segmentation, the input is a sentence (the observation sequence) and the output is the state value of every character in it.
For example:

小明碩士畢業於中國科學院計算所
The output state sequence is:
BEBEBMEBEBMEBES
From this state sequence we can segment the sentence:
BE/BE/BME/BE/BME/BE/S
So the segmentation result is:
小明/碩士/畢業於/中國/科學院/計算/所
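The mapping from a BMES tag string back to a segmented word list can be sketched as follows (tags_to_words is a hypothetical helper, not part of the code later in this post):

```python
def tags_to_words(sentence, tags):
    # Accumulate characters and close a word whenever the tag is
    # E (end of word) or S (single-character word).
    words, buf = [], ''
    for ch, tag in zip(sentence, tags):
        buf += ch
        if tag in ('E', 'S'):
            words.append(buf)
            buf = ''
    if buf:  # tolerate a sequence that ends mid-word
        words.append(buf)
    return words
```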

Example: 小明碩士畢業於中國科學院計算所
[Figure: BMES state lattice over the example sentence]

Note also that:
B can only be followed by (M or E), never by (B or S); likewise, M can only be followed by (M or E), never by (B or S).
Now that the input and output are clear, the rest of this section explains what happens between them.
So far we have covered only two of the five elements [StatusSet, ObservedSet]; the remaining three [InitStatus, TransProbMatrix, EmitProbMatrix] are introduced below.
The five elements are tied together by the Viterbi algorithm: the ObservedSet sequence is Viterbi's input, the StatusSet sequence is its output, and in between the Viterbi algorithm needs the three model parameters InitStatus, TransProbMatrix, and EmitProbMatrix, explained one by one next.

5.1.1. InitStatus

The initial state distribution is the easiest to understand; an example:
#B
-0.26268660809250016
#E
-3.14e+100
#M
-3.14e+100
#S
-1.4652633398537678

Note: the example values are the logarithms of the probabilities (turning products of probabilities into sums of logs); -3.14e+100 stands in for negative infinity, i.e. a probability of 0. The same convention is used below.
These are the probabilities that the first character of a sentence is in each of the four states {B, E, M, S}. As shown above, E and M have probability 0, which matches reality: the first character of a sentence can only begin a word (B) or be a single-character word (S).
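The log-space convention can be sketched in a few lines (NEG_INF and to_log are illustrative names; the start probabilities are the trained values from section 5.2):

```python
import math

NEG_INF = -3.14e+100  # stands in for log(0)

def to_log(p):
    """Map a probability to log space, using NEG_INF for zero."""
    return math.log(p) if p > 0 else NEG_INF

# Initial-state probabilities: a sentence can only start with B or S.
init_status = {'B': 0.5820149148537713, 'M': 0.0, 'E': 0.0, 'S': 0.41798844132394497}
init_log = {state: to_log(p) for state, p in init_status.items()}
```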

5.1.2. TransProbMatrix

The transition probability is a key concept of Markov chains. A Markov chain's defining property is that the state Status(i) at time T=i depends only on the n states before it:
{Status(i-1), Status(i-2), Status(i-3), ... Status(i-n)}
More specifically, the HMM rests on three basic assumptions, one of which is the "limited horizon" assumption: the Markov chain has n=1, so Status(i) depends only on Status(i-1). This assumption greatly simplifies the problem.
TransProbMatrix is therefore just a 4x4 matrix (4 being the size of the state set); an example follows.
Both the rows and the columns are ordered B, E, M, S. (Remember, the values are log probabilities.)

[Figure: 4x4 transition log-probability matrix]

For example, TransProbMatrix[0][0] is the probability of transitioning from state B to state B; TransProbMatrix[0][0] = -3.14e+100 tells us this probability is 0, which makes sense. From the meaning of the states, the state after B can only be M or E, never B or S, so every impossible transition has probability 0, i.e. a log value of negative infinity, written here as -3.14e+100.
From the TransProbMatrix above, the possible next states and their transition probabilities are:

#B
#E:-0.510825623765990,M:-0.916290731874155
#E
#B:-0.5897149736854513,S:-0.8085250474669937
#M
#E:-0.33344856811948514,M:-1.2603623820268226
#S
#B:-0.7211965654669841,S:-0.6658631448798212

5.1.3. EmitProbMatrix

The emission probability (EmitProb) is just a conditional probability. By the HMM's "observation independence" assumption, an observed value depends only on the current state value:
P(Observed[i], Status[j]) = P(Status[j]) * P(Observed[i]|Status[j])
The value P(Observed[i]|Status[j]) is what EmitProbMatrix stores.
An example EmitProbMatrix:

#B
耀:-10.460283,涉:-8.766406,談:-8.039065,伊:-7.682602,洞:-8.668696,…
#E
耀:-9.266706,涉:-9.096474,談:-8.435707,伊:-10.223786,洞:-8.366213,…
#M
耀:-8.47651,涉:-10.560093,談:-8.345223,伊:-8.021847,洞:-9.547990,…
#S
蘄:-10.005820,涉:-10.523076,唎:-15.269250,禑:-17.215160,洞:-8.369527…

5.2. Training the HMM

From a pre-segmented corpus, compute the emission matrix, the initial-state (start) vector, and the transition matrix:
emit
{'B': {'中': 0.009226290787680802, '兒': 0.00033344568220249873, '踏': 4.393128858391452e-05, ...

trans
{'B': {'B': 0.0, 'M': 0.1167175117318146, 'E': 0.8832824882681853, 'S': 0.0}, 'M': {'B': 0.0, 'M': 0.2777743117140081, 'E': 0.7222256882859919, 'S': 0.0}, 'E': {'B': 0.46893265693552616, 'M': 0.0, 'E': 0.0, 'S': 0.5310673430644739}, 'S': {'B': 0.3503213832274479, 'M': 0.0, 'E': 0.0, 'S': 0.46460125869921165}}

start (Pi_dic)
{'B': 0.5820149148537713, 'M': 0.0, 'E': 0.0, 'S': 0.41798844132394497}

Count_dic (number of occurrences of each hidden state)
{'B': 1388532, 'M': 224398, 'E': 1388532, 'S': 1609916}

# -*- coding: utf-8 -*-
import sys
import codecs
import numpy as np
state_M = 4
word_N = 0

trans_dic = {}
emit_dic = {}
Count_dic = {}  # number of occurrences of each hidden state
Pi_dic = {}  # counts of each state at the start of a sentence
word_set = set()
state_list = ['B','M','E','S']
line_num = -1

INPUT_DATA = "RenMinData.txt_utf8"
PROB_START = "prob_start.npy"  # initial-state probabilities, e.g. {'B': 0.58, 'M': 0.0, 'E': 0.0, 'S': 0.42}
PROB_EMIT = "prob_emit.npy"  # emission probabilities, e.g. {'B': {'我': 0.34, ...}, 'S': {...}}
PROB_TRANS = "prob_trans.npy"  # transition probabilities, BMES x BMES

def init():
    global state_M
    global word_N
    for state in state_list:
        trans_dic[state] = {}
        for state1 in state_list:
            trans_dic[state][state1] = 0.0
    for state in state_list:
        Pi_dic[state] = 0.0
        emit_dic[state] = {}
        Count_dic[state] = 0

def getList(input_str):
    """
    Label one word with BMES tags.
    :param input_str: a single word, e.g. 過年
    :return: e.g. ['S'], ['B', 'E'], ['B', 'M', 'M', 'E']
    """
    output_str = []
    if len(input_str) == 1:
        output_str.append('S')
    elif len(input_str) == 2:
        output_str = ['B', 'E']
    else:
        M_num = len(input_str) - 2
        M_list = ['M'] * M_num
        output_str.append('B')
        output_str.extend(M_list)
        output_str.append('E')  # the last character of a word is E, not S
    return output_str
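The intended BMES labeling rule can also be written compactly (word_to_tags is an illustrative name, not part of the training script): a single character gets S; otherwise the word gets B, then M for every interior character, then a final E.

```python
def word_to_tags(word):
    # A single character is S; otherwise B, then M for every interior
    # character, then E for the final character.
    if len(word) == 1:
        return ['S']
    return ['B'] + ['M'] * (len(word) - 2) + ['E']
```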

def Output():
    print("len(word_set) = %s " % (len(word_set)))

    # Normalize the raw counts into probabilities and save them.
    for key in Pi_dic.keys():
        Pi_dic[key] = Pi_dic[key] * 1.0 / line_num
    np.save(PROB_START, Pi_dic)

    for key in trans_dic:
        for key1 in trans_dic[key]:
            trans_dic[key][key1] = trans_dic[key][key1] / Count_dic[key]
    np.save(PROB_TRANS, trans_dic)

    for key in emit_dic:
        for word in emit_dic[key]:
            emit_dic[key][word] = emit_dic[key][word] / Count_dic[key]
    np.save(PROB_EMIT, emit_dic)


def main():
    ifp = codecs.open(INPUT_DATA, 'r', 'utf-8')
    init()
    global word_set
    global line_num
    for line in ifp.readlines():
        line_num += 1
        if line_num % 10000 == 0:
            print (line_num)

        line = line.strip()
        if not line: continue
        # Each corpus line is pre-segmented, e.g.:
        # 十億 中華 兒女 踏上 新 的 徵 程 。
        # 過去 的 一 年 ,

        word_list = []  # individual characters, e.g. [過, 去, 的, 一, 年]
        for i in range(len(line)):
            if line[i] == " ":
                continue
            word_list.append(line[i])
        word_set = word_set | set(word_list)

        lineArr = line.split(" ")  # words, e.g. [過去, 的, 一, 年]
        line_state = []  # BMES tags for the whole line
        for item in lineArr:
            line_state.extend(getList(item))
        if len(word_list) != len(line_state):
            print("[line_num = %d][line = %s]" % (line_num, line), file=sys.stderr)
        else:
            for i in range(len(line_state)):
                if i == 0:
                    Pi_dic[line_state[0]] += 1
                else:
                    trans_dic[line_state[i - 1]][line_state[i]] += 1
                Count_dic[line_state[i]] += 1
                # Count this character's emission under its state; initialize
                # to 0 the first time the character is seen, then increment,
                # so every occurrence is counted.
                if word_list[i] not in emit_dic[line_state[i]]:
                    emit_dic[line_state[i]][word_list[i]] = 0.0
                emit_dic[line_state[i]][word_list[i]] += 1
    Output()
    ifp.close()
if __name__ == "__main__":
    # main()  # uncomment to (re)train and regenerate the .npy files
    a = np.load(PROB_EMIT, allow_pickle=True).item()
    print(a)
    print(type(a))
    b = np.load(PROB_START, allow_pickle=True).item()
    print(b)
    print(b['B'])
"""
{'B': {'B': 0.0, 'M': 0.1167175117318146, 'E': 0.8832824882681853, 'S': 0.0}, 
'M': {'B': 0.0, 'M': 0.2777743117140081, 'E': 0.0, 'S': 0.7222256882859919}, 
'E': {'B': 0.46951648068515556, 'M': 0.0, 'E': 0.0, 'S': 0.5304835193148444}, 
'S': {'B': 0.3607655156767958, 'M': 0.0, 'E': 0.0, 'S': 0.47108435638736734}}

"""

5.3. Segmentation

# -*- coding: utf-8 -*-

import numpy as np

prob_start = np.load("prob_start.npy", allow_pickle=True).item()
prob_trans = np.load("prob_trans.npy", allow_pickle=True).item()
prob_emit = np.load("prob_emit.npy", allow_pickle=True).item()


def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{}]  # tabular
    path = {}
    for y in states:  # init
        V[0][y] = start_p[y] * emit_p[y].get(obs[0], 0)
        path[y] = [y]
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}
        for y in states:
            (prob, state) = max([(V[t - 1][y0] * trans_p[y0].get(y, 0) * emit_p[y].get(obs[t], 0), y0)
                                 for y0 in states if V[t - 1][y0] > 0])
            V[t][y] = prob
            newpath[y] = path[state] + [y]
        path = newpath
    (prob, state) = max([(V[len(obs) - 1][y], y) for y in states])
    return (prob, path[state])

def cut(sentence):
    prob, pos_list = viterbi(sentence, ('B', 'M', 'E', 'S'), prob_start, prob_trans, prob_emit)
    return (prob, pos_list)

def pos2word(test_str, pos_list):
    # Join characters back into words: append a '/' after every
    # word-final tag (E or S).
    res = ''
    for i in range(len(pos_list)):
        res = res + test_str[i]
        if pos_list[i] in ('E', 'S'):
            res = res + '/'
    print(res.strip('/'))

if __name__ == "__main__":
    test_str = u"計算機和電腦"
    prob,pos_list = cut(test_str)
    print (test_str)
    print (pos_list)
    pos2word(test_str, pos_list)
    test_str = u"他說的確實在理."
    prob,pos_list = cut(test_str)
    print (test_str)
    print (pos_list)
    pos2word(test_str, pos_list)

    test_str = u"我有一臺電腦。"
    prob,pos_list = cut(test_str)
    print (test_str)
    print (pos_list)
    pos2word(test_str, pos_list)

計算機和電腦
['B', 'M', 'E', 'S', 'B', 'E']
計算機/和/電腦
他說的確實在理.
['S', 'S', 'S', 'B', 'E', 'S', 'S', 'S']
他/說/的/確實/在/理/.

我有一臺電腦。
['B', 'E', 'B', 'E', 'B', 'E', 'S']
我有/一臺/電腦/。
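One caveat about the viterbi() above: `emit_p[y].get(obs[t], 0)` assigns probability 0 to any character unseen in training, and if that happens for every state, the `max(... if V[t-1][y0] > 0)` on the next step is taken over an empty list and raises ValueError. A minimal workaround is to fall back to a small floor instead of 0 (EPS and safe_emit are hypothetical, not part of the original code):

```python
EPS = 1e-12  # arbitrary smoothing floor for characters unseen in training

def safe_emit(emit_p, state, ch):
    # Fall back to a tiny non-zero probability instead of 0 so the
    # Viterbi recursion never collapses to an empty candidate set.
    return emit_p[state].get(ch, EPS)
```

Replacing the `.get(..., 0)` lookups with this helper keeps the recursion alive on out-of-vocabulary characters, at the cost of a slightly distorted probability.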

6. Code and Data

Link: https://pan.baidu.com/s/1QbeG8g_9JhVwJRhERViTMg
Extraction code: 80su
GitHub:
https://github.com/shelleyHLX/machine-learning
