# Deep Learning Is Becoming Overused

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"size","attrs":{"size":10}},{"type":"strong"}],"text":"本文最初發佈於Medium網站,經原作者授權由InfoQ中文站翻譯並分享。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在某些情況下,神經網絡之類模型的表現可能會勝過更簡單的模型,但很多情況下事情並不是這樣的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"打個比方:假設你需要購買某種交通工具來跑運輸,如果你經常需要長距離運輸大型物品,那麼,購買卡車是很划算的投資;但如果你只是要去本地超市買點牛奶,那麼買一輛卡車就太浪費了。一輛汽車(如果你關心氣候變化的話,甚至可以買一輛自行車)也足以完成上述任務。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"深度學習的使用場景也開始遇到這種問題了:我們假設它們的性能優於簡單模型,然後把相關數據一股腦兒地塞給它們。此外,我們在應用這些模型時往往並沒有對相關數據有適當的理解;比如說我們沒有意識到,如果對數據有直觀的瞭解,就不必進行深度學習。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/bb\/44\/bbbd0ccfab11e726253f6e52c2519144.jpg","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":10}}],"text":"任何模型被裝在黑匣子裏來分析數據時,總是會存在危險,深度學習家族的模型也不例外。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"時間序列分析"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我最常用的是時間序列分析,因此我們來考慮一個這方面的例子。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"假設一家酒店希望預測其在整個客戶羣中收取的平均每日費用(或每天的平均費用)——ADR。每位客戶的平均每日費用是每週開銷的平均值。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LSTM模型的配置如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"plain"},"content":[{"type":"text","text":"model = tf.keras.Sequential()\nmodel.add(LSTM(4, input_shape=(1, lookback)))\nmodel.add(Dense(1))\nmodel.compile(loss='mean_squared_error', optimizer='adam')\nhistory=model.fit(X_train, Y_train, validation_split=0.2, epochs=100, batch_size=1, 
Below are the predicted versus actual weekly ADR values:

![](https://static001.infoq.cn/resource/image/82/cf/827462e0711f19458e6c3885675805cf.jpg)

*Source: Jupyter Notebook output*

The model obtains an RMSE of 31 against a mean of 160; the root mean square error (RMSE) is roughly 20% of the mean ADR. The error is not especially high, but the point of a neural network is to squeeze out better accuracy than other models, so this result is still somewhat disappointing.

Moreover, this LSTM is a one-step-ahead model: without all of the data up to time t, it cannot produce longer-range forecasts.

So, were we too quick to throw an LSTM at this data?

Let's go back to the start and first analyze the data properly.

Here is the 7-week moving average of the ADR fluctuations:

![](https://static001.infoq.cn/resource/image/bd/b6/bd903d0a7d4b26eee31d52f8c593c9b6.jpg)

*Source: Jupyter Notebook output*
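Such a smoothed series is simple to produce; a minimal sketch with pandas, assuming the weekly ADR values are held in a pandas Series named `adr_weekly` (a hypothetical name, not used in the original):

```python
import pandas as pd
import matplotlib.pyplot as plt

# adr_weekly: weekly ADR values, one entry per week
adr_smoothed = adr_weekly.rolling(window=7).mean()  # 7-week moving average
adr_smoothed.plot(title="ADR, 7-week moving average")
plt.show()
```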
Once the data is smoothed with a 7-week moving average, we can clearly see evidence of a seasonal pattern.

Let's take a closer look at the autocorrelation function of the data.

![](https://static001.geekbang.org/infoq/76/76c9a119f39989188ef003da862c7de2.jpeg)

*Source: Jupyter Notebook output*

We can see that the peak correlation (after a run of negative correlations) occurs at a lag of 52, indicating a yearly seasonal component in the data.

With this information, we can use pmdarima (imported as `pm`) to configure an ARIMA model that forecasts the last 15 weeks of ADR fluctuations, with the p, d and q orders chosen automatically to minimize the Akaike Information Criterion (AIC):

```python
>>> Arima_model = pm.auto_arima(train_df, start_p=0, start_q=0, max_p=10, max_q=10,
...                             start_P=0, start_Q=0, max_P=10, max_Q=10, m=52,
...                             stepwise=True, seasonal=True, information_criterion='aic',
...                             trace=True, d=1, D=1, error_action='warn',
...                             suppress_warnings=True, random_state=20, n_fits=30)
Performing stepwise search to minimize aic
ARIMA(0,1,0)(0,1,0)[52]             : AIC=422.399, Time=0.27 sec
ARIMA(1,1,0)(1,1,0)[52]             : AIC=inf, Time=16.12 sec
ARIMA(0,1,1)(0,1,1)[52]             : AIC=inf, Time=19.08 sec
ARIMA(0,1,0)(1,1,0)[52]             : AIC=inf, Time=14.55 sec
ARIMA(0,1,0)(0,1,1)[52]             : AIC=inf, Time=11.94 sec
ARIMA(0,1,0)(1,1,1)[52]             : AIC=inf, Time=16.47 sec
ARIMA(1,1,0)(0,1,0)[52]             : AIC=414.708, Time=0.56 sec
ARIMA(1,1,0)(0,1,1)[52]             : AIC=inf, Time=15.98 sec
ARIMA(1,1,0)(1,1,1)[52]             : AIC=inf, Time=20.41 sec
ARIMA(2,1,0)(0,1,0)[52]             : AIC=413.878, Time=1.01 sec
ARIMA(2,1,0)(1,1,0)[52]             : AIC=inf, Time=22.19 sec
ARIMA(2,1,0)(0,1,1)[52]             : AIC=inf, Time=25.80 sec
ARIMA(2,1,0)(1,1,1)[52]             : AIC=inf, Time=28.23 sec
ARIMA(3,1,0)(0,1,0)[52]             : AIC=414.514, Time=1.13 sec
ARIMA(2,1,1)(0,1,0)[52]             : AIC=415.165, Time=2.18 sec
ARIMA(1,1,1)(0,1,0)[52]             : AIC=413.365, Time=1.11 sec
ARIMA(1,1,1)(1,1,0)[52]             : AIC=415.351, Time=24.93 sec
ARIMA(1,1,1)(0,1,1)[52]             : AIC=inf, Time=21.92 sec
ARIMA(1,1,1)(1,1,1)[52]             : AIC=inf, Time=30.36 sec
ARIMA(0,1,1)(0,1,0)[52]             : AIC=411.433, Time=0.59 sec
ARIMA(0,1,1)(1,1,0)[52]             : AIC=413.422, Time=11.57 sec
ARIMA(0,1,1)(1,1,1)[52]             : AIC=inf, Time=23.39 sec
ARIMA(0,1,2)(0,1,0)[52]             : AIC=413.343, Time=0.82 sec
ARIMA(1,1,2)(0,1,0)[52]             : AIC=415.196, Time=1.63 sec
ARIMA(0,1,1)(0,1,0)[52] intercept   : AIC=413.377, Time=1.04 sec

Best model:  ARIMA(0,1,1)(0,1,0)[52]
Total fit time: 313.326 seconds
```
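The RMSE reported below comes from forecasting the 15 held-out weeks with the selected model; the evaluation step is not shown in the article, but a minimal sketch, assuming the held-out weekly values sit in a one-dimensional series named `test_df` (an assumed name), might look like this:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Forecast as many steps ahead as there are held-out weeks
forecasts = Arima_model.predict(n_periods=len(test_df))

rmse = np.sqrt(mean_squared_error(test_df, forecasts))
print("RMSE:", rmse)
```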
Based on the output above, ARIMA(0,1,1)(0,1,0)[52] is the best-fitting model by AIC. With this model, an RMSE of 10 is obtained against a mean ADR of 160.

That is far lower than the RMSE achieved by the LSTM (a good thing), at just over 6% of the mean.

After a proper analysis of the data, one recognizes that the yearly seasonal component makes the time series much more predictable, and reaching for a deep learning model to capture it is largely redundant.

## Regression Analysis: Predicting Customer ADR Values

Let's approach the same problem from a different angle.

Instead of forecasting the average weekly ADR, we now try to predict the ADR value of each individual customer.

For this we use two regression-based models:

- A linear SVM (support vector machine)
- A regression-based neural network

Both models use the following features to predict each customer's ADR value:

- IsCanceled: whether the customer cancelled their booking
- country: the customer's country of origin
- marketsegment: the customer's market segment
- deposittype: whether the customer paid a deposit
- customertype: the type of customer
- rcps: required car parking spaces
- arrivaldateweekno: the week number of arrival
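Several of these features are categorical and must be encoded numerically before either model can be trained. That preprocessing step is not shown in the article; a minimal sketch with pandas and scikit-learn, assuming the data sits in a DataFrame named `df` with the columns above plus an `ADR` target column (all names here are assumptions for illustration):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

categorical = ['country', 'marketsegment', 'deposittype', 'customertype']
numeric = ['IsCanceled', 'rcps', 'arrivaldateweekno']

# Integer-encode each categorical column (one simple option among several)
features = df[numeric].copy()
for col in categorical:
    features[col] = df[col].astype('category').cat.codes

X_train, X_test, y_train, y_test = train_test_split(
    features, df['ADR'], test_size=0.2, random_state=0)
```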
We use the mean absolute error (MAE) as the performance metric, comparing the MAE each model achieves against the mean of the actual values.

### Linear SVM

Here a LinearSVR with an epsilon of 0.5 is defined and trained on the training data (the import is added here; it was not shown in the original):

```python
from sklearn.svm import LinearSVR

svm_reg_05 = LinearSVR(epsilon=0.5)
svm_reg_05.fit(X_train, y_train)
```

Predictions are then generated from the test-set feature values:

```python
>>> svm_reg_05.predict(atest)
array([ 81.7431138 , 107.46098525, 107.46098525, ...,  94.50144931,
        94.202052  ,  94.50144931])
```

Here is the mean absolute error relative to the mean:

```python
>>> mean_absolute_error(btest, bpred)
30.332614341027753
>>> np.mean(btest)
105.30446539770578
```

The MAE is 28% of the mean. Let's see whether a regression-based neural network can do better.

### Regression-Based Neural Network

The neural network is defined as follows:

```python
model = Sequential()
model.add(Dense(8, input_dim=8, kernel_initializer='normal', activation='elu'))
model.add(Dense(2670, activation='elu'))
model.add(Dense(1, activation='linear'))
model.summary()
```

The model is trained for 30 epochs with a batch size of 150:

```python
model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae'])
history = model.fit(xtrain_scale, ytrain_scale, epochs=30, batch_size=150,
                    verbose=1, validation_split=0.2)
predictions = model.predict(xval_scale)
```
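Note that the network is trained on scaled targets (`ytrain_scale`), while the MAE reported below is on the original ADR scale, so its predictions have to be mapped back first. That step is not shown in the article; a minimal sketch, assuming the targets were scaled with a scikit-learn `MinMaxScaler` held in a variable named `scaler_y` (an assumed name):

```python
# Map the network's scaled outputs back onto the original ADR scale
bpred = scaler_y.inverse_transform(predictions).flatten()
```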
Feeding the test-set features into the model, here are the MAE and the mean:

```python
>>> mean_absolute_error(btest, bpred)
28.908454264679218
>>> np.mean(btest)
105.30446539770578
```

We can see that the MAE is only slightly lower than the one obtained with the SVM. It is therefore hard to justify a neural network for predicting customer ADR when a linear SVM delivers almost the same accuracy.

In any case, factors such as which features are chosen to "explain" ADR matter far more than the model itself. As the saying goes, "garbage in, garbage out": if the feature selection is poor, the model's output will be poor too.

In this example, although both regression models show some degree of predictive power, it is quite possible that either 1) choosing other features from the dataset would further improve accuracy, or 2) ADR varies too much for the features in the dataset to account for it. For example, the dataset tells us nothing about each customer's income level, which would strongly influence their average daily spend.

## Conclusion

In the two examples above, we have seen that "lighter" models were able to match (or beat) the accuracy achieved by deep learning models.

In some cases the data may be complex enough to genuinely require an algorithm that learns patterns "from scratch", but that tends to be the exception rather than the rule.

For any data science problem, the key is first to understand the data we are working with; the choice of model is often secondary.

The datasets and Jupyter notebooks for the examples above are available here: https://github.com/MGCodesandStats/hotel-modelling

**Original link:**

https://towardsdatascience.com/deep-learning-is-becoming-overused-1e6b08bc709f