Deep Learning Is Being Overused

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"},{"type":"size","attrs":{"size":10}},{"type":"strong"}],"text":"本文最初发布于Medium网站,经原作者授权由InfoQ中文站翻译并分享。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在某些情况下,神经网络之类模型的表现可能会胜过更简单的模型,但很多情况下事情并不是这样的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"打个比方:假设你需要购买某种交通工具来跑运输,如果你经常需要长距离运输大型物品,那么,购买卡车是很划算的投资;但如果你只是要去本地超市买点牛奶,那么买一辆卡车就太浪费了。一辆汽车(如果你关心气候变化的话,甚至可以买一辆自行车)也足以完成上述任务。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"深度学习的使用场景也开始遇到这种问题了:我们假设它们的性能优于简单模型,然后把相关数据一股脑儿地塞给它们。此外,我们在应用这些模型时往往并没有对相关数据有适当的理解;比如说我们没有意识到,如果对数据有直观的了解,就不必进行深度学习。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/bb\/44\/bbbd0ccfab11e726253f6e52c2519144.jpg","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","marks":[{"type":"size","attrs":{"size":10}}],"text":"任何模型被装在黑匣子里来分析数据时,总是会存在危险,深度学习家族的模型也不例外。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"时间序列分析"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我最常用的是时间序列分析,因此我们来考虑一个这方面的例子。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"假设一家酒店希望预测其在整个客户群中收取的平均每日费用(或每天的平均费用)——ADR。每位客户的平均每日费用是每周开销的平均值。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"LSTM模型的配置如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"plain"},"content":[{"type":"text","text":"model = tf.keras.Sequential()\nmodel.add(LSTM(4, input_shape=(1, lookback)))\nmodel.add(Dense(1))\nmodel.compile(loss='mean_squared_error', optimizer='adam')\nhistory=model.fit(X_train, Y_train, validation_split=0.2, epochs=100, batch_size=1, 
Here are the predicted versus actual weekly ADR values:

![](https://static001.infoq.cn/resource/image/82/cf/827462e0711f19458e6c3885675805cf.jpg)

*Source: Jupyter Notebook output*

The RMSE obtained was 31 against a mean weekly ADR of 160, so the root mean squared error is roughly 20% of the mean. The error is not outrageous, but the whole point of reaching for a neural network is to squeeze out more accuracy than other models, so this result is somewhat disappointing.

Moreover, this LSTM is a one-step-ahead model: without all of the data up to time t, it cannot produce longer-range forecasts.

So, were we too quick to throw an LSTM at the data?

Let's go back to the start and analyze the data thoroughly first.

Here is the 7-week moving average of the ADR fluctuations:

![](https://static001.infoq.cn/resource/image/bd/b6/bd903d0a7d4b26eee31d52f8c593c9b6.jpg)

*Source: Jupyter Notebook output*
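Producing that smoothed series is a one-liner in pandas; a minimal sketch, assuming the weekly ADR values sit in a DataFrame column named `ADR` (the file and column names here are hypothetical):

```python
import pandas as pd

weekly = pd.read_csv('weekly_adr.csv')                       # hypothetical file: one row per week
weekly['adr_ma7'] = weekly['ADR'].rolling(window=7).mean()   # trailing 7-week moving average
print(weekly[['ADR', 'adr_ma7']].tail())
```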
Once the data is smoothed with a 7-week moving average, we can clearly see evidence of a seasonal pattern.

Let's take a closer look at the autocorrelation function of the series.

![](https://static001.geekbang.org/infoq/76/76c9a119f39989188ef003da862c7de2.jpeg)

*Source: Jupyter Notebook output*

The peak correlation (after a run of negative correlations) occurs at lag 52, which points to an annual seasonal pattern in the weekly data.

With this information, we can use pmdarima to configure an ARIMA model to forecast the last 15 weeks of ADR fluctuations, automatically selecting the p, d, q orders that minimize the Akaike Information Criterion (AIC):

```python
>>> Arima_model = pm.auto_arima(train_df, start_p=0, start_q=0, max_p=10, max_q=10,
...                             start_P=0, start_Q=0, max_P=10, max_Q=10, m=52,
...                             stepwise=True, seasonal=True, information_criterion='aic',
...                             trace=True, d=1, D=1, error_action='warn',
...                             suppress_warnings=True, random_state=20, n_fits=30)
Performing stepwise search to minimize aic
ARIMA(0,1,0)(0,1,0)[52] : AIC=422.399, Time=0.27 sec
ARIMA(1,1,0)(1,1,0)[52] : AIC=inf, Time=16.12 sec
ARIMA(0,1,1)(0,1,1)[52] : AIC=inf, Time=19.08 sec
ARIMA(0,1,0)(1,1,0)[52] : AIC=inf, Time=14.55 sec
ARIMA(0,1,0)(0,1,1)[52] : AIC=inf, Time=11.94 sec
ARIMA(0,1,0)(1,1,1)[52] : AIC=inf, Time=16.47 sec
ARIMA(1,1,0)(0,1,0)[52] : AIC=414.708, Time=0.56 sec
ARIMA(1,1,0)(0,1,1)[52] : AIC=inf, Time=15.98 sec
ARIMA(1,1,0)(1,1,1)[52] : AIC=inf, Time=20.41 sec
ARIMA(2,1,0)(0,1,0)[52] : AIC=413.878, Time=1.01 sec
ARIMA(2,1,0)(1,1,0)[52] : AIC=inf, Time=22.19 sec
ARIMA(2,1,0)(0,1,1)[52] : AIC=inf, Time=25.80 sec
ARIMA(2,1,0)(1,1,1)[52] : AIC=inf, Time=28.23 sec
ARIMA(3,1,0)(0,1,0)[52] : AIC=414.514, Time=1.13 sec
ARIMA(2,1,1)(0,1,0)[52] : AIC=415.165, Time=2.18 sec
ARIMA(1,1,1)(0,1,0)[52] : AIC=413.365, Time=1.11 sec
ARIMA(1,1,1)(1,1,0)[52] : AIC=415.351, Time=24.93 sec
ARIMA(1,1,1)(0,1,1)[52] : AIC=inf, Time=21.92 sec
ARIMA(1,1,1)(1,1,1)[52] : AIC=inf, Time=30.36 sec
ARIMA(0,1,1)(0,1,0)[52] : AIC=411.433, Time=0.59 sec
ARIMA(0,1,1)(1,1,0)[52] : AIC=413.422, Time=11.57 sec
ARIMA(0,1,1)(1,1,1)[52] : AIC=inf, Time=23.39 sec
ARIMA(0,1,2)(0,1,0)[52] : AIC=413.343, Time=0.82 sec
ARIMA(1,1,2)(0,1,0)[52] : AIC=415.196, Time=1.63 sec
ARIMA(0,1,1)(0,1,0)[52] intercept : AIC=413.377, Time=1.04 sec

Best model: ARIMA(0,1,1)(0,1,0)[52]
Total fit time: 313.326 seconds
```
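To obtain the 15-week out-of-sample forecast and the RMSE quoted below, the model returned by `auto_arima` can be queried directly; a sketch, assuming `test_df` holds the final 15 weekly ADR values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Forecast the 15 held-out weeks with the model selected by the stepwise search
forecast = Arima_model.predict(n_periods=15)

rmse = np.sqrt(mean_squared_error(test_df, forecast))
print(f"RMSE: {rmse:.1f} against a mean weekly ADR of {np.mean(test_df):.1f}")
```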
According to the output above, ARIMA(0,1,1)(0,1,0)[52] is the best-fitting model by AIC. With this model, the RMSE comes out at 10 against a mean ADR of 160.

That is far lower than the RMSE achieved by the LSTM (which is a good thing), at just over 6% of the mean.

Having analyzed the data properly, one realizes that the annual seasonal pattern in the data makes the time series considerably more predictable, and using a deep learning model to try to capture it is largely redundant.

## Regression analysis: predicting ADR per customer

Let's look at the problem from a different angle.

Instead of forecasting the average weekly ADR, we now try to predict the ADR value for each individual customer.

We use two regression-based models for this:

- a linear SVM (support vector machine);
- a regression-based neural network.

Both models use the following features to predict each customer's ADR:

- IsCanceled: whether the customer cancels the booking
- country: the customer's country of origin
- marketsegment: the customer's market segment
- deposittype: whether the customer has paid a deposit
- customertype: the type of customer
- rcps: required car parking spaces
- arrivaldateweekno: the week number of arrival
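Both models consume the same design matrix built from these columns. As a rough sketch of how the mostly categorical features might be prepared (the file name, the target column name, the integer-code encoding, and the 80/20 split are assumptions rather than the author's exact pipeline):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('hotel_bookings.csv')          # hypothetical file name

features = ['IsCanceled', 'country', 'marketsegment', 'deposittype',
            'customertype', 'rcps', 'arrivaldateweekno']
X = df[features].copy()

# Encode the categorical columns as integer codes
for col in ['country', 'marketsegment', 'deposittype', 'customertype']:
    X[col] = X[col].astype('category').cat.codes

y = df['ADR']                                   # per-customer average daily rate
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```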
We use mean absolute error (MAE) as the performance metric, comparing the MAE each model achieves against the mean of the target.

### Linear SVM

Here a LinearSVR with epsilon set to 0.5 is defined and trained on the training data:

```python
from sklearn.svm import LinearSVR

# Linear support-vector regressor with an epsilon-insensitive margin of 0.5
svm_reg_05 = LinearSVR(epsilon=0.5)
svm_reg_05.fit(X_train, y_train)
```

Predictions are now generated from the feature values in the test set:

```python
>>> svm_reg_05.predict(atest)
array([ 81.7431138 , 107.46098525, 107.46098525, ...,  94.50144931,
        94.202052  ,  94.50144931])
```

Here is the mean absolute error alongside the mean of the test target:

```python
>>> mean_absolute_error(btest, bpred)
30.332614341027753
>>> np.mean(btest)
105.30446539770578
```

The MAE is 28% of the size of the mean. Let's see whether a regression-based neural network can do better.

### Regression-based neural network

The neural network is defined as follows:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Feed-forward regression network: a small input layer, one wide hidden layer, linear output
model = Sequential()
model.add(Dense(8, input_dim=8, kernel_initializer='normal', activation='elu'))
model.add(Dense(2670, activation='elu'))
model.add(Dense(1, activation='linear'))
model.summary()
```

The model is trained for 30 epochs with a batch size of 150:

```python
model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae'])
history = model.fit(xtrain_scale, ytrain_scale, epochs=30, batch_size=150,
                    verbose=1, validation_split=0.2)
predictions = model.predict(xval_scale)
```
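Because the network is trained on scaled inputs and targets (`xtrain_scale`, `ytrain_scale`), its predictions have to be mapped back to the original ADR scale before they can be compared with the test values. A sketch of that step, under the assumption that a `MinMaxScaler` was used for the target (the scaler object and the `ytrain`/`yval` variables are assumptions, not the author's exact code):

```python
from sklearn.preprocessing import MinMaxScaler

# Hypothetical reconstruction: the target was scaled to [0, 1] before training,
# so predictions must be inverse-transformed before computing the MAE below.
scaler_y = MinMaxScaler()
ytrain_scale = scaler_y.fit_transform(ytrain.reshape(-1, 1))   # ytrain: unscaled training target

bpred = scaler_y.inverse_transform(predictions).ravel()        # predictions back on the ADR scale
btest = yval.ravel()                                           # yval: unscaled test target
```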
With the test-set features fed into the model, here are the MAE and the mean:

```python
>>> mean_absolute_error(btest, bpred)
28.908454264679218
>>> np.mean(btest)
105.30446539770578
```

We see that the MAE (28.9) is only marginally lower than the one obtained with the SVM (30.3). With the linear SVM showing almost the same accuracy, it is hard to justify a neural network as the right choice for predicting per-customer ADR.

In any case, factors such as which features are chosen to "explain" ADR matter more than the model itself. As the saying goes, "garbage in, garbage out": if the feature selection is poor, the model's output will be poor as well.

In this example, although both regression models show some degree of predictive power, it is likely that either 1) choosing other features from the dataset could further improve accuracy, or 2) there is too much variability in ADR for the available features to account for. For instance, the dataset tells us nothing about each customer's income level, which would strongly influence how much they spend per day.

## Conclusion

In the two examples above, we have seen that "lighter" models were able to match (or exceed) the accuracy achieved by deep learning models.

There are cases where the data is complex enough to require an algorithm that learns patterns "from scratch", but that tends to be the exception rather than the rule.

For any data science problem, the key is first to understand the data we are working with; the choice of model is often secondary.

The dataset and Jupyter notebooks for the examples above are available at https://github.com/MGCodesandStats/hotel-modelling

**Original article:** https://towardsdatascience.com/deep-learning-is-becoming-overused-1e6b08bc709f