I got hold of a supermarket's sales data; after cleaning it up I had about 30 million transaction records covering one year, and I wanted to try making predictions with the recommendation system in Spark (MLlib's ALS).
First load the data into HDFS. Each record needs a user id, a product id, and a purchase count; here I treat the purchase count as the analogue of the rating in a movie recommender.
The fields in HDFS are separated by ":", like this:
461365:22535:1.0
461365:5059:1.0
461365:5420:4.0
461366:1987:4.0
461366:31911:1.0
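Before wiring this into Spark, the per-line parsing can be sketched in plain Scala (no Spark needed); `parseLine` here is a hypothetical helper mirroring the split logic used later:

```scala
object ParseDemo {
  // Parse one "user:item:rate" line into a typed (user, item, rate) triple.
  def parseLine(line: String): (Int, Int, Double) = {
    val Array(user, item, rate) = line.split(':')
    (user.toInt, item.toInt, rate.toDouble)
  }

  def main(args: Array[String]): Unit = {
    val sample = Seq("461365:22535:1.0", "461365:5420:4.0")
    sample.map(parseLine).foreach(println)  // (461365,22535,1.0) then (461365,5420,4.0)
  }
}
```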
Start spark-shell.
Import the needed MLlib classes and configure logging:
import org.apache.spark.mllib.recommendation.{ALS, Rating, MatrixFactorizationModel}
import org.apache.spark.sql.hive.HiveContext
import org.apache.log4j.{Logger,Level}
import org.apache.spark.mllib.evaluation.{RankingMetrics, RegressionMetrics}
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
Load the data and parse it into ratings; the "rating" here is really the purchase count:
val data = sc.textFile("/input/rate")
// assumes every line is a well-formed "user:item:rate" triple
val ratings = data.map(_.split(':') match { case Array(user, item, rate) => Rating(user.toInt, item.toInt, rate.toDouble) })
Check the size of the dataset:
scala> val users = ratings.map(_.user).distinct()
users: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1356] at distinct at <console>:35
scala> val products = ratings.map(_.product).distinct()
products: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1360] at distinct at <console>:35
scala> println("Got "+ratings.count()+" ratings from "+users.count+" users on "+products.count+" products.")
Got 30299054 ratings from 354172 users on 45786 products.
Split the data into training and test sets; I used an 8:2 split:
val splits = ratings.randomSplit(Array(0.8, 0.2))
val training = splits(0)
val test = splits(1)
Train the model with the following parameters:
rank: the number of latent factors in the ALS model, i.e. the shared dimension of the two factor matrices produced by the matrix factorization
numIterations: the maximum number of iterations
0.01: the regularization parameter (lambda), which controls overfitting
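To make the rank parameter concrete: ALS learns one vector of `rank` latent factors per user and per product, and a predicted rating is the dot product of the two. A toy sketch with rank = 3 and invented factor values (the real run below uses rank = 30):

```scala
object FactorDemo {
  // Predicted rating = dot product of the user and item latent-factor vectors.
  def predict(userFactors: Array[Double], itemFactors: Array[Double]): Double =
    userFactors.zip(itemFactors).map { case (u, i) => u * i }.sum

  def main(args: Array[String]): Unit = {
    val user = Array(0.5, 1.0, 0.2)  // invented user factors
    val item = Array(2.0, 1.0, 0.0)  // invented item factors
    println(predict(user, item))     // 0.5*2.0 + 1.0*1.0 + 0.2*0.0 = 2.0
  }
}
```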
val rank = 30
val numIterations = 12
val model = ALS.train(training, rank, numIterations, 0.01)
Then join the predicted ratings with the actual test ratings and compute the RMSE:
val testUsersProducts = test.map { case Rating(user, product, rate) =>
(user, product)
}
val predictions = model.predict(testUsersProducts).map { case Rating(user, product, rate) =>
((user, product), rate)
}
val ratesAndPreds = test.map { case Rating(user, product, rate) =>
((user, product), rate)
}.join(predictions)
val rmse = math.sqrt(ratesAndPreds.map { case ((user, product), (r1, r2)) =>
val err = (r1 - r2)
err * err
}.mean())
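The RMSE above is just sqrt(mean((actual - predicted)^2)). The same computation on plain Scala collections, with invented toy pairs in place of the joined RDD:

```scala
object RmseDemo {
  // pairs of (actual rating, predicted rating)
  def rmse(pairs: Seq[(Double, Double)]): Double =
    math.sqrt(pairs.map { case (r, p) => (r - p) * (r - p) }.sum / pairs.size)

  def main(args: Array[String]): Unit = {
    // toy (actual purchase count, predicted) pairs, invented for illustration
    val ratesAndPreds = Seq((1.0, 1.5), (4.0, 3.0), (2.0, 2.5))
    println(rmse(ratesAndPreds))  // sqrt((0.25 + 1.0 + 0.25) / 3) ≈ 0.707
  }
}
```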