http://www.ibm.com/developerworks/cn/java/j-lo-mahout/
Taste:
http://taste.sourceforge.net
Mahout currently has
- Collaborative Filtering
- User and Item based recommenders
- K-Means, Fuzzy K-Means clustering
- Mean Shift clustering
- Dirichlet process clustering
- Latent Dirichlet Allocation
- Singular value decomposition
- Parallel Frequent Pattern mining
- Complementary Naive Bayes classifier
- Random forest decision tree based classifier
- High performance java collections (previously colt collections)
- A vibrant community
- and more cool stuff to come this summer, thanks to Google Summer of Code
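The collaborative-filtering bullets above boil down to predicting a user's rating for an item from the ratings of similar users. A minimal sketch in plain Java (no Mahout dependency; the class name, the toy data, and the similarity-weighted averaging here are illustrative, not Mahout's API):

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal user-based collaborative filtering sketch (illustrative, not Mahout's API). */
public class UserBasedCF {

    /** Cosine similarity between two sparse rating vectors (itemId -> rating). */
    static double cosine(Map<Integer, Double> a, Map<Integer, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<Integer, Double> e : a.entrySet()) {
            Double bv = b.get(e.getKey());
            if (bv != null) dot += e.getValue() * bv;   // only co-rated items contribute
            na += e.getValue() * e.getValue();
        }
        for (double v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Predict user's rating of item as a similarity-weighted average over other users. */
    static double predict(Map<Integer, Map<Integer, Double>> ratings, int user, int item) {
        double num = 0, den = 0;
        Map<Integer, Double> mine = ratings.get(user);
        for (Map.Entry<Integer, Map<Integer, Double>> other : ratings.entrySet()) {
            if (other.getKey() == user) continue;
            Double r = other.getValue().get(item);
            if (r == null) continue;                    // this user never rated the item
            double sim = cosine(mine, other.getValue());
            num += sim * r;
            den += Math.abs(sim);
        }
        return den == 0 ? Double.NaN : num / den;
    }

    public static void main(String[] args) {
        // userId -> (movieId -> rating); tiny hand-made example data
        Map<Integer, Map<Integer, Double>> ratings = new HashMap<>();
        ratings.put(1, new HashMap<>(Map.of(10, 5.0, 20, 3.0)));
        ratings.put(2, new HashMap<>(Map.of(10, 4.0, 20, 3.0, 30, 4.0)));
        ratings.put(3, new HashMap<>(Map.of(10, 1.0, 30, 2.0)));
        System.out.printf("predicted rating of movie 30 for user 1: %.2f%n",
                predict(ratings, 1, 30));
    }
}
```

Mahout's Taste library wraps the same idea behind DataModel, UserSimilarity, and Recommender interfaces, and scales it far beyond an in-memory map.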
Installing Mahout (CentOS)
cd /usr/local
sudo mkdir mahout
sudo svn co http://svn.apache.org/repos/asf/mahout/trunk mahout
Install Maven 3
cd /tmp
sudo wget http://apache.etoak.com//maven/binaries/apache-maven-3.0.2-bin.tar.gz
tar vxzf apache-maven-3.0.2-bin.tar.gz
sudo mv apache-maven-3.0.2 /usr/local/maven
vi ~/.bashrc
Add the following two lines:
export M3_HOME=/usr/local/maven
export PATH=${M3_HOME}/bin:${PATH}
Run . ~/.bashrc to apply the settings (or log out and log back in).
Check the Maven version to verify the installation:
mvn -version
Install Mahout
cd /usr/local/mahout
sudo mvn install
If you see a "JAVA_HOME is not set" error and you are running with sudo, check root's Java settings:
vi /etc/profile
export JAVA_HOME=/usr/local/jdk1.6/
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
Run . /etc/profile, then run mvn clean install -DskipTests=true (skip the tests for a faster build).
Prepare the data
cd /tmp
wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
hadoop fs -mkdir testdata
hadoop fs -put synthetic_control.data testdata
hadoop fs -lsr testdata
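Each line of synthetic_control.data is one control-chart time series of 60 whitespace-separated values; the Mahout jobs first convert these lines into vectors before clustering. A rough parsing sketch in plain Java (the class and method names are mine, not Mahout's):

```java
import java.util.Arrays;

/** Sketch of parsing one line of synthetic_control.data into a numeric vector.
 *  Each line in the UCI file holds one time series of 60 whitespace-separated
 *  values; the Mahout jobs turn these into vector input before clustering. */
public class ControlChartParser {

    /** Split a whitespace-delimited line into a double[] vector. */
    static double[] parseLine(String line) {
        String[] tokens = line.trim().split("\\s+");
        double[] v = new double[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            v[i] = Double.parseDouble(tokens[i]);
        }
        return v;
    }

    public static void main(String[] args) {
        // first few values of a typical line from the dataset
        double[] v = parseLine(" 28.7812  34.4632  31.3381 ");
        System.out.println(Arrays.toString(v));
    }
}
```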
If you get an error that the HADOOP_HOME environment variable is not set:
sudo vi /etc/profile, and add:
export HADOOP_HOME=/usr/lib/hadoop-0.20/
Run the clustering algorithms on the Hadoop cluster
cd /usr/local/mahout
bin/mahout org.apache.mahout.clustering.syntheticcontrol.canopy.Job
bin/mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
bin/mahout org.apache.mahout.clustering.syntheticcontrol.fuzzykmeans.Job
bin/mahout org.apache.mahout.clustering.syntheticcontrol.dirichlet.Job
bin/mahout org.apache.mahout.clustering.syntheticcontrol.meanshift.Job
If the jobs succeed, the output should appear under /user/dev/output on HDFS.
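The kmeans job above runs Lloyd's k-means iteration over Hadoop. The core algorithm can be sketched on one-dimensional points in plain Java (illustrative only; Mahout's implementation is distributed and works on arbitrary vectors):

```java
import java.util.Arrays;

/** Minimal Lloyd's k-means on 1-D points -- the algorithm behind the kmeans job above. */
public class KMeansSketch {

    /** Run k-means with the given initial centroids; returns the final centroids. */
    static double[] kmeans(double[] points, double[] centroids, int iterations) {
        double[] c = centroids.clone();
        for (int it = 0; it < iterations; it++) {
            double[] sum = new double[c.length];
            int[] count = new int[c.length];
            for (double p : points) {                  // assignment step: nearest centroid
                int best = 0;
                for (int j = 1; j < c.length; j++) {
                    if (Math.abs(p - c[j]) < Math.abs(p - c[best])) best = j;
                }
                sum[best] += p;
                count[best]++;
            }
            for (int j = 0; j < c.length; j++) {       // update step: mean of assigned points
                if (count[j] > 0) c[j] = sum[j] / count[j];
            }
        }
        return c;
    }

    public static void main(String[] args) {
        double[] points = {1, 2, 3, 10, 11, 12};       // two obvious clusters
        double[] result = kmeans(points, new double[]{0, 5}, 10);
        System.out.println(Arrays.toString(result));   // the two cluster means
    }
}
```

In the Hadoop version, the assignment step is the map phase and the centroid update is the reduce phase, repeated once per iteration.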
GroupLens Data Sets
http://www.grouplens.org/node/12, which includes the MovieLens, Wikilens, Book-Crossing, Jester Joke, and EachMovie data sets
Download the 1M ratings dataset:
mkdir 1m_rating
cd 1m_rating
wget http://www.grouplens.org/system/files/million-ml-data.tar__0.gz
tar vxzf million-ml-data.tar__0.gz
rm million-ml-data.tar__0.gz
Copy the data into the grouplens example source directory; first we will test Mahout's power locally.
cp *.dat /usr/local/mahout/examples/src/main/java/org/apache/mahout/cf/taste/example/grouplens
cd /usr/local/mahout/examples/
Run:
mvn -q exec:java -Dexec.mainClass="org.apache.mahout.cf.taste.example.grouplens.GroupLensRecommenderEvaluatorRunner"
If you would rather not copy the files as above, just point the runner at the input file location instead:
mvn -q exec:java -Dexec.mainClass="org.apache.mahout.cf.taste.example.grouplens.GroupLensRecommenderEvaluatorRunner" -Dexec.args="-i input_file"
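The evaluator runner scores the recommender by holding out part of the ratings and comparing predicted ratings with the actual ones; in the Taste examples this is an average-absolute-difference style score (lower is better). A minimal sketch of that metric, assuming it is plain mean absolute error (illustrative Java, not Taste's evaluator class):

```java
/** Average absolute difference (mean absolute error) between predicted and
 *  actual ratings -- the style of score the evaluator runner prints. */
public class MaeSketch {

    static double meanAbsoluteError(double[] predicted, double[] actual) {
        if (predicted.length != actual.length || predicted.length == 0) {
            throw new IllegalArgumentException("need equal-length, non-empty arrays");
        }
        double sum = 0;
        for (int i = 0; i < predicted.length; i++) {
            sum += Math.abs(predicted[i] - actual[i]);   // per-rating error
        }
        return sum / predicted.length;                   // average over held-out ratings
    }

    public static void main(String[] args) {
        double[] predicted = {4.2, 3.0, 5.0};            // hypothetical recommender output
        double[] actual    = {4.0, 3.5, 4.5};            // held-out true ratings
        System.out.println("MAE = " + meanAbsoluteError(predicted, actual));
    }
}
```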
Upload to HDFS:
hadoop fs -copyFromLocal 1m_rating/ mahout_input/1mrating
Additional notes
Mahout SVN repository: https://svn.apache.org/repos/asf/mahout/trunk
https://cwiki.apache.org/MAHOUT/creating-vectors-from-text.html
Convert Lucene index data into text vectors: specify the index directory ~/index, the field name Name, the dictionary output file ~/dict.txt, and the final output file output.txt, limiting the output to at most 50 vectors:
$/usr/local/mahout/bin/mahout lucene.vector --dir ~/index --field Name --dictOut ~/dict.txt --output output.txt --max 50 --norm 2
Take a look at the contents of dict.txt:
$ head -n 7 dict.txt
10225
#term doc freq idx
Michale 67 0
medcl 1 1
jack 3 2
lopoo 2 3
003 2 4
As the output above shows, dict.txt holds the index information for the Name field we specified.
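Each data row of dict.txt follows the term / document-frequency / index layout shown in the sample above. A small parser sketch (plain Java; the DictEntry class is mine, not a Mahout type):

```java
/** Sketch of parsing a dict.txt entry line ("term docFreq idx") as shown above. */
public class DictEntry {
    final String term;
    final int docFreq;   // number of documents the term appears in
    final int index;     // the term's position in the vector dictionary

    DictEntry(String term, int docFreq, int index) {
        this.term = term;
        this.docFreq = docFreq;
        this.index = index;
    }

    /** Parse one non-header line of dict.txt, e.g. "medcl 1 1". */
    static DictEntry parse(String line) {
        String[] f = line.trim().split("\\s+");
        return new DictEntry(f[0], Integer.parseInt(f[1]), Integer.parseInt(f[2]));
    }

    public static void main(String[] args) {
        DictEntry e = parse("Michale 67 0");
        System.out.println(e.term + " appears in " + e.docFreq + " docs at index " + e.index);
    }
}
```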
Use taste-web to quickly build a movie recommendation system on the GroupLens dataset
$ cd /usr/local/mahout
Copy the grouplens example jar into taste-web's lib directory (if the jar has not been built yet, run mvn install in the examples directory first):
$ cp examples/target/grouplens.jar taste-web/lib/
$ cd taste-web/
taste-web]$ vi recommender.properties
Uncomment this line to configure the grouplens recommender:
recommender.class=org.apache.mahout.cf.taste.example.grouplens.GroupLensRecommender
Start Jetty. If everything works, visit port 8080 and you should see the web service at http://platformb:8080/RecommenderService.jws
mvn jetty:run-war
Open the following URL to view the recommendation results: http://platformb:8080/RecommenderServlet?userID=1
See screenshots 1 and 2: the first column of the results is the recommendation score, and the second is the movie ID. A complete recommendation feature in just a few simple steps; pretty powerful, isn't it?
The formidable configuration files
Source: Mahout Installation and Configuration