Lucene Series 9: "You May Also Like"

In e-commerce, news media, and similar domains you often meet this requirement: recommend content a user is likely to be interested in, based on what they searched for. To keep the topic from sprawling, user-profile models and how they are built are out of scope here. This post picks just one variant of the requirement: "find the documents similar to a given document", the familiar "related articles" feature on news sites. This is purely content-based analysis, i.e. similarity search.

Background

MoreLikeThis is Lucene's similarity-search component. Its background, as recorded in the class javadoc:

Generate "more like this" similarity queries. Based on this mail:

       Lucene does let you access the document frequency of terms, with IndexReader.docFreq(). Term frequencies can be computed by re-tokenizing the text, which, for a single document, is usually fast enough. But looking up the docFreq() of every term in the document is probably too slow.

       You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much, or at all. Since you're trying to maximize a tf*idf score, you're probably most interested in terms with a high tf. Choosing a tf threshold even as low as two or three will radically reduce the number of terms under consideration. Another heuristic is that terms with a high idf (i.e., a low df) tend to be longer. So you could threshold the terms by the number of characters, not selecting anything less than, e.g., six or seven characters. With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms that do a pretty good job of characterizing a document.

        It all depends on what you're trying to do. If you're trying to eke out that last percent of precision and recall regardless of computational difficulty so that you can win a TREC competition, then the techniques I mention above are useless. But if you're trying to provide a "more like this" button on a search results page that does a decent job and has good performance, such techniques might be useful.

       An efficient, effective "more-like-this" query generator would be a great contribution, if anyone's interested. I'd imagine that it would take a Reader or a String (the document's text), an Analyzer, and return a set of representative terms using heuristics like those above. The frequency and length thresholds could be parameters, etc.
 Doug

The mail above is important (it carries the core ideas) and makes several points. To summarize:

  • Lucene lets you look up a term's document frequency (the number of documents containing it) with IndexReader.docFreq()
  • Re-tokenizing a single document to compute term frequencies is fast, but calling docFreq() for every term is too slow, so thresholds on term frequency and document frequency are used to prune the candidate terms
  • Tuning those term-frequency and document-frequency thresholds is what makes "more like this" practical
  • The input can be a field of an already-indexed Lucene document, or a standalone Reader or String
  • It needs an Analyzer (for tokenization) and a Similarity (for scoring)
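As a rough illustration of the pruning heuristics from the mail (this is plain Java, not Lucene code; the class name and thresholds below are hypothetical), a few lines are enough to shrink the candidate set before any docFreq() lookups:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the heuristics from the mail above: keep only terms
// whose term frequency and character length clear a threshold, so docFreq()
// only has to be called for a small candidate set.
public class TermPruner {
    public static List<String> prune(Map<String, Integer> termFreqs,
                                     int minTermFreq, int minWordLen) {
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, Integer> e : termFreqs.entrySet()) {
            // high tf maximizes tf*idf; longer terms tend to have higher idf
            if (e.getValue() >= minTermFreq && e.getKey().length() >= minWordLen) {
                candidates.add(e.getKey());
            }
        }
        return candidates;
    }
}
```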

How do you use it?

You can treat it simply as a utility class (it generates a Query for you), so there is nothing intimidating about it. Its basic usage pattern looks like this:

IndexReader ir = ...
IndexSearcher is = ...
MoreLikeThis mlt = new MoreLikeThis(ir);
Reader target = ... // orig source of doc you want to find similarities to
Query query = mlt.like( target);
TopDocs hits = is.search(query, 10); // older javadoc examples use the since-removed Hits class

Important MoreLikeThis parameters:

  • private Analyzer analyzer             the analyzer used to tokenize the input
  • private int minTermFreq                 minimum term frequency (default 2)
  • private int minDocFreq                   minimum document frequency (default 5)
  • private int maxDocFreq                  maximum document frequency (default 2147483647, i.e. Integer.MAX_VALUE)
  • private int maxQueryTerms             maximum number of query terms (default 25)
  • private TFIDFSimilarity similarity     computes the idf part of the relevance score

Important methods:

  • public Query like(int docNum)                                                   build the query from an already-indexed document, by Lucene document id
  • public Query like(String fieldName, Reader... readers)             build the query from arbitrary text input

Sample code

public class ScoreSort_Test {
   @Test
    public void go_explain() throws IOException {
        Directory directory = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig();
        config.setUseCompoundFile(true);
        IndexWriter writer = new IndexWriter(directory, config);
        String feildName = "title";
        Field f1 = new TextField(feildName, "life", Field.Store.YES);
        Field f2 = new TextField(feildName, "work", Field.Store.YES);
        Field f3 = new TextField(feildName, "easy for any of us", Field.Store.YES);
        TextField f4 = new TextField(feildName, "above believe us", Field.Store.YES);
        Document doc1 = new Document();
        Document doc2 = new Document();
        Document doc3 = new Document();
        Document doc4 = new Document();
        doc1.add(f1);
        doc2.add(f2);
        doc3.add(f3);
        doc4.add(f4);
        writer.addDocument(doc1);
        writer.addDocument(doc2);
        writer.addDocument(doc3);
        writer.addDocument(doc4);
        writer.close();
        IndexReader reader = DirectoryReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(reader);
        MoreLikeThis mlt = new MoreLikeThis(reader);
        Analyzer analyzer = new StandardAnalyzer();
        mlt.setAnalyzer(analyzer);// must be set; otherwise: java.lang.UnsupportedOperationException: To use MoreLikeThis without term vectors, you must provide an Analyzer
        mlt.setFieldNames(new String[]{feildName});// fields used for the similarity computation
        mlt.setMinTermFreq(1); // default is 2
        mlt.setMinDocFreq(1); // default is 5
        int numDocs = reader.numDocs();
        System.out.println("numDocs:" + numDocs);
        //Query query = mlt.like(docID);
        Query query = mlt.like(feildName, new StringReader("Life is not easy for any of us. We must work, and above all we must believe in ourselves. We must believe ..."));
        System.out.println("query:" + query);
        System.out.println("believe docFreq:" + reader.docFreq(new Term(feildName, "believe")));
        System.out.println("us docFreq:" + reader.docFreq(new Term(feildName, "us")));
        TopDocs topDocs = searcher.search(query, 10);
        ScoreDoc[] scoreDocs = topDocs.scoreDocs;
        for (ScoreDoc scoreDoc : scoreDocs) {
            Document document = searcher.doc(scoreDoc.doc);
            System.out.print("score:" + scoreDoc.score);
            System.out.print("  ");
            System.out.print(document);
            System.out.println();
        }
        reader.close();
        directory.close();
    }

    public class StringReader extends Reader {
        private int pos = 0, size = 0;
        private String s = null;

        public StringReader(String s) {
            setValue(s);
        }
        void setValue(String s) {
            this.s = s;
            this.size = s.length();
            this.pos = 0;
        }
        @Override
        public int read() {
            if (pos < size) {
                return s.charAt(pos++);
            } else {
                s = null;
                return -1;
            }
        }
        @Override
        public int read(char[] c, int off, int len) {
            if (pos < size) {
                len = Math.min(len, size - pos);
                s.getChars(pos, pos + len, c, off);
                pos += len;
                return len;
            } else {
                s = null;
                return -1;
            }
        }
        @Override
        public void close() {
            pos = size; // this prevents NPE when reading after close!
            s = null;
        }
    }

}

numDocs:4
query:title:us title:life title:above title:any title:easy title:work title:believe
believe docFreq:1
us docFreq:2
score:2.574492  Document<stored,indexed,tokenized<title:easy for any of us>>
score:2.574492  Document<stored,indexed,tokenized<title:above believe us>>
score:1.5135659  Document<stored,indexed,tokenized<title:life>>
score:1.5135659  Document<stored,indexed,tokenized<title:work>>

A quick explanation: the printed query is the final product; every term that survived the thresholds became a query clause, e.g. us (docFreq=2, termFreq=1) and believe (docFreq=1, termFreq=2).
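The score that ranks candidate terms internally is tf * idf. Below is a minimal sketch of that arithmetic for the two terms above, assuming the classic Lucene idf form idf = 1 + ln(numDocs / (docFreq + 1)); the exact formula depends on the Similarity implementation and Lucene version, so treat the numbers as illustrative:

```java
// Worked example of the internal term score tf * idf for the run above.
// Assumption: classic idf = 1 + ln(numDocs / (docFreq + 1)); the real value
// depends on the configured Similarity.
public class MltScoreDemo {
    static double idf(int docFreq, int numDocs) {
        return 1.0 + Math.log(numDocs / (double) (docFreq + 1));
    }

    public static void main(String[] args) {
        int numDocs = 4;
        // "believe": tf = 2 in the input text, docFreq = 1 in the index
        double believe = 2 * idf(1, numDocs); // 2 * (1 + ln(4/2)) ≈ 3.386
        // "us": tf = 1 in the input text, docFreq = 2 in the index
        double us = 1 * idf(2, numDocs);      // 1 + ln(4/3) ≈ 1.288
        System.out.printf("believe=%.3f us=%.3f%n", believe, us);
    }
}
```

Note how the rarer term "believe" outscores "us" despite appearing in fewer documents: a low docFreq raises idf, and a high tf multiplies it.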

Source code walkthrough

//from org.apache.lucene.queries.mlt.MoreLikeThis (decompiled output, lightly cleaned up for readability)
public final class MoreLikeThis {

    public Query like(String fieldName, Reader... readers) throws IOException {
        Map<String, Map<String, Int>> perFieldTermFrequencies = new HashMap<>();
        for (Reader r : readers) {
            addTermFrequencies(r, perFieldTermFrequencies, fieldName);
        }
        return createQuery(createQueue(perFieldTermFrequencies));
    }
    //step 1: tokenize the input and accumulate term frequencies into perFieldTermFrequencies
    private void addTermFrequencies(Reader r, Map<String, Map<String, Int>> perFieldTermFrequencies, String fieldName) throws IOException {
        if (analyzer == null) {
            throw new UnsupportedOperationException("To use MoreLikeThis without term vectors, you must provide an Analyzer");
        }
        Map<String, Int> termFreqMap = perFieldTermFrequencies.computeIfAbsent(fieldName, k -> new HashMap<>());
        try (TokenStream ts = analyzer.tokenStream(fieldName, r)) {
            int tokenCount = 0;
            CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                String word = termAtt.toString();
                if (++tokenCount > maxNumTokensParsed) {
                    break;
                }
                if (!isNoiseWord(word)) {
                    Int cnt = termFreqMap.get(word);
                    if (cnt == null) {
                        termFreqMap.put(word, new Int());
                    } else {
                        cnt.x++;
                    }
                }
            }
            ts.end();
        }
    }
    //step 2: the core: walk every term's frequency, drop terms that fail the
    //minTermFreq/minDocFreq/maxDocFreq thresholds, and score the rest with tf * idf
    private PriorityQueue<ScoreTerm> createQueue(Map<String, Map<String, Int>> perFieldTermFrequencies) throws IOException {
        int numDocs = ir.numDocs();
        int limit = Math.min(maxQueryTerms, getTermsCount(perFieldTermFrequencies));
        FreqQ queue = new FreqQ(limit); // keeps only the best `limit` terms

        for (Map.Entry<String, Map<String, Int>> entry : perFieldTermFrequencies.entrySet()) {
            String fieldName = entry.getKey();
            Map<String, Int> perWordTermFrequencies = entry.getValue();

            for (Map.Entry<String, Int> tfEntry : perWordTermFrequencies.entrySet()) {
                String word = tfEntry.getKey();
                int tf = tfEntry.getValue().x;
                if (minTermFreq > 0 && tf < minTermFreq) {
                    continue; // term doesn't occur often enough in the source text
                }
                int docFreq = ir.docFreq(new Term(fieldName, word));
                if (minDocFreq > 0 && docFreq < minDocFreq) {
                    continue; // term doesn't occur in enough index documents
                }
                if (docFreq > maxDocFreq || docFreq == 0) {
                    continue; // term occurs in too many documents, or in none at all
                }

                float idf = similarity.idf(docFreq, numDocs);
                float score = tf * idf;
                if (queue.size() < limit) {
                    queue.add(new ScoreTerm(word, fieldName, score, idf, docFreq, tf));
                } else {
                    ScoreTerm term = queue.top();
                    if (term.score < score) { // replace the current lowest-scoring term
                        term.update(word, fieldName, score, idf, docFreq, tf);
                        queue.updateTop();
                    }
                }
            }
        }
        return queue;
    }
    //step 3: turn the queue into a BooleanQuery of SHOULD TermQuery clauses
    //(query construction was covered in "Lucene Series 7: the search process and IndexSearcher")
    private Query createQuery(PriorityQueue<ScoreTerm> q) {
        BooleanQuery.Builder query = new BooleanQuery.Builder();
        float bestScore = -1.0f;

        ScoreTerm scoreTerm;
        while ((scoreTerm = q.pop()) != null) {
            Query tq = new TermQuery(new Term(scoreTerm.topField, scoreTerm.word));
            if (boost) {
                if (bestScore == -1.0f) {
                    bestScore = scoreTerm.score; // the first popped score becomes the normalization reference
                }
                tq = new BoostQuery(tq, boostFactor * scoreTerm.score / bestScore);
            }
            try {
                query.add(tq, Occur.SHOULD);
            } catch (TooManyClauses ignore) {
                break;
            }
        }
        return query.build();
    }
}
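Putting the three steps together, here is a simplified, self-contained sketch of the same pipeline in plain Java (no Lucene): tokenize and count, filter by the tf/df thresholds, score with tf * idf, and keep only the best maxQueryTerms terms in a size-capped heap. The docFreqs map and the idf formula are hypothetical stand-ins for IndexReader.docFreq() and the configured Similarity.

```java
import java.util.*;

// Simplified sketch of the MoreLikeThis pipeline above. The docFreqs map is a
// stand-in for IndexReader.docFreq(); the idf formula assumes the classic
// 1 + ln(numDocs / (docFreq + 1)) form.
public class MiniMoreLikeThis {
    public static List<String> topTerms(String text, Map<String, Integer> docFreqs,
                                        int numDocs, int minTermFreq, int minDocFreq,
                                        int maxQueryTerms) {
        // step 1: tokenize and count term frequencies
        Map<String, Integer> tf = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) tf.merge(token, 1, Integer::sum);
        }
        // step 2: filter by the tf/df thresholds and score with tf * idf;
        // a size-capped min-heap keeps only the best maxQueryTerms terms
        PriorityQueue<Map.Entry<String, Double>> heap =
                new PriorityQueue<>(Map.Entry.comparingByValue());
        for (Map.Entry<String, Integer> e : tf.entrySet()) {
            int docFreq = docFreqs.getOrDefault(e.getKey(), 0);
            if (e.getValue() < minTermFreq || docFreq < minDocFreq) continue;
            double idf = 1.0 + Math.log(numDocs / (double) (docFreq + 1));
            heap.offer(Map.entry(e.getKey(), e.getValue() * idf));
            if (heap.size() > maxQueryTerms) heap.poll(); // evict the lowest score
        }
        // step 3: emit best-first (the real implementation ORs these together
        // as TermQuery clauses in a BooleanQuery)
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) result.add(heap.poll().getKey());
        Collections.reverse(result);
        return result;
    }
}
```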

To sum up: Lucene's support for "you may also like" style requirements is currently fairly limited; it essentially comes down to MoreLikeThis, which still has plenty of rough edges. For anything more, we can only look forward to later versions.
