Lucene Series 9: Guess What You Like

In e-commerce, news media, and similar domains, a common requirement is to recommend content a user might be interested in based on what they search for. To keep the topic from sprawling, this article does not discuss user-profile models or how to build them. It picks just one variant of the requirement: finding documents similar to a given document, as seen in "related articles" features on news sites. This is purely content-based analysis, i.e., similarity search.

Background

MoreLikeThis is Lucene's similarity-search component. It originated from this mail:

Generate "more like this" similarity queries. Based on this mail:

    Lucene does let you access the document frequency of terms, with IndexReader.docFreq(). Term frequencies can be computed by re-tokenizing the text, which, for a single document, is usually fast enough. But looking up the docFreq() of every term in the document is probably too slow.

    You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much, or at all. Since you're trying to maximize a tf*idf score, you're probably most interested in terms with a high tf. Choosing a tf threshold even as low as two or three will radically reduce the number of terms under consideration. Another heuristic is that terms with a high idf (i.e., a low df) tend to be longer. So you could threshold the terms by the number of characters, not selecting anything less than, e.g., six or seven characters. With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms that do a pretty good job of characterizing a document.

    It all depends on what you're trying to do. If you're trying to eke out that last percent of precision and recall regardless of computational difficulty so that you can win a TREC competition, then the techniques I mention above are useless. But if you're trying to provide a "more like this" button on a search results page that does a decent job and has good performance, such techniques might be useful.

    An efficient, effective "more-like-this" query generator would be a great contribution, if anyone's interested. I'd imagine that it would take a Reader or a String (the document's text), an Analyzer, and return a set of representative terms using heuristics like those above. The frequency and length thresholds could be parameters, etc.

    Doug

This passage is important (it carries the core idea) and conveys several points, which I summarize as follows:

  • Lucene lets you access a term's document frequency (the number of documents containing it) via IndexReader.docFreq()
  • Re-tokenizing a single document to get term frequencies is fast, but calling docFreq() for every term is too slow, so the term set is pruned with term-frequency and document-frequency (and length) thresholds
  • Tuning those frequency thresholds is essentially all it takes to build "more like this"
  • The input can be a field of an indexed Lucene document, or a standalone Reader or String
  • It relies on an Analyzer and a similarity (for scoring)
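The pruning heuristic from the mail above can be sketched in plain Java (no Lucene involved; the class name and thresholds here are illustrative, not anything from Lucene's API):

```java
import java.util.*;

// A minimal sketch of Doug's pruning heuristic: keep only terms whose term
// frequency reaches a threshold, and whose character length suggests a high
// idf, so that docFreq() only needs to be called for a small candidate set.
public class TermPruning {
    public static List<String> candidateTerms(String text, int minTermFreq, int minWordLen) {
        // naive whitespace/punctuation tokenization stands in for an Analyzer
        Map<String, Integer> tf = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) tf.merge(token, 1, Integer::sum);
        }
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, Integer> e : tf.entrySet()) {
            // tf threshold: frequent terms dominate a tf*idf score;
            // length threshold: longer terms tend to have a higher idf
            if (e.getValue() >= minTermFreq && e.getKey().length() >= minWordLen) {
                candidates.add(e.getKey());
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        String text = "we must work and above all we must believe in ourselves we must believe";
        System.out.println(candidateTerms(text, 2, 6)); // prints [believe]
    }
}
```

Even on this tiny input, the two thresholds cut nine distinct tokens down to one candidate, which is exactly the effect the mail describes.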

How do you use it?

You can treat it as a utility class (it generates a Query for you), so there is no need to be intimidated by it. Its basic usage looks like this:

IndexReader ir = ...
IndexSearcher is = ...
MoreLikeThis mlt = new MoreLikeThis(ir);
Reader target = ... // orig source of doc you want to find similarities to
Query query = mlt.like( target);
TopDocs hits = is.search(query, 10);

Important MoreLikeThis parameters:

  • private Analyzer analyzer             the analyzer used to tokenize the input
  • private int minTermFreq               minimum term frequency (default 2)
  • private int minDocFreq                minimum document frequency (default 5)
  • private int maxDocFreq                maximum document frequency (default 2147483647, i.e. Integer.MAX_VALUE)
  • private int maxQueryTerms             maximum number of terms in the generated query (default 25)
  • private TFIDFSimilarity similarity    used to compute relevance (idf)

Important methods:

  • public Query like(int docNum)                                  find similar documents by Lucene document id
  • public Query like(String fieldName, Reader... readers)         build the query from raw String/Reader input

Example code

public class ScoreSort_Test {
   @Test
    public void go_explain() throws IOException {
        Directory directory = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig();
        config.setUseCompoundFile(true);
        IndexWriter writer = new IndexWriter(directory, config);
        String fieldName = "title";
        Field f1 = new TextField(fieldName, "life", Field.Store.YES);
        Field f2 = new TextField(fieldName, "work", Field.Store.YES);
        Field f3 = new TextField(fieldName, "easy for any of us", Field.Store.YES);
        TextField f4 = new TextField(fieldName, "above believe us", Field.Store.YES);
        Document doc1 = new Document();
        Document doc2 = new Document();
        Document doc3 = new Document();
        Document doc4 = new Document();
        doc1.add(f1);
        doc2.add(f2);
        doc3.add(f3);
        doc4.add(f4);
        writer.addDocument(doc1);
        writer.addDocument(doc2);
        writer.addDocument(doc3);
        writer.addDocument(doc4);
        writer.close();
        IndexReader reader = DirectoryReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(reader);
        MoreLikeThis mlt = new MoreLikeThis(reader);
        Analyzer analyzer = new StandardAnalyzer();
        mlt.setAnalyzer(analyzer); // required, otherwise: java.lang.UnsupportedOperationException: To use MoreLikeThis without term vectors, you must provide an Analyzer
        mlt.setFieldNames(new String[]{fieldName}); // fields used for the comparison
        mlt.setMinTermFreq(1); // default is 2
        mlt.setMinDocFreq(1); // default is 5
        int maxDoc = reader.maxDoc();
        System.out.println("numDocs:" + maxDoc);
        //Query query = mlt.like(docID);
        Query query = mlt.like(fieldName, new StringReader("Life is not easy for any of us. We must work,and above all we must believe in ourselves .We must believe ..."));
        System.out.println("query:" + query);
        System.out.println("believe docFreq:" + reader.docFreq(new Term(fieldName, "believe")));
        System.out.println("us docFreq:" + reader.docFreq(new Term(fieldName, "us")));
        TopDocs topDocs = searcher.search(query, 10);
        ScoreDoc[] scoreDocs = topDocs.scoreDocs;
        for (ScoreDoc scoreDoc : scoreDocs) {
            Document document = searcher.doc(scoreDoc.doc);
            System.out.print("score:" + scoreDoc.score);
            System.out.print("  ");
            System.out.print(document);
            System.out.println();
        }
        reader.close();
        directory.close();
    }

    public class StringReader extends Reader {
        private int pos = 0, size = 0;
        private String s = null;

        public StringReader(String s) {
            setValue(s);
        }
        void setValue(String s) {
            this.s = s;
            this.size = s.length();
            this.pos = 0;
        }
        @Override
        public int read() {
            if (pos < size) {
                return s.charAt(pos++);
            } else {
                s = null;
                return -1;
            }
        }
        @Override
        public int read(char[] c, int off, int len) {
            if (pos < size) {
                len = Math.min(len, size - pos);
                s.getChars(pos, pos + len, c, off);
                pos += len;
                return len;
            } else {
                s = null;
                return -1;
            }
        }
        @Override
        public void close() {
            pos = size; // this prevents NPE when reading after close!
            s = null;
        }
    }

}

numDocs:4
query:title:us title:life title:above title:any title:easy title:work title:believe
believe docFreq:1
us docFreq:2
score:2.574492  Document<stored,indexed,tokenized<title:easy for any of us>>
score:2.574492  Document<stored,indexed,tokenized<title:above believe us>>
score:1.5135659  Document<stored,indexed,tokenized<title:life>>
score:1.5135659  Document<stored,indexed,tokenized<title:work>>

A quick explanation: the printed query is the final generated query. For this input, us has (docFreq=2, termFreq=1) and believe has (docFreq=1, termFreq=2).
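To see why both terms make it into the query, we can compute rough tf*idf weights by hand. This is a hand-rolled check, not Lucene's exact code; the idf formula below follows the classic log(numDocs / (docFreq + 1)) + 1 shape, and since the exact constants vary between Lucene versions and Similarity implementations, only the ordering matters here:

```java
// Rough, hand-rolled check of how MoreLikeThis weighs terms: score = tf * idf.
public class TfIdfCheck {
    // classic-style idf: rarer terms (lower docFreq) get a higher weight
    static double idf(int docFreq, int numDocs) {
        return Math.log((double) numDocs / (docFreq + 1)) + 1.0;
    }

    public static void main(String[] args) {
        int numDocs = 4;
        // from the example output: believe(docFreq=1, tf=2), us(docFreq=2, tf=1)
        double believe = 2 * idf(1, numDocs);
        double us = 1 * idf(2, numDocs);
        // "believe" is rarer in the index AND more frequent in the input,
        // so it outweighs "us" when query terms are selected
        System.out.println(believe > us); // true
    }
}
```

This also explains the result order above: the two documents containing the selected terms "us" and "believe" plus several others tie at the top, while "life" and "work" match only a single low-weight term each.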

Source code walkthrough

//from org.apache.lucene.queries.mlt.MoreLikeThis (decompiled, cleaned up here for readability)
public final class MoreLikeThis {

    public Query like(String fieldName, Reader... readers) throws IOException {
        Map<String, Map<String, Int>> perFieldTermFrequencies = new HashMap<>();
        for (Reader r : readers) {
            addTermFrequencies(r, perFieldTermFrequencies, fieldName);
        }
        return createQuery(createQueue(perFieldTermFrequencies));
    }
    // step 1: tokenize the input and accumulate term frequencies into perFieldTermFrequencies
    private void addTermFrequencies(Reader r, Map<String, Map<String, Int>> perFieldTermFrequencies, String fieldName) throws IOException {
        if (analyzer == null) {
            throw new UnsupportedOperationException("To use MoreLikeThis without term vectors, you must provide an Analyzer");
        }
        Map<String, Int> termFreqMap = perFieldTermFrequencies.computeIfAbsent(fieldName, k -> new HashMap<>());
        try (TokenStream ts = analyzer.tokenStream(fieldName, r)) {
            int tokenCount = 0;
            CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                String word = termAtt.toString();
                if (++tokenCount > maxNumTokensParsed) {
                    break; // cap the amount of input we are willing to parse
                }
                if (!isNoiseWord(word)) {
                    Int cnt = termFreqMap.get(word);
                    if (cnt == null) {
                        termFreqMap.put(word, new Int());
                    } else {
                        cnt.x++;
                    }
                }
            }
            ts.end();
        }
    }
    // step 2: the core: filter the collected term frequencies one term at a time;
    // for each term that passes, compute a score and keep the best in a priority queue
    private PriorityQueue<ScoreTerm> createQueue(Map<String, Map<String, Int>> perFieldTermFrequencies) throws IOException {
        int numDocs = ir.numDocs();
        int limit = Math.min(maxQueryTerms, getTermsCount(perFieldTermFrequencies));
        FreqQ queue = new FreqQ(limit); // orders terms by score
        for (Map.Entry<String, Map<String, Int>> entry : perFieldTermFrequencies.entrySet()) {
            Map<String, Int> perWordTermFrequencies = entry.getValue();
            String fieldName = entry.getKey();
            for (Map.Entry<String, Int> tfEntry : perWordTermFrequencies.entrySet()) {
                String word = tfEntry.getKey();
                int tf = tfEntry.getValue().x;
                if (minTermFreq > 0 && tf < minTermFreq) {
                    continue; // filter out words that don't occur enough in the input
                }
                int docFreq = ir.docFreq(new Term(fieldName, word));
                if (minDocFreq > 0 && docFreq < minDocFreq) {
                    continue; // filter out words that don't occur in enough documents
                }
                if (docFreq > maxDocFreq || docFreq == 0) {
                    continue; // filter out words that occur in too many documents, or not at all
                }
                float idf = similarity.idf(docFreq, numDocs);
                float score = tf * idf;
                if (queue.size() < limit) {
                    queue.add(new ScoreTerm(word, fieldName, score, idf, docFreq, tf));
                } else {
                    // queue is full: replace the current minimum if this term scores higher
                    ScoreTerm term = queue.top();
                    if (term.score < score) {
                        term.update(word, fieldName, score, idf, docFreq, tf);
                        queue.updateTop();
                    }
                }
            }
        }
        return queue;
    }
    // step 3: build the final query (see the earlier post "Lucene Series 7: the search process and IndexSearcher")
    private Query createQuery(PriorityQueue<ScoreTerm> q) {
        BooleanQuery.Builder query = new BooleanQuery.Builder();
        float bestScore = -1;

        ScoreTerm scoreTerm;
        while ((scoreTerm = q.pop()) != null) {
            Query tq = new TermQuery(new Term(scoreTerm.topField, scoreTerm.word));
            if (boost) {
                if (bestScore == -1) {
                    bestScore = scoreTerm.score; // score of the first popped term is the reference
                }
                float myScore = scoreTerm.score;
                tq = new BoostQuery(tq, boostFactor * myScore / bestScore);
            }

            try {
                query.add(tq, BooleanClause.Occur.SHOULD);
            } catch (BooleanQuery.TooManyClauses ignore) {
                break;
            }
        }

        return query.build();
    }
}
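Steps 2 and 3 boil down to: keep the highest-scoring terms in a size-bounded min-heap, then give each resulting term query a boost proportional to its score, normalized by a reference score. The sketch below shows that shape in plain Java; ScoreTerm here is a simplified stand-in for MoreLikeThis's private inner class, and a JDK PriorityQueue stands in for Lucene's:

```java
import java.util.*;

// Plain-Java sketch of steps 2 and 3: select the top-N terms by score with a
// min-heap, then compute a relative boost for each selected term.
public class TopTermsSketch {
    record ScoreTerm(String word, float score) {}

    static List<ScoreTerm> topTerms(Map<String, Float> scoredTerms, int limit) {
        // min-heap: the worst of the kept terms sits on top, ready to be evicted
        PriorityQueue<ScoreTerm> queue = new PriorityQueue<>(Comparator.comparingDouble(ScoreTerm::score));
        for (Map.Entry<String, Float> e : scoredTerms.entrySet()) {
            if (queue.size() < limit) {
                queue.add(new ScoreTerm(e.getKey(), e.getValue()));
            } else if (queue.peek().score() < e.getValue()) {
                queue.poll(); // evict the current minimum in favor of a better term
                queue.add(new ScoreTerm(e.getKey(), e.getValue()));
            }
        }
        List<ScoreTerm> result = new ArrayList<>(queue);
        result.sort(Comparator.comparingDouble(ScoreTerm::score).reversed());
        return result;
    }

    public static void main(String[] args) {
        // scores loosely modeled on the example above; values are illustrative
        Map<String, Float> scores = Map.of("believe", 3.4f, "work", 1.5f, "life", 1.5f, "us", 1.3f);
        List<ScoreTerm> top = topTerms(scores, 2);
        float reference = top.get(0).score();
        for (ScoreTerm t : top) {
            // mirrors createQuery: boostFactor * score / referenceScore (boostFactor = 1 here)
            System.out.println(t.word() + " boost=" + (t.score() / reference));
        }
    }
}
```

With maxQueryTerms as the heap limit, this is why a long input document still produces a bounded query: low-scoring terms are simply never admitted, or are evicted as better ones arrive.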

To sum up: for "guess what you like" style requirements, Lucene's built-in support is currently quite limited, and it comes mainly through MoreLikeThis. The component still has plenty of shortcomings, so we can only look forward to future versions.
