/*
* COPIED FROM APACHE LUCENE 4.7.2
*
* Git URL: git@github.com:apache/lucene.git, tag: releases/lucene-solr/4.7.2, path: lucene/core/src/java
*
* (see https://issues.apache.org/jira/browse/OAK-10786 for details)
*/
package org.apache.lucene.search.similarities;
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.io.IOException;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.CollectionStatistics;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.TermStatistics;
import org.apache.lucene.util.BytesRef;
/**
* Implementation of {@link Similarity} with the Vector Space Model.
*
* Expert: Scoring API.
*
* TFIDFSimilarity defines the components of Lucene scoring.
* Overriding computation of these components is a convenient
* way to alter Lucene scoring.
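*
* For example, a subclass could replace the default square-root term-frequency
* saturation with a logarithmic one (a hypothetical sketch; MySimilarity is a
* placeholder name, not part of Lucene):
*
*   public class MySimilarity extends DefaultSimilarity {
*     public float tf(float freq) {                          // overrides tf(t in d)
*       return freq > 0 ? 1 + (float) Math.log(freq) : 0f;   // log-scaled instead of sqrt(freq)
*     }
*   }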
*
*
* Suggested reading:
* Introduction To Information Retrieval, Chapter 6.
*
*
* The following describes how Lucene scoring evolves from
* underlying information retrieval models to (efficient) implementation.
* We first brief on VSM Score,
* then derive from it Lucene's Conceptual Scoring Formula,
* from which, finally, evolves Lucene's Practical Scoring Function
* (the latter is connected directly with Lucene classes and methods).
*
*
* Lucene combines
* Boolean model (BM) of Information Retrieval
* with
* Vector Space Model (VSM) of Information Retrieval -
* documents "approved" by BM are scored by VSM.
*
*
* In VSM, documents and queries are represented as
* weighted vectors in a multi-dimensional space,
* where each distinct index term is a dimension,
* and weights are
* Tf-idf values.
*
*
* VSM does not require weights to be Tf-idf values,
* but Tf-idf values are believed to produce search results of high quality,
* and so Lucene is using Tf-idf.
* Tf and Idf are described in more detail below,
* but for now, for completeness, let's just say that
* for given term t and document (or query) x,
* Tf(t,x) varies with the number of occurrences of term t in x
* (when one increases so does the other) and
* idf(t) similarly varies with the inverse of the
* number of index documents containing term t.
*
*
* VSM score of document d for query q is the
* Cosine Similarity
* of the weighted query vectors V(q) and V(d):
*
*   cosine-similarity(q,d)  =  ( V(q) · V(d) )  /  ( |V(q)| |V(d)| )
*
*   (VSM Score)
*
* Where V(q) · V(d) is the
* dot product
* of the weighted vectors,
* and |V(q)| and |V(d)| are their
* Euclidean norms.
*
* Note: the above equation can be viewed as the dot product of
* the normalized weighted vectors, in the sense that dividing
* V(q) by its euclidean norm is normalizing it to a unit vector.
*
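* A minimal sketch of the formula itself (plain Java over sparse term-weight maps,
* not Lucene API; the names are illustrative only):
*
*   static double cosineSimilarity(Map<String, Double> vq, Map<String, Double> vd) {
*     double dot = 0, normQ = 0, normD = 0;
*     for (Map.Entry<String, Double> e : vq.entrySet()) {
*       Double w = vd.get(e.getKey());
*       if (w != null) dot += e.getValue() * w;              // V(q) · V(d)
*       normQ += e.getValue() * e.getValue();                // |V(q)|²
*     }
*     for (double w : vd.values()) normD += w * w;           // |V(d)|²
*     return dot / (Math.sqrt(normQ) * Math.sqrt(normD));
*   }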
*
* Lucene refines VSM score for both search quality and usability:
*
* - Normalizing V(d) to the unit vector is known to be problematic in that
* it removes all document length information.
* For some documents removing this info is probably ok,
* e.g. a document made by duplicating a certain paragraph 10 times,
* especially if that paragraph is made of distinct terms.
* But for a document which contains no duplicated paragraphs,
* this might be wrong.
* To avoid this problem, a different document length normalization
* factor is used, which normalizes to a vector equal to or larger
* than the unit vector: doc-len-norm(d).
*
*
* - At indexing, users can specify that certain documents are more
* important than others, by assigning a document boost.
* For this, the score of each document is also multiplied by its boost value
* doc-boost(d).
*
*
* - Lucene is field based, hence each query term applies to a single
* field; document length normalization is by the length of that field,
* and in addition to document boost there are also document field boosts.
*
*
* - The same field can be added to a document during indexing several times,
* and so the boost of that field is the multiplication of the boosts of
* the separate additions (or parts) of that field within the document.
*
*
* - At search time users can specify boosts to each query, sub-query, and
* each query term, hence the contribution of a query term to the score of
* a document is multiplied by the boost of that query term query-boost(q).
*
*
* - A document may match a multi term query without containing all
* the terms of that query (this is correct for some of the queries),
* and users can further reward documents matching more query terms
* through a coordination factor, which is usually larger when
* more terms are matched: coord-factor(q,d).
*
*
*
* Under the simplifying assumption of a single field in the index,
* we get Lucene's Conceptual scoring formula:
*
*   score(q,d)  =  coord-factor(q,d) · query-boost(q) · ( V(q) · V(d) / |V(q)| ) · doc-len-norm(d) · doc-boost(d)
*
*   (Lucene Conceptual Scoring Formula)
*
* The conceptual formula is a simplification in the sense that (1) terms and documents
* are fielded and (2) boosts are usually per query term rather than per query.
*
*
* We now describe how Lucene implements this conceptual scoring formula, and
* derive from it Lucene's Practical Scoring Function.
*
*
* For efficient score computation some scoring components
* are computed and aggregated in advance:
*
*
* - Query-boost for the query (actually for each query term)
* is known when search starts.
*
*
* - Query Euclidean norm |V(q)| can be computed when search starts,
* as it is independent of the document being scored.
* From a search optimization perspective, it is a valid question
* why bother to normalize the query at all, because all
* scored documents will be multiplied by the same |V(q)|,
* and hence documents ranks (their order by score) will not
* be affected by this normalization.
* There are two good reasons to keep this normalization:
*
* - Recall that
*
* Cosine Similarity can be used to find how similar
* two documents are. One can use Lucene for e.g.
* clustering, and use a document as a query to compute
* its similarity to other documents.
* In this use case it is important that the score of document d3
* for query d1 is comparable to the score of document d3
* for query d2. In other words, scores of a document for two
* distinct queries should be comparable.
* There are other applications that may require this.
* And this is exactly what normalizing the query vector V(q)
* provides: comparability (to a certain extent) of two or more queries.
*
*
* - Applying query normalization on the scores helps to keep the
* scores around the unit vector, hence preventing loss of score data
* because of floating point precision limitations.
*
*
*
*
* - Document length norm doc-len-norm(d) and document
* boost doc-boost(d) are known at indexing time.
* They are computed in advance and their multiplication
* is saved as a single value in the index: norm(d).
* (In the equations below, norm(t in d) means norm(field(t) in doc d)
* where field(t) is the field associated with term t.)
*
*
*
* Lucene's Practical Scoring Function is derived from the above.
* Its components correspond to those of the conceptual formula:
*
*   score(q,d)  =  coord(q,d) · queryNorm(q) ·  ∑  ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )
*                                              t in q
*
*   (Lucene Practical Scoring Function)
*
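* As a worked illustration (assuming DefaultSimilarity and all boosts equal to 1,
* not a general identity): for a single-term query, sumOfSquaredWeights = idf(t)²,
* so queryNorm(q) = 1 / idf(t) and coord(q,d) = 1; the score then reduces to
* tf(t in d) · idf(t) · norm(t,d).
*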
* where
*
* - tf(t in d) correlates to the term's frequency,
* defined as the number of times term t appears in the currently scored document d.
* Documents that have more occurrences of a given term receive a higher score.
* Note that tf(t in q) is assumed to be 1 and therefore it does not appear in this equation.
* However, if a query contains the same term twice, there will be
* two term-queries with that same term and hence the computation would still be correct (although
* not very efficient).
* The default computation for tf(t in d) in
* {@link org.apache.lucene.search.similarities.DefaultSimilarity#tf(float) DefaultSimilarity} is:
*
*   {@link org.apache.lucene.search.similarities.DefaultSimilarity#tf(float) tf(t in d)}  =  frequency½
*
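* For example, with this default a term occurring 4 times in a document
* contributes tf(t in d) = 4½ = 2.0.
*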
* - idf(t) stands for Inverse Document Frequency. This value
* correlates to the inverse of docFreq
* (the number of documents in which the term t appears).
* This means rarer terms give higher contribution to the total score.
* idf(t) appears for t in both the query and the document,
* hence it is squared in the equation.
* The default computation for idf(t) in
* {@link org.apache.lucene.search.similarities.DefaultSimilarity#idf(long, long) DefaultSimilarity} is:
*
*   {@link org.apache.lucene.search.similarities.DefaultSimilarity#idf(long, long) idf(t)}  =  1 + log ( numDocs / (docFreq + 1) )
*
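* For example (a worked illustration, assuming the natural logarithm used by
* DefaultSimilarity): with numDocs = 1000 and docFreq = 9,
* idf(t) = 1 + ln(1000 / (9 + 1)) = 1 + ln(100) ≈ 5.61.
*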
* - coord(q,d) is a score factor based on how many of the query terms are found in the specified document.
* Typically, a document that contains more of the query's terms will receive a higher score
* than another document with fewer query terms.
* This is a search time factor computed in
* {@link #coord(int, int) coord(q,d)}
* by the Similarity in effect at search time.
*
*
*
* - queryNorm(q) is a normalizing factor used to make scores between queries comparable.
* This factor does not affect document ranking (since all ranked documents are multiplied by the same factor),
* but rather just attempts to make scores from different queries (or even different indexes) comparable.
* This is a search time factor computed by the Similarity in effect at search time.
*
* The default computation in
* {@link org.apache.lucene.search.similarities.DefaultSimilarity#queryNorm(float) DefaultSimilarity}
* produces a Euclidean norm:
*
*   queryNorm(q)  =  {@link org.apache.lucene.search.similarities.DefaultSimilarity#queryNorm(float) queryNorm(sumOfSquaredWeights)}  =  1 / sumOfSquaredWeights½
*
* The sum of squared weights (of the query terms) is
* computed by the query {@link org.apache.lucene.search.Weight} object.
* For example, a {@link org.apache.lucene.search.BooleanQuery}
* computes this value as:
*
*   {@link org.apache.lucene.search.Weight#getValueForNormalization() sumOfSquaredWeights}  =
*   {@link org.apache.lucene.search.Query#getBoost() q.getBoost()}²  ·  ∑  ( idf(t) · t.getBoost() )²
*                                                                      t in q
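*
* For example, for a two-term query with all boosts equal to 1 this gives
* sumOfSquaredWeights = idf(t1)² + idf(t2)², and hence
* queryNorm(q) = 1 / (idf(t1)² + idf(t2)²)½ (a worked illustration, not additional API).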
*
* - t.getBoost() is a search time boost of term t in the query q as
* specified in the query text
* (see query syntax),
* or as set by application calls to
* {@link org.apache.lucene.search.Query#setBoost(float) setBoost()}.
* Notice that there is really no direct API for accessing the boost of one term in a multi term query;
* rather, multiple terms are represented in a query as multiple
* {@link org.apache.lucene.search.TermQuery TermQuery} objects,
* and so the boost of a term in the query is accessible by calling the sub-query's
* {@link org.apache.lucene.search.Query#getBoost() getBoost()}.
*
*
*
* - norm(t,d) encapsulates a few (indexing time) boost and length factors:
*
*
* - Field boost - set by calling
* {@link org.apache.lucene.document.Field#setBoost(float) field.setBoost()}
* before adding the field to a document.
*
* - lengthNorm - computed
* when the document is added to the index in accordance with the number of tokens
* of this field in the document, so that shorter fields contribute more to the score.
* LengthNorm is computed by the Similarity class in effect at indexing.
*
*
* The {@link #computeNorm} method is responsible for
* combining all of these factors into a single float.
*
*
* When a document is added to the index, all the above factors are multiplied.
* If the document has multiple fields with the same name, all their boosts are multiplied together:
*
*   norm(t,d)  =  lengthNorm  ·  ∏  {@link org.apache.lucene.index.IndexableField#boost() f.boost}()
*                             field f in d named as t
*
* Note that search time is too late to modify this norm part of scoring,
* e.g. by using a different {@link Similarity} for search.
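*
* For illustration (assuming DefaultSimilarity, whose lengthNorm is
* boost / numTerms½, before the lossy single-byte encoding): a 4-token field with
* boost 1 gets lengthNorm = 0.5, while a 16-token field gets 0.25.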
*
*
*
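* A custom similarity must be set both at indexing time (it determines the stored
* norms) and at search time. A minimal configuration sketch, reusing the hypothetical
* MySimilarity above (analyzer, directory and reader are placeholders):
*
*   Similarity sim = new MySimilarity();
*   IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_47, analyzer);
*   iwc.setSimilarity(sim);                  // used when norms are computed at index time
*   IndexWriter writer = new IndexWriter(directory, iwc);
*
*   IndexSearcher searcher = new IndexSearcher(reader);
*   searcher.setSimilarity(sim);             // used when documents are scored at search time
*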
* @see org.apache.lucene.index.IndexWriterConfig#setSimilarity(Similarity)
* @see IndexSearcher#setSimilarity(Similarity)
*/
public abstract class TFIDFSimilarity extends Similarity {
/**
* Sole constructor. (For invocation by subclass
* constructors, typically implicit.)
*/
public TFIDFSimilarity() {}
/** Computes a score factor based on the fraction of all query terms that a
* document contains. This value is multiplied into scores.
*
* The presence of a large portion of the query terms indicates a better
* match with the query, so implementations of this method usually return
* larger values when the ratio between these parameters is large and smaller
* values when the ratio between them is small.
*
* @param overlap the number of query terms matched in the document
* @param maxOverlap the total number of terms in the query
* @return a score factor based on term overlap with the query
*/
@Override
public abstract float coord(int overlap, int maxOverlap);
/** Computes the normalization value for a query given the sum of the squared
* weights of each of the query terms. This value is multiplied into the
* weight of each query term. While the classic query normalization factor is
* computed as 1/sqrt(sumOfSquaredWeights), other implementations might
* completely ignore sumOfSquaredWeights (i.e. return 1).
*
*
* This does not affect ranking, but the default implementation does make scores
* from different queries more comparable than they would be by eliminating the
* magnitude of the Query vector as a factor in the score.
*
* @param sumOfSquaredWeights the sum of the squares of query term weights
* @return a normalization factor for query weights
*/
@Override
public abstract float queryNorm(float sumOfSquaredWeights);
/** Computes a score factor based on a term or phrase's frequency in a
* document. This value is multiplied by the {@link #idf(long, long)}
* factor for each term in the query and these products are then summed to
* form the initial score for a document.
*
*
* Terms and phrases repeated in a document indicate the topic of the
* document, so implementations of this method usually return larger values
* when freq is large, and smaller values when freq is small.
*
* @param freq the frequency of a term within a document
* @return a score factor based on a term's within-document frequency
*/
public abstract float tf(float freq);
/**
* Computes a score factor for a simple term and returns an explanation
* for that score factor.
*
*
* The default implementation uses:
*
*
* idf(docFreq, searcher.maxDoc());
*
*
* Note that {@link CollectionStatistics#maxDoc()} is used instead of
* {@link org.apache.lucene.index.IndexReader#numDocs() IndexReader#numDocs()} because
* {@link TermStatistics#docFreq()} is also used, and when the latter
* is inaccurate, so is {@link CollectionStatistics#maxDoc()}, and in the same direction.
* In addition, {@link CollectionStatistics#maxDoc()} is more efficient to compute.
*
* @param collectionStats collection-level statistics
* @param termStats term-level statistics for the term
* @return an Explain object that includes both an idf score factor
* and an explanation for the term.
*/
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics termStats) {
final long df = termStats.docFreq();
final long max = collectionStats.maxDoc();
final float idf = idf(df, max);
return new Explanation(idf, "idf(docFreq=" + df + ", maxDocs=" + max + ")");
}
/**
* Computes a score factor for a phrase.
*
*
* The default implementation sums the idf factor for
* each term in the phrase.
*
* @param collectionStats collection-level statistics
* @param termStats term-level statistics for the terms in the phrase
* @return an Explain object that includes both an idf
* score factor for the phrase and an explanation
* for each term.
*/
public Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics termStats[]) {
final long max = collectionStats.maxDoc();
float idf = 0.0f;
final Explanation exp = new Explanation();
exp.setDescription("idf(), sum of:");
for (final TermStatistics stat : termStats ) {
final long df = stat.docFreq();
final float termIdf = idf(df, max);
exp.addDetail(new Explanation(termIdf, "idf(docFreq=" + df + ", maxDocs=" + max + ")"));
idf += termIdf;
}
exp.setValue(idf);
return exp;
}
/** Computes a score factor based on a term's document frequency (the number
* of documents which contain the term). This value is multiplied by the
* {@link #tf(float)} factor for each term in the query and these products are
* then summed to form the initial score for a document.
*
*
* Terms that occur in fewer documents are better indicators of topic, so
* implementations of this method usually return larger values for rare terms,
* and smaller values for common terms.
*
* @param docFreq the number of documents which contain the term
* @param numDocs the total number of documents in the collection
* @return a score factor based on the term's document frequency
*/
public abstract float idf(long docFreq, long numDocs);
/**
* Compute an index-time normalization value for this field instance.
*
* This value will be stored in a single byte lossy representation by
* {@link #encodeNormValue(float)}.
*
* @param state statistics of the current field (such as length, boost, etc)
* @return an index-time normalization value
*/
public abstract float lengthNorm(FieldInvertState state);
@Override
public final long computeNorm(FieldInvertState state) {
float normValue = lengthNorm(state);
return encodeNormValue(normValue);
}
/**
* Decodes a normalization factor stored in an index.
*
* @see #encodeNormValue(float)
*/
public abstract float decodeNormValue(long norm);
/** Encodes a normalization factor for storage in an index. */
public abstract long encodeNormValue(float f);
/** Computes the amount of a sloppy phrase match, based on an edit distance.
* This value is summed for each sloppy phrase match in a document to form
* the frequency to be used in scoring instead of the exact term count.
*
*
* A phrase match with a small edit distance to a document passage more
* closely matches the document, so implementations of this method usually
* return larger values when the edit distance is small and smaller values
* when it is large.
*
* @see PhraseQuery#setSlop(int)
* @param distance the edit distance of this sloppy phrase match
* @return the frequency increment for this match
*/
public abstract float sloppyFreq(int distance);
/**
* Calculate a scoring factor based on the data in the payload. Implementations
* are responsible for interpreting what is in the payload. Lucene makes no assumptions about
* what is in the byte array.
*
* @param doc The docId currently being scored.
* @param start The start position of the payload
* @param end The end position of the payload
* @param payload The payload byte array to be scored
* @return An implementation dependent float to be used as a scoring factor
*/
public abstract float scorePayload(int doc, int start, int end, BytesRef payload);
@Override
public final SimWeight computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats) {
final Explanation idf = termStats.length == 1
? idfExplain(collectionStats, termStats[0])
: idfExplain(collectionStats, termStats);
return new IDFStats(collectionStats.field(), idf, queryBoost);
}
@Override
public final SimScorer simScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
IDFStats idfstats = (IDFStats) stats;
return new TFIDFSimScorer(idfstats, context.reader().getNormValues(idfstats.field));
}
private final class TFIDFSimScorer extends SimScorer {
private final IDFStats stats;
private final float weightValue;
private final NumericDocValues norms;
TFIDFSimScorer(IDFStats stats, NumericDocValues norms) throws IOException {
this.stats = stats;
this.weightValue = stats.value;
this.norms = norms;
}
@Override
public float score(int doc, float freq) {
final float raw = tf(freq) * weightValue; // compute tf(f)*weight
return norms == null ? raw : raw * decodeNormValue(norms.get(doc)); // normalize for field
}
@Override
public float computeSlopFactor(int distance) {
return sloppyFreq(distance);
}
@Override
public float computePayloadFactor(int doc, int start, int end, BytesRef payload) {
return scorePayload(doc, start, end, payload);
}
@Override
public Explanation explain(int doc, Explanation freq) {
return explainScore(doc, freq, stats, norms);
}
}
/** Collection statistics for the TF-IDF model. The only statistic of interest
* to this model is idf. */
private static class IDFStats extends SimWeight {
private final String field;
/** The idf and its explanation */
private final Explanation idf;
private float queryNorm;
private float queryWeight;
private final float queryBoost;
private float value;
public IDFStats(String field, Explanation idf, float queryBoost) {
// TODO: Validate?
this.field = field;
this.idf = idf;
this.queryBoost = queryBoost;
this.queryWeight = idf.getValue() * queryBoost; // compute query weight
}
@Override
public float getValueForNormalization() {
// TODO: (sorta LUCENE-1907) make non-static class and expose this squaring via a nice method to subclasses?
return queryWeight * queryWeight; // sum of squared weights
}
@Override
public void normalize(float queryNorm, float topLevelBoost) {
this.queryNorm = queryNorm * topLevelBoost;
queryWeight *= this.queryNorm; // normalize query weight
value = queryWeight * idf.getValue(); // idf for document
}
}
private Explanation explainScore(int doc, Explanation freq, IDFStats stats, NumericDocValues norms) {
Explanation result = new Explanation();
result.setDescription("score(doc="+doc+",freq="+freq+"), product of:");
// explain query weight
Explanation queryExpl = new Explanation();
queryExpl.setDescription("queryWeight, product of:");
Explanation boostExpl = new Explanation(stats.queryBoost, "boost");
if (stats.queryBoost != 1.0f)
queryExpl.addDetail(boostExpl);
queryExpl.addDetail(stats.idf);
Explanation queryNormExpl = new Explanation(stats.queryNorm,"queryNorm");
queryExpl.addDetail(queryNormExpl);
queryExpl.setValue(boostExpl.getValue() *
stats.idf.getValue() *
queryNormExpl.getValue());
result.addDetail(queryExpl);
// explain field weight
Explanation fieldExpl = new Explanation();
fieldExpl.setDescription("fieldWeight in "+doc+
", product of:");
Explanation tfExplanation = new Explanation();
tfExplanation.setValue(tf(freq.getValue()));
tfExplanation.setDescription("tf(freq="+freq.getValue()+"), with freq of:");
tfExplanation.addDetail(freq);
fieldExpl.addDetail(tfExplanation);
fieldExpl.addDetail(stats.idf);
Explanation fieldNormExpl = new Explanation();
float fieldNorm = norms != null ? decodeNormValue(norms.get(doc)) : 1.0f;
fieldNormExpl.setValue(fieldNorm);
fieldNormExpl.setDescription("fieldNorm(doc="+doc+")");
fieldExpl.addDetail(fieldNormExpl);
fieldExpl.setValue(tfExplanation.getValue() *
stats.idf.getValue() *
fieldNormExpl.getValue());
result.addDetail(fieldExpl);
// combine them
result.setValue(queryExpl.getValue() * fieldExpl.getValue());
if (queryExpl.getValue() == 1.0f)
return fieldExpl;
return result;
}
}