using System.Collections.Generic;
using lucere.analysis.query;
using lucere.entity;
using lucere.index.inversion;
using lucere.index.read;
using lucere.index.write;
using lucere.io;

namespace lucere.service.search
{
	///<summary>
	/// Expert: Scoring API.
	///
	/// <p>Similarity defines the components of Lucene scoring.
	/// Overriding computation of these components is a convenient
	/// way to alter Lucene scoring.
	/// </p>
	/// <p>Suggested reading:
	/// <a href="http://nlp.stanford.edu/IR-book/html/htmledition/queries-as-vectors-1.html">
	/// Introduction To Information Retrieval, Chapter 6</a>.
	///</p>
	/// <p>The following describes how Lucene scoring evolves from
	/// underlying information retrieval models to (efficient) implementation.
	/// We first briefly cover <i>VSM Score</i>, 
	/// then derive from it <i>Lucene's Conceptual Scoring Formula</i>,
	/// from which, finally, evolves <i>Lucene's Practical Scoring Function</i> 
	/// (the latter is connected directly with Lucene classes and methods).    
	///</p>
	/// <p>Lucene combines
	/// <a href="http://en.wikipedia.org/wiki/Standard_Boolean_model">
	/// Boolean model (BM) of Information Retrieval</a>
	/// with
	/// <a href="http://en.wikipedia.org/wiki/Vector_Space_Model">
	/// Vector Space Model (VSM) of Information Retrieval</a> -
	/// documents "approved" by BM are scored by VSM.
	///</p>
	/// <p>In VSM, documents and queries are represented as
	/// weighted vectors in a multi-dimensional space,
	/// where each distinct index term is a dimension,
	/// and weights are
	/// <a href="http://en.wikipedia.org/wiki/Tfidf">Tf-idf</a> values.
	///</p>
	/// <p>VSM does not require weights to be <i>Tf-idf</i> values,
	/// but <i>Tf-idf</i> values are believed to produce search results of high quality,
	/// and so Lucene uses <i>Tf-idf</i>.
	/// <i>Tf</i> and <i>Idf</i> are described in more detail below,
	/// but for now, for completeness, let's just say that
	/// for a given term <i>t</i> and document (or query) <i>x</i>,
	/// <i>Tf(t,x)</i> varies with the number of occurrences of term <i>t</i> in <i>x</i>
	/// (when one increases so does the other) and
	/// <i>idf(t)</i> similarly varies with the inverse of the
	/// number of index documents containing term <i>t</i>.
	///</p>
	/// <p><i>VSM score</i> of document <i>d</i> for query <i>q</i> is the
	/// <a href="http://en.wikipedia.org/wiki/Cosine_similarity">
	/// Cosine Similarity</a>
	/// of the weighted query vectors <i>V(q)</i> and <i>V(d)</i>:
	///</p>
	///  <br/><br/>
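	/// In symbols:
	/// <pre>
	///                           V(q) · V(d)
	/// cosine-similarity(q,d) = ----------------
	///                          |V(q)| · |V(d)|
	/// </pre>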
	/// Where <i>V(q)</i> · <i>V(d)</i> is the
	/// <a href="http://en.wikipedia.org/wiki/Dot_product">dot product</a>
	/// of the weighted vectors,
	/// and <i>|V(q)|</i> and <i>|V(d)|</i> are their
	/// <a href="http://en.wikipedia.org/wiki/Euclidean_norm#Euclidean_norm">Euclidean norms</a>.
	///
	/// <p>Note: the above equation can be viewed as the dot product of
	/// the normalized weighted vectors, in the sense that dividing
	/// <i>V(q)</i> by its euclidean norm is normalizing it to a unit vector.
	///</p>
	/// <p>Lucene refines <i>VSM score</i> for both search quality and usability:
	/// <ul>
	///  <li>Normalizing <i>V(d)</i> to the unit vector is known to be problematic in that 
	///  it removes all document length information. 
	///  For some documents removing this info is probably ok, 
	///  e.g. a document made by duplicating a certain paragraph <i>10</i> times,
	///  especially if that paragraph is made of distinct terms. 
	///  But for a document which contains no duplicated paragraphs, 
	///  this might be wrong. 
	///  To avoid this problem, a different document length normalization 
	///  factor is used, which normalizes to a vector equal to or larger 
	///  than the unit vector: <i>doc-len-norm(d)</i>.
	///  </li>
	///
	///  <li>At indexing, users can specify that certain documents are more
	///  important than others, by assigning a document boost.
	///  For this, the score of each document is also multiplied by its boost value
	///  <i>doc-boost(d)</i>.
	///  </li>
	///
	///  <li>Lucene is field based, hence each query term applies to a single
	///  field; document length normalization is by the length of that
	///  field, and in addition to document boost there are also document field boosts.
	///  </li>
	///
	///  <li>The same field can be added to a document during indexing several times,
	///  and so the boost of that field is the multiplication of the boosts of
	///  the separate additions (or parts) of that field within the document.
	///  </li>
	///
	///  <li>At search time users can specify boosts to each query, sub-query, and
	///  each query term, hence the contribution of a query term to the score of
	///  a document is multiplied by the boost of that query term <i>query-boost(q)</i>.
	///  </li>
	///
	///  <li>A document may match a multi-term query without containing all
	///  the terms of that query (this is by design for some query types),
	///  and users can further reward documents matching more query terms
	///  through a coordination factor, which is usually larger when
	///  more terms are matched: <i>coord-factor(q,d)</i>.
	///  </li>
	/// </ul>
	///</p>
	/// <p>Under the simplifying assumption of a single field in the index,
	/// we get <i>Lucene's Conceptual scoring formula</i>:
	///  <br/><br/>
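	/// <pre>
	/// score(q,d) = coord-factor(q,d) · query-boost(q) ·
	///              ( V(q) · V(d) / |V(q)| ) · doc-len-norm(d) · doc-boost(d)
	/// </pre>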
	///</p>
	/// <p>The conceptual formula is a simplification in the sense that (1) terms and documents
	/// are fielded and (2) boosts are usually per query term rather than per query.
	///</p>
	/// <p>We now describe how Lucene implements this conceptual scoring formula, and
	/// derive from it <i>Lucene's Practical Scoring Function</i>.
	///  </p>
	/// <p>For efficient score computation some scoring components
	/// are computed and aggregated in advance:
	///</p>
	/// <ul>
	///  <li><i>Query-boost</i> for the query (actually for each query term)
	///  is known when search starts.
	///  </li>
	///
	///  <li>Query Euclidean norm <i>|V(q)|</i> can be computed when search starts,
	///  as it is independent of the document being scored.
	///  From a search optimization perspective, it is a valid question
	///  why bother to normalize the query at all, because all
	///  scored documents will be multiplied by the same <i>|V(q)|</i>,
	///  and hence document ranks (their order by score) will not
	///  be affected by this normalization.
	///  There are two good reasons to keep this normalization:
	///  <ul>
	///   <li>Recall that
	///   <a href="http://en.wikipedia.org/wiki/Cosine_similarity">
	///   Cosine Similarity</a> can be used to find how similar
	///   two documents are. One can use Lucene for, e.g.,
	///   clustering, and use a document as a query to compute
	///   its similarity to other documents.
	///   In this use case it is important that the score of document <i>d3</i>
	///   for query <i>d1</i> is comparable to the score of document <i>d3</i>
	///   for query <i>d2</i>. In other words, scores of a document for two
	///   distinct queries should be comparable.
	///   There are other applications that may require this.
	///   And this is exactly what normalizing the query vector <i>V(q)</i>
	///   provides: comparability (to a certain extent) of two or more queries.
	///   </li>
	///
	///   <li>Applying query normalization on the scores helps to keep the
	///   scores around the unit vector, hence preventing loss of score data
	///   because of floating point precision limitations.
	///   </li>
	///  </ul>
	///  </li>
	///
	///  <li>Document length norm <i>doc-len-norm(d)</i> and document
	///  boost <i>doc-boost(d)</i> are known at indexing time.
	///  They are computed in advance and their multiplication
	///  is saved as a single value in the index: <i>norm(d)</i>.
	///  (In the equations below, <i>norm(t in d)</i> means <i>norm(field(t) in doc d)</i>
	///  where <i>field(t)</i> is the field associated with term <i>t</i>.)
	///  </li>
	/// </ul>
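	/// <p>Combining the components above yields <i>Lucene's Practical Scoring
	/// Function</i>; each factor corresponds to a method of this interface
	/// (<i>Tf</i>, <i>Idf</i>, <i>Coord</i>, <i>QueryNorm</i>, and the norm methods):
	///</p>
	/// <pre>
	/// score(q,d) = coord(q,d) · queryNorm(q) ·
	///              SUM for t in q of: ( tf(t in d) · idf(t)² · t.Boost · norm(t,d) )
	/// </pre>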
	/// <see cref="Default"></see>
	/// <see cref="IIndexWriter.Similarity(ISimilarity)"></see>
	/// <see cref="ISearcher.Similarity"></see>
	///</summary>
	public interface ISimilarity
	{
		///<summary>
		///Gets or sets the default Similarity implementation used by indexing and
		///search code.
		/// <see cref="ISearcher.Similarity"></see>
		/// <see cref="IIndexWriter.Similarity"></see>
		/// </summary>
		ISimilarity Default { get; set; }

		///<summary>
		/// Decodes a normalization factor stored in an index.
		/// <see cref="EncodeNorm(float)"></see>
		/// </summary>
		float DecodeNorm(byte b);

		///<summary>
		/// Returns a table for decoding normalization bytes.
		/// <see cref="EncodeNorm(float)"></see>
		/// </summary>
		float[] GetNormDecoder();

		///<summary>
		/// Compute the normalization value for a field, given the accumulated
		/// state of term processing for this field (see <see cref="IFieldInvertState"></see>).
		/// 
		/// Implementations should calculate a float value based on the field
		/// state and then return that value.
		///
		/// For backward compatibility this method by default calls
		/// <see cref="LengthNorm(string, int)"></see> passing
		/// <see cref="IFieldInvertState.Length"></see> as the second argument, and
		/// then multiplies this value by <see cref="IFieldInvertState.Boost"></see>.
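		/// 
		/// A sketch of that default behavior:
		/// <pre>
		/// return state.Boost * LengthNorm(field, state.Length);
		/// </pre>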
		/// 
		/// <p><b>WARNING</b>: This API is new and experimental and may
		/// suddenly change.</p>
		/// 
		/// <param name="field">field name</param> 
		/// <param name="state"> current processing state for this field</param>
		/// <returns>the calculated float norm</returns>
		/// </summary>
		float ComputeNorm(string field, IFieldInvertState state);

		///<summary>
		/// Computes the normalization value for a field given the total number of
		/// terms contained in a field.  These values, together with field boosts, are
		/// stored in an index and multiplied into scores for hits on each field by the
		/// search code.
		///
		/// <p>Matches in longer fields are less precise, so implementations of this
		/// method usually return smaller values when <code>numTokens</code> is large,
		/// and larger values when <code>numTokens</code> is small.
		/// </p>
		/// <p>Note that the return values are computed under 
		/// <see cref="IIndexWriter.AddDocument(IDocument)"></see>
		/// and then stored using
		/// <see cref="EncodeNorm(float)"></see>.  
		/// Thus they have limited precision, and documents
		/// must be re-indexed if this method is altered.
		///</p>
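		/// <p>For example, a typical implementation (following Lucene's
		/// DefaultSimilarity) returns the inverse square root of the token count:
		///</p>
		/// <pre>
		/// return (float)(1.0 / Math.Sqrt(numTokens));
		/// </pre>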
		/// <param name="fieldName">the name of the field</param>
		/// <param name="numTokens">the total number of tokens contained in fields
		/// named <i>fieldName</i> of <i>doc</i></param>
		/// <returns>a normalization factor for hits on this field of this document
		///</returns>
		/// <see cref="IField.Boost"></see>
		/// </summary>
		float LengthNorm(string fieldName, int numTokens);

		///<summary>
		/// Computes the normalization value for a query given the sum of the squared
		/// weights of each of the query terms.  This value is multiplied into the
		/// weight of each query term. While the classic query normalization factor is
		/// computed as 1/sqrt(sumOfSquaredWeights), other implementations might
		/// completely ignore sumOfSquaredWeights (i.e., return 1).
		///
		/// <p>This does not affect ranking, but the default implementation does make scores
		/// from different queries more comparable than they would otherwise be, by
		/// eliminating the magnitude of the query vector as a factor in the score.
		/// </p>
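		/// <p>A sketch of the classic implementation described above:
		///</p>
		/// <pre>
		/// return (float)(1.0 / Math.Sqrt(sumOfSquaredWeights));
		/// </pre>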
		/// <param name="sumOfSquaredWeights">the sum of the squares of query term weights</param>
		/// <returns>a normalization factor for query weights</returns>
		/// </summary>
		float QueryNorm(float sumOfSquaredWeights);

		///<summary>
		/// Encodes a normalization factor for storage in an index.
		/// The encoding uses a three-bit mantissa, a five-bit exponent, and
		/// the zero-exponent point at 15, thus
		/// representing values from around 7x10^9 to 2x10^-9 with about one
		/// significant decimal digit of accuracy.  Zero is also represented.
		/// Negative numbers are rounded up to zero.  Values too large to represent
		/// are rounded down to the largest representable value.  Positive values too
		/// small to represent are rounded up to the smallest positive representable
		/// value.
		///
		/// <see cref="IField.Boost"></see>
		/// </summary>
		byte EncodeNorm(float f);

		///<summary>
		/// Computes a score factor based on a term or phrase's frequency in a
		/// document.  This value is multiplied by the <see cref="Idf(int, int)"></see>
		/// factor for each term in the query and these products are then summed to
		/// form the initial score for a document.
		///
		/// <p>Terms and phrases repeated in a document indicate the topic of the
		/// document, so implementations of this method usually return larger values
		/// when <code>freq</code> is large, and smaller values when <code>freq</code>
		/// is small.
		///</p>
		/// <p>The default implementation calls <see cref="Tf(float)"></see>.
		///</p>
		/// <param name="freq">the frequency of a term within a document</param>
		/// <returns>a score factor based on a term's within-document frequency</returns>
		/// </summary>
		float Tf(int freq);

		///<summary>
		/// Computes the amount of a sloppy phrase match, based on an edit distance.
		/// This value is summed for each sloppy phrase match in a document to form
		/// the frequency that is passed to <see cref="Tf(float)"></see>.
		///
		/// <p>A phrase match with a small edit distance to a document passage more
		/// closely matches the document, so implementations of this method usually
		/// return larger values when the edit distance is small and smaller values
		/// when it is large.
		/// </p>
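		/// <p>For example, Lucene's DefaultSimilarity uses the inverse of the
		/// distance:
		///</p>
		/// <pre>
		/// return 1.0f / (distance + 1);
		/// </pre>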
		/// <see cref="IPhraseQuery.Slop(int)"></see>
		/// <param name="distance">the edit distance of this sloppy phrase match</param>
		/// <returns>the frequency increment for this match</returns>
		/// </summary>
		float SloppyFreq(int distance);

		///<summary>
		/// Computes a score factor based on a term or phrase's frequency in a
		/// document.  This value is multiplied by the <see cref="Idf(int, int)"></see>
		/// factor for each term in the query and these products are then summed to
		/// form the initial score for a document.
		///
		/// <p>Terms and phrases repeated in a document indicate the topic of the
		/// document, so implementations of this method usually return larger values
		/// when <code>freq</code> is large, and smaller values when <code>freq</code>
		/// is small.
		///</p>
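		/// <p>For example, Lucene's DefaultSimilarity dampens the frequency
		/// with a square root:
		///</p>
		/// <pre>
		/// return (float)Math.Sqrt(freq);
		/// </pre>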
		/// <param name="freq">the frequency of a term within a document</param>
		/// <returns>a score factor based on a term's within-document frequency</returns>
		/// </summary>
		float Tf(float freq);

		///<summary>
		/// Computes a score factor for a simple term and returns an explanation
		/// for that score factor.
		/// 
		/// <p>
		/// The default implementation uses:
		/// </p>
		/// <pre>
		/// Idf(searcher.docFreq(term), searcher.MaxDoc());
		/// </pre>
		/// 
		/// Note that <see cref="ISearcher.MaxDoc()"></see> is used instead of
		/// <see cref="IndexReader.NumDocs()"></see> because it is consistent with
		/// <see cref="ISearcher.DocFreq(ITerm)"></see>: when the latter
		/// is inaccurate, so is <see cref="ISearcher.MaxDoc()"></see>, and in the same direction.
		/// In addition, <see cref="ISearcher.MaxDoc()"></see> is more efficient to compute.
		///   
		/// <param name="term">the term in question</param>
		/// <param name="searcher">the document collection being searched</param>
		/// <returns>an IDFExplanation object that includes both an idf score factor
		/// and an explanation for the term.</returns>
		/// </summary>
		IDFExplanation IdfExplain(ITerm term, ISearcher searcher);

		///<summary>
		/// Computes a score factor for a phrase.
		/// 
		/// <p>
		/// The default implementation sums the idf factor for
		/// each term in the phrase.
		/// </p>
		/// <param name="terms">the terms in the phrase</param>
		/// <param name="searcher">the document collection being searched</param>
		/// <returns>an IDFExplanation object that includes both an idf
		/// score factor for the phrase and an explanation
		/// for each term.</returns>
		/// </summary>
		IDFExplanation IdfExplain(IList<ITerm> terms, ISearcher searcher);

		///<summary>
		/// Computes a score factor based on a term's document frequency (the number
		/// of documents which contain the term).  This value is multiplied by the
		/// <see cref="Tf(int)"></see> factor for each term in the query and these products are
		/// then summed to form the initial score for a document.
		///
		/// <p>Terms that occur in fewer documents are better indicators of topic, so
		/// implementations of this method usually return larger values for rare terms,
		/// and smaller values for common terms.
		///</p>
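		/// <p>For example, Lucene's DefaultSimilarity uses the classic smoothed
		/// formulation:
		///</p>
		/// <pre>
		/// return (float)(Math.Log(numDocs / (double)(docFreq + 1)) + 1.0);
		/// </pre>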
		/// <param name="docFreq">the number of documents which contain the term</param>
		/// <param name="numDocs">the total number of documents in the collection</param>
		/// <returns>a score factor based on the term's document frequency</returns>
		/// </summary>
		float Idf(int docFreq, int numDocs);

		///<summary>
		/// Computes a score factor based on the fraction of all query terms that a
		/// document contains.  This value is multiplied into scores.
		///
		/// <p>The presence of a large portion of the query terms indicates a better
		/// match with the query, so implementations of this method usually return
		/// larger values when the ratio between these parameters is large and smaller
		/// values when the ratio between them is small.
		///</p>
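		/// <p>For example, Lucene's DefaultSimilarity returns the simple ratio:
		///</p>
		/// <pre>
		/// return overlap / (float)maxOverlap;
		/// </pre>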
		/// <param name="overlap">the number of query terms matched in the document</param>
		/// <param name="maxOverlap">the total number of terms in the query</param>
		/// <returns>a score factor based on term overlap with the query</returns>
		/// </summary>
		float Coord(int overlap, int maxOverlap);

		///<summary>
		/// Calculate a scoring factor based on the data in the payload.  Overriding implementations
		/// are responsible for interpreting what is in the payload.  Lucene makes no assumptions about
		/// what is in the byte array.
		/// <p>
		/// The default implementation returns 1.
		///</p>
		/// </summary>
		/// <param name="docId">The docId currently being scored.  If this value is <see cref="NO_DOC_ID_PROVIDED"></see>, then it should be assumed that the PayloadQuery implementation does not provide document information</param>
		/// <param name="fieldName">The fieldName of the term this payload belongs to</param>
		/// <param name="start">The start position of the payload</param>
		/// <param name="end">The end position of the payload</param>
		/// <param name="payload">The payload byte array to be scored</param>
		/// <param name="offset">The offset into the payload array</param>
		/// <param name="length"> The length in the array</param>
		/// <returns>An implementation dependent float to be used as a scoring factor</returns>
		float ScorePayload(int docId, string fieldName, int start, int end, byte[] payload, int offset, int length);

	}
}