Foundations of Vector Retrieval
Sebastian Bruch
arXiv:2401.09350 [cs.DS, cs.IR], http://arxiv.org/pdf/2401.09350

Vectors are universal mathematical objects that can represent text, images, speech, or a mix of these data modalities. That happens regardless of whether data is represented by hand-crafted features or learnt embeddings. Collect a large enough quantity of such vectors and the question of retrieval becomes urgently relevant: finding vectors that are most similar to a query vector. This monograph is concerned with that question and covers fundamental concepts along with advanced data structures and algorithms for vector retrieval. In doing so, it recaps this fascinating topic and lowers barriers of entry into this rich area of research.

# 6 Graph Algorithms
In the second case, suppose δ(µ, u) < δ(µ, v), so that v is on the surface of B and u is in its interior. Consider the function f(ω) = δ(v, ω) − δ(u, ω). Clearly, f(v) < 0 and f(µ) > 0. Therefore, there must be a point w ∈ B on the line segment µ + λ(v − µ), λ ∈ [0, 1], for which f(w) = 0. That implies that δ(w, u) = δ(w, v). Furthermore, v is the closest point on the surface of B to w, so that the ball centered at w with radius δ(w, v) is entirely contained in B. This is illustrated in Figure 6.3. Importantly, no other point in X is closer to w than u and v. So w rests at the center of an empty ball whose surface passes through both u and v, which gives the claim. ⊓⊔

Proof of Theorem 6.1. We prove the result for the case where δ is the Euclidean distance and leave the proof of the more general case as an exercise. (Hint: To prove the general case, you should make the line segment argument as in the proof of Lemma 6.1.)
Suppose the greedy search for q stops at some local optimum u that is different from the global optimum u∗, and that (u, u∗) ∉ E (otherwise, the algorithm must terminate at u∗ instead). Let r = δ(q, u). By assumption, the ball B(q, r) centered at q with radius r is non-empty: it must contain u∗, whose distance to q is less than r. Let v be the point in this ball that is closest to u. Consider now the ball B((u + v)/2, δ(u, v)/2). This ball is empty: otherwise, v would not be the closest point to u. By Lemma 6.1, we must have that (u, v) ∈ E. This is a contradiction because the greedy search cannot stop at u. ⊓⊔

Notice that Theorem 6.1 holds for any graph that contains the Delaunay graph. The next theorem strengthens this result to show that the Delaunay graph represents the minimal edge set that guarantees an optimal solution through greedy traversal.
Theorem 6.2 The Delaunay graph is the minimal graph over which the best-first-search algorithm gives the optimal solution to the top-1 retrieval problem. In other words, if a graph does not contain the Delaunay graph, then we can find queries for which the greedy traversal from an entry point does not produce the optimal top-1 solution.

Proof of Theorem 6.2. Suppose that the data points X are in general position. Suppose further that G = (V, E) is a graph built from X, and that u and v are two nodes in the graph such that (v, u) ∉ E but that edge exists in the Delaunay graph of X. If we could sample a query point q such that δ(q, u) < δ(q, v) but δ(q, w) > max(δ(q, u), δ(q, v)) for all w ≠ u, v, then we are done. That is because, if we entered the graph through v, then v is a local optimum in its neighborhood: all other points that are connected to v have a distance larger than δ(q, v). But v is not the globally optimal solution, so the greedy traversal does not converge to the optimal solution.
It remains to show that such a point q always exists. Suppose it did not. That is, for any point that is in the Voronoi region of u, there is a data point w ≠ v that is closer to it than v. If that were the case, then no ball whose boundary passes through u and v can be empty, which contradicts Lemma 6.1 (the "empty-circle" property of the Delaunay graph). ⊓⊔

As a final remark on the Delaunay graph and its use in top-1 retrieval, we note that the Delaunay graph only makes sense if we have precise knowledge of the structure of the space (i.e., the metric). It is not enough to have just pairwise distances between points in a collection X. In fact, Navarro [2002] showed that if pairwise distances are all we know about a collection of points, then the only sensible graph that contains the Delaunay graph and is amenable to greedy search is the complete graph. This is stated as the following theorem.
Theorem 6.3 Suppose the structure of the metric space is unknown, but we have pairwise distances between the points in a collection X due to an arbitrary, but proper, distance function δ. For every choice of u, v ∈ X, there is a choice of the metric space such that (u, v) ∈ E, where G = (V, E) is a Delaunay graph for X.

Proof. The idea behind the proof is to assume (u, v) ∉ E, then construct a query point that necessitates the existence of an edge between u and v. To that end, consider a query point q such that its distance to u is C + ϵ for some constant C and ϵ > 0, its distance to v is C, and its distance to every other point in X is C + 2ϵ. This is a valid arrangement if we choose ϵ such that ϵ ≤ (1/2) min_{x,y∈X} δ(x, y) and C such that C ≥ (1/2) max_{x,y∈X} δ(x, y). It is easy to verify that, if those conditions hold, a point q with the prescribed distances can exist, as the distances do not violate any of the triangle inequalities.
Consider then a search starting from node u. If (u, v) ∉ E, then for the search algorithm to walk from u to the optimal solution, v, it must first get farther from q. But we know by the properties of the Delaunay graph that such an event implies that u (which would be the local optimum) must be the global optimum. That is clearly not true. So we must have that (u, v) ∈ E, giving the claim. ⊓⊔

# 6.2.4 Top-k Retrieval

Let us now consider the general case of top-k retrieval over the Delaunay graph. The following result states that Algorithm 3 is correct if executed on any graph that contains the Delaunay graph, in the sense that it returns the optimal solution to top-k retrieval.

Theorem 6.4 Let G = (V, E) be a graph that contains the Delaunay graph of m vectors X ⊂ R^d. Algorithm 3 over G gives the optimal solution to the top-k retrieval problem for any arbitrary query q if δ(·, ·) is proper.

Proof. As with the proof of Theorem 6.1, we show the result for the case where δ is the Euclidean distance and leave the proof of the more general case as an exercise.
The proof is similar to the proof of Theorem 6.1, but the argument needs a little more care when k > 1. Suppose Algorithm 3 for q stops at some local optimum set Q that is different from the global optimum, Q∗. In other words, Q △ Q∗ ≠ ∅, where △ denotes the symmetric difference between sets. Let r = max_{u∈Q} δ(q, u) and consider the ball B(q, r). Because Q △ Q∗ ≠ ∅, there must be at least k points in the interior of this ball. Let v ∉ Q be a point in the interior and suppose u ∈ Q is its closest point in the ball. Clearly, the ball B((u + v)/2, δ(u, v)/2) is empty: otherwise, v would not be the closest point to u. By Lemma 6.1, we must have that (u, v) ∈ E. This is a contradiction because Algorithm 3 would, before termination, place v in Q to replace the node that is on the surface of the ball. ⊓⊔
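Algorithm 3 itself is not reproduced in this excerpt. The sketch below shows a common formulation of greedy best-first search over a proximity graph for top-k retrieval, in the spirit of the procedure discussed here; the function names, data structures, and termination rule are illustrative, not the monograph's exact pseudocode.

```python
import heapq
import numpy as np

def best_first_search(graph, vectors, query, entry, k, dist=None):
    """Greedy best-first search over a proximity graph.

    graph:   dict mapping node id -> list of neighbor ids (adjacency list)
    vectors: array of shape (m, d) holding the data points
    query:   array of shape (d,)
    entry:   id of the entry node
    k:       number of results to return
    dist:    distance function; Euclidean by default
    """
    if dist is None:
        dist = lambda u, v: float(np.linalg.norm(u - v))

    visited = {entry}
    d_entry = dist(vectors[entry], query)
    # Candidate min-heap ordered by distance to the query.
    candidates = [(d_entry, entry)]
    # Result set kept as a max-heap (negated distances) of size at most k.
    results = [(-d_entry, entry)]

    while candidates:
        d_u, u = heapq.heappop(candidates)
        # Stop when the closest remaining candidate cannot improve the result set.
        if len(results) == k and d_u > -results[0][0]:
            break
        for v in graph[u]:
            if v in visited:
                continue
            visited.add(v)
            d_v = dist(vectors[v], query)
            if len(results) < k or d_v < -results[0][0]:
                heapq.heappush(candidates, (d_v, v))
                heapq.heappush(results, (-d_v, v))
                if len(results) > k:
                    heapq.heappop(results)
    return sorted((-d, v) for d, v in results)
```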
Fig. 6.4: Comparison of the Delaunay graph (a) with the k-NN graph for k = 2 (b) for an example collection in R^2. In the illustration of the directed k-NN graph, edges that go in both directions are rendered as lines without arrowheads. Notice that the top-left node cannot be reached from the rest of the graph.

# 6.2.5 The k-NN Graph

From our discussion of Voronoi diagrams and Delaunay graphs, it appears as though we have found the graph we have been looking for. Indeed, the Delaunay graph of a collection of vectors gives us the exact solution to top-k queries, using a strikingly simple search algorithm. Sadly, the story does not end there and, as usual, the relentless curse of dimensionality poses a serious challenge.

The first major obstacle in high dimensions concerns the construction of the Delaunay graph itself. While there are many algorithms [Edelsbrunner and Shah, 1992, Guibas et al., 1992, Guibas and Stolfi, 1985] that can be used to construct the Delaunay graph (or, to be more precise, to perform Delaunay triangulation), all suffer from an exponential dependence on the number of dimensions d. So building the graph itself seems infeasible when d is too large.
Even if we were able to quickly construct the Delaunay graph for a large collection of points, we would face a second debilitating issue: the graph is close to complete! While exact bounds on the expected number of edges in the graph surely depend on the data distribution, in high dimensions the graph necessarily becomes denser. Consider, for example, vectors that are independent and identically distributed in each dimension. Recall from our discussion in Chapter 2 that, in such an arrangement of points, the distance between any pair of points tends to concentrate sharply. As a result, the Delaunay graph has an edge between almost every pair of nodes.

These two problems are rather serious, rendering the guarantees of the Delaunay graph for top-k retrieval mainly of theoretical interest. These same difficulties motivated research to approximate the Delaunay graph. One prominent method is known as the k-NN graph [Chávez and Tellez, 2010, Hajebi et al., 2011, Fu and Cai, 2016].
The k-NN graph is simply a k-regular graph where every node (i.e., vector) is connected to its k closest nodes. So (u, v) ∈ E if v ∈ arg min^{(k)}_{w∈X} δ(u, w). Note that the resulting graph may be directed, depending on the choice of δ. We should mention, however, that researchers have explored ways of turning the k-NN graph into an undirected graph [Chávez and Tellez, 2010]. An example is depicted in Figure 6.4.

We must remark on two important properties of the k-NN graph. First, the graph itself is far more efficient to construct than the Delaunay graph [Chen et al., 2009, Vaidya, 1989, Connor and Kumar, 2010, Dong et al., 2011]. The second point concerns the connectivity of the graph. As Brito et al. [1997] show, under mild conditions governing the distribution of the vectors and with k large enough, the resulting graph has a high probability of being connected. When k is too small, on the other hand, the resulting graph may become too sparse, leading the greedy search algorithm to get stuck in local minima.
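For illustration only, here is a naive O(m²d) construction of the directed k-NN graph under Euclidean distance; the works cited above use far more efficient methods, and the helper name is mine.

```python
import numpy as np

def knn_graph(vectors: np.ndarray, k: int) -> dict[int, list[int]]:
    """Brute-force construction of the directed k-NN graph.

    Each node is connected to its k closest nodes (excluding itself).
    Returns an adjacency list: node id -> list of k neighbor ids.
    """
    m = len(vectors)
    # Pairwise squared Euclidean distances, shape (m, m).
    sq = np.sum(vectors ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * vectors @ vectors.T
    np.fill_diagonal(d2, np.inf)          # never link a node to itself
    neighbors = np.argsort(d2, axis=1)[:, :k]
    return {u: neighbors[u].tolist() for u in range(m)}
```

Combined with the search sketch given earlier, this yields a complete, if rudimentary, approximate top-k retriever.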
Finally, at the risk of stating the obvious, the k-NN graph does not enjoy any of the guarantees of the Delaunay graph in the context of top-k retrieval. That is simply because the k-NN graph is likely only a subgraph of the Delaunay graph, while Theorems 6.1 and 6.4 are provable only for supergraphs of the Delaunay graph. Despite these deficiencies, the k-NN graph remains an important component of advanced graph-based, approximate top-k retrieval algorithms.

# 6.2.6 The Case of Inner Product

Everything we have stated so far about Voronoi diagrams and their duality with the Delaunay graph was contingent on δ(·, ·) being proper. In particular, the proofs of the optimality guarantees implicitly require non-negativity and the triangle inequality. As a result, none of the results apply to MIPS prima facie. As it turns out, however, we can extend the definition of Voronoi regions and the Delaunay graph to inner product, and present guarantees for MIPS (with k = 1, but not with k > 1). That is the proposal by Morozov and Babenko [2018].
Fig. 6.5: Comparison of the Voronoi diagrams and Delaunay graphs for the same set of points according to Euclidean distance versus inner product: (a) Euclidean Delaunay; (b) inner product Voronoi; (c) IP-Delaunay. Note that, for the non-metric distance function based on inner product, the Voronoi regions are convex cones determined by the intersection of half-spaces passing through the origin. Observe additionally that the inner product-induced Voronoi region of a point (those in white) may be an empty set. Such points can never be the solution to the 1-MIPS problem.

# 6.2.6.1 The IP-Delaunay Graph

Let us begin by characterizing the Voronoi regions for inner product. The Voronoi region R_u of a vector u ∈ X comprises the set of points for which u is the maximizer of inner product:

R_u = {x ∈ R^d | u = arg max_{v∈X} ⟨x, v⟩}.

This definition is essentially the same as how we defined the Voronoi region for a proper δ, and, indeed, the resulting Voronoi diagram is a partitioning of the whole space. The properties of the resulting Voronoi regions, however, could not be more different.
First, recall from Section 1.3.3 that inner product does not even enjoy what we called coincidence. That is, in general, u = arg max_{v∈X} ⟨u, v⟩ is not guaranteed. So it is very much possible that R_u is empty for some u ∈ X. Second, when R_u ≠ ∅, it is a convex cone that is the intersection of half-spaces that pass through the origin. So the Voronoi regions have a substantially different geometry. Figure 6.5(b) visualizes this phenomenon.

Moving on to the Delaunay graph, Morozov and Babenko [2018] construct the graph in much the same way as before and call the resulting graph the IP-Delaunay graph. Two nodes u, v ∈ V in the IP-Delaunay graph are connected if their Voronoi regions intersect: R_u ∩ R_v ≠ ∅. Note that, by the reasoning above, the nodes whose Voronoi regions are empty will be isolated in the graph. These nodes represent vectors that can never be the solution to MIPS for any query (remember that we are only considering k = 1). So it would be inconsequential if we removed these nodes from the graph. This is also illustrated in Figure 6.5(c).
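The exact characterization of empty regions requires reasoning about the geometry of X. As a rough illustration, the Monte Carlo sketch below (my own construction, not from the monograph) probes random query directions: since each R_u is a cone, scaling a query by a positive constant does not change the arg max, so sampling unit directions covers the interesting cases. Points that never win any sampled query are likely isolated in the IP-Delaunay graph.

```python
import numpy as np

def probably_isolated(vectors: np.ndarray, n_queries: int = 100_000,
                      seed: int = 0) -> np.ndarray:
    """Monte Carlo probe of empty inner-product Voronoi regions.

    Samples random query directions and records which data point attains
    the maximum inner product. Points that never win are likely to have an
    empty Voronoi region and thus can never answer a 1-MIPS query.
    Returns a boolean mask over the data points.
    """
    rng = np.random.default_rng(seed)
    d = vectors.shape[1]
    queries = rng.standard_normal((n_queries, d))
    queries /= np.linalg.norm(queries, axis=1, keepdims=True)
    winners = np.argmax(queries @ vectors.T, axis=1)
    wins = np.bincount(winners, minlength=len(vectors))
    return wins == 0
```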
Considering the data structure above for inner product, Morozov and Babenko [2018] prove the following result to give an optimality guarantee for the greedy search algorithm for 1-MIPS (provided we enter the graph from a non-isolated node). Nothing, however, may be said about k-MIPS.

Theorem 6.5 Suppose G = (V, E) is a graph that contains the IP-Delaunay graph for a collection X minus the isolated nodes. Invoking Algorithm 3 with k = 1 and δ(·, ·) = −⟨·, ·⟩ gives the optimal solution to the top-1 MIPS problem.
Proof. If we show that a local optimum is necessarily the global optimum, then we are done. To that end, consider a query q for which Algorithm 3 terminates when it reaches a node u ∈ X that is distinct from the globally optimal solution u∗ ∉ N(u). In other words, we have that ⟨q, u⟩ > ⟨q, v⟩ for all v ∈ N(u), but ⟨q, u∗⟩ > ⟨q, u⟩ and (u, u∗) ∉ E. If that is true, then q ∉ R_u, the Voronoi region of u; instead we must have that q ∈ R_{u∗}.
Now define the collection X̄ ≜ N(u) ∪ {u}, and consider the Voronoi diagram of the resulting collection. It is easy to show that the Voronoi region of u in the presence of points in X̄ is the same as its region given the full collection X. From before, we also know that q ∉ R_u. Considering the fact that R^d = ∪_{v∈X̄} R_v, q must belong to R_v for some v ∈ X̄ with v ≠ u. That implies that ⟨q, v⟩ > ⟨q, u⟩ for some v ∈ X̄ \ {u}. But because v ∈ N(u) (by construction), the last inequality poses a contradiction to our premise that u was locally optimal. ⊓⊔

In addition to the fact that the IP-Delaunay graph does not answer top-k queries, it also suffers from the same deficiencies we noted for the Euclidean Delaunay graph earlier in this section. Naturally then, to make the data structure more practical in high-dimensional regimes, we must resort to heuristics and approximations, which in their simplest form may be the k-MIPS graph (i.e., a k-NN graph where the distance function for finding the top-k nodes is inner product). This is the general direction Morozov and Babenko [2018] and a few other works have explored [Liu et al., 2019, Zhou et al., 2019].
As in the case of metric distance functions, none of the guarantees stated above port over to these approximate graphs. But, once again, empirical evidence gathered from a variety of datasets shows that these graphs perform reasonably well in practice, even for top-k with k > 1.

# 6.2.6.2 Is the IP-Delaunay Graph Necessary?

Morozov and Babenko [2018] justify the need for developing the IP-Delaunay graph by comparing its structure with the following alternative: First, apply a MIPS-to-NN asymmetric transformation [Bachrach et al., 2014] from R^d to R^{d+1}. This involves transforming a data point u with ϕ_d(u) = [u; √(1 − ∥u∥₂²)] and a query point q with ϕ_q(q) = [q; 0]. Next, construct the standard (Euclidean) Delaunay graph over the transformed vectors. What happens if we form the Delaunay graph on the transformed collection ϕ_d(X)? Observe the Euclidean distance between ϕ_d(u) and ϕ_d(v) for two data points u, v ∈ X:
∥ϕ_d(u) − ϕ_d(v)∥₂² = ∥ϕ_d(u)∥₂² + ∥ϕ_d(v)∥₂² − 2⟨ϕ_d(u), ϕ_d(v)⟩
= (∥u∥₂² + 1 − ∥u∥₂²) + (∥v∥₂² + 1 − ∥v∥₂²) − 2⟨u, v⟩ − 2√((1 − ∥u∥₂²)(1 − ∥v∥₂²))
= 2 − 2⟨u, v⟩ − 2√((1 − ∥u∥₂²)(1 − ∥v∥₂²)).

Should we use these distances to construct the Delaunay graph, the resulting structure will have nothing to do with the original MIPS problem. That is because the L2 distance between a pair of transformed data points is not rank-equivalent to the inner product between the original data points. For this reason, Morozov and Babenko [2018] argue that the IP-Delaunay graph is a more sensible choice.

However, we note that their argument rests heavily on their particular choice of MIPS-to-NN transformation. The transformation they chose makes sense in contexts where we only care about preserving the inner product between query-data point pairs. But when forming the Delaunay graph, preserving inner product between pairs of data points, too, is imperative. That is the reason why we lose rank-equivalence between L2 in R^{d+1} and inner product in R^d.
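To see the loss of rank-equivalence concretely, the following sketch applies the transformation described above to synthetic data and compares orderings; the function names and the data are mine, and the point it makes is only empirical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
X /= np.linalg.norm(X, axis=1).max() + 1e-9      # shrink data into the unit ball

def phi_data(u):
    # Asymmetric MIPS-to-NN map: append the "norm residual" coordinate.
    return np.concatenate([u, [np.sqrt(max(0.0, 1.0 - u @ u))]])

def phi_query(q):
    return np.concatenate([q, [0.0]])

Xt = np.stack([phi_data(u) for u in X])

# Query-data pairs: L2 order after the map matches inner-product order.
q = rng.standard_normal(8)
by_ip = np.argsort(-(X @ q))
by_l2 = np.argsort(np.linalg.norm(Xt - phi_query(q), axis=1))
print("query-data ranks agree:", np.array_equal(by_ip, by_l2))   # True

# Data-data pairs: the orders generally disagree, so a Delaunay graph built
# on the transformed points says little about MIPS between data points.
u = 0
by_ip = np.argsort(-(X @ X[u]))
by_l2 = np.argsort(np.linalg.norm(Xt - Xt[u], axis=1))
print("data-data ranks agree:", np.array_equal(by_ip, by_l2))    # typically False
```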
There are, in fact, MIPS-to-NN transformations that are more appropriate for this problem and would invalidate the argument for the need for the IP-Delaunay graph. Consider for example ϕ_d : R^d → R^{d+m} for a collection X of m vectors, defined as follows: ϕ_d(u^(i)) = u^(i) ⊕ √(1 − ∥u^(i)∥₂²) e_i, where u^(i) is the i-th data point in the collection and e_j is the j-th standard basis vector of R^m. In other words, the i-th d-dimensional data point is augmented with an m-dimensional sparse vector whose i-th coordinate is non-zero. The query transformation is simply ϕ_q(q) = q ⊕ 0, where 0 ∈ R^m is a vector of m zeros.

Despite the dependence on m, the transformation is remarkably easy to manage: the sparse subspace of every vector has at most one non-zero coordinate, making the doubling dimension of the sparse subspace O(log m) by Lemma 3.5. Distance computation between the transformed vectors, too, has negligible overhead. Crucially, we regain rank-equivalence between L2 distance in R^{d+m} and inner product in R^d not only for query-data point pairs, but also for pairs of data points:
∥ϕ_d(u) − ϕ_d(v)∥₂² = ∥ϕ_d(u)∥₂² + ∥ϕ_d(v)∥₂² − 2⟨ϕ_d(u), ϕ_d(v)⟩ = 2 − 2⟨u, v⟩.

Finally, unlike the IP-Delaunay graph, the standard Delaunay graph in R^{d+m} over the transformed vector collection has an optimality guarantee for the top-k retrieval problem per Theorem 6.4. It is, as such, unclear if the IP-Delaunay graph is even necessary as a theoretical tool. In other words, suppose we are given a collection of points X and inner product as the similarity function. Consider a graph index where the presence of an edge is decided based on the inner product between data points. Take another graph index built for the transformed X using the transformation described above from R^d to R^{d+m}, where the edge set is formed on the basis of the Euclidean distance between two (transformed) data points. The two graphs are equivalent.
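A quick numerical check of this identity, as an illustrative sketch with synthetic data (the construction of the augmented matrix is my own phrasing of the transformation above):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 50, 8
X = rng.standard_normal((m, d))
X /= np.linalg.norm(X, axis=1).max() + 1e-9      # norms strictly below 1

# Augmented map R^d -> R^{d+m}: append one "residual" coordinate per data point.
residual = np.sqrt(1.0 - np.sum(X ** 2, axis=1))
Xt = np.hstack([X, np.diag(residual)])           # row i is u^(i) concatenated with residual_i * e_i

u, v = 3, 7
lhs = np.sum((Xt[u] - Xt[v]) ** 2)
rhs = 2.0 - 2.0 * (X[u] @ X[v])
print(np.isclose(lhs, rhs))                      # True: distance depends on the pair only through <u, v>
```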
The larger point is that MIPS over m points in R^d is equivalent to NN over a transformation of the points in R^{d+m}. While the transformation increases the apparent dimensionality, the intrinsic dimensionality of the data only increases by O(log m).

# 6.3 The Small World Phenomenon

Consider, once again, the Delaunay graph but, for the moment, set aside the fact that it is a prohibitively expensive data structure to maintain for high-dimensional vectors. By construction, every node in the graph is only connected to its Voronoi neighbors (i.e., nodes whose Voronoi region intersects with the current node's). We showed that such a topology affords navigability, in the sense that the greedy procedure in Algorithm 3 can traverse the graph only based on information about immediate neighbors of a node and yet arrive at the globally optimal solution to the top-k retrieval problem.

Let us take a closer look at the traversal algorithm for the case of k = 1. It is clear that navigating from the entry node to the solution takes us through every Voronoi region along the path. That is, we cannot "skip" a Voronoi region that lies between the entry node and the answer. This implies that the running time of Algorithm 3 is directly affected by the diameter of the graph (in addition to the average degree of nodes).
Fig. 6.6: Example graphs generated by the probabilistic model introduced by Kleinberg [2000]. (a) illustrates the directed edges from u for the following configuration: r = 2, l = 0. (b) renders the regular structure for r = 1, where edges without arrows are bi-directional, and the long-range edges for node u with configuration l = 2.

Can we enhance this topology by adding long-range edges between non-Voronoi neighbors, so that we may skip over a fraction of Voronoi regions? After all, Theorem 6.4 guarantees navigability so long as the graph contains the Delaunay graph. Starting with the Delaunay graph and inserting long-range edges, then, will not take away any of the guarantees. But, what is the
right number of long-range edges and how do we determine which remote nodes should be connected? This section reviews the theoretical results that help answer these questions.

# 6.3.1 Lattice Networks

Let us begin with a simple topology that is relatively easy to reason about; we will see later how the results from this section can be generalized to the Delaunay graph. The graph we have in mind is a lattice network where m × m nodes are laid on a two-dimensional grid. Define the distance between two nodes as their lattice (Manhattan) distance (i.e., the minimal number of horizontal and vertical hops that connect two nodes). That is the network examined by Kleinberg [2000] in a seminal paper that studied the effect of long-range edges on the time complexity of Algorithm 3.

We should take a brief detour and note that Kleinberg [2000], in fact, studied the problem of transmitting a message from a source to a known target using the best-first-search algorithm, and quantified the average number of hops required to do that in the presence of a variety of classes of long-range edges. That, in turn, was inspired by a social phenomenon colloquially known as the "small-world phenomenon": the empirical observation that two strangers are linked by a short chain of acquaintances [Milgram, 1967, Jeffrey Travers, 1969].
In particular, Kleinberg [2000] was interested in explaining why and under what types of long-range edges our greedy algorithm should be able to navigate to the optimal solution, by only utilizing information about immediate neighbors. To investigate this question, Kleinberg [2000] introduced the following probabilistic model of the lattice topology as an abstraction of individuals and their social connections.

# 6.3.1.1 The Probabilistic Model

Every node in the graph has a (directed) edge to every other node within lattice distance r, for some fixed hyperparameter r ≥ 1. These connections make up the regular structure of the graph. Overlaid with this structure is a set of random, long-range edges that are generated according to the following probabilistic model. For fixed constants l ≥ 0 and α ≥ 0, we insert a directed edge between every node u and l other nodes, where a node v ≠ u is selected with probability proportional to δ(u, v)^{−α}, where δ(u, v) = ∥u − v∥₁ is the lattice distance. Example graphs generated by this process are depicted in Figure 6.6.
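The following sketch generates such a graph for an m × m grid. The node indexing and data structures are my own; the parameters follow the description above, and the all-pairs distance matrix keeps it practical only for small grids.

```python
import numpy as np

def kleinberg_lattice(m: int, r: int = 1, l: int = 1,
                      alpha: float = 2.0, seed: int = 0) -> dict[int, list[int]]:
    """Kleinberg's small-world model on an m x m grid.

    Each node gets edges to all nodes within lattice (Manhattan) distance r,
    plus l long-range directed edges whose target v is drawn with probability
    proportional to delta(u, v) ** -alpha.
    """
    rng = np.random.default_rng(seed)
    coords = np.array([(i, j) for i in range(m) for j in range(m)])
    n = m * m
    # Pairwise Manhattan distances between all grid nodes (small m only).
    manhattan = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=-1)
    graph = {}
    for u in range(n):
        local = np.flatnonzero((manhattan[u] > 0) & (manhattan[u] <= r))
        weights = manhattan[u].astype(float)
        weights[u] = np.inf                    # exclude u itself from the draw
        weights = weights ** -alpha
        weights[u] = 0.0
        weights /= weights.sum()
        long_range = rng.choice(n, size=l, p=weights)
        graph[u] = sorted(set(local.tolist()) | set(long_range.tolist()))
    return graph
```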
The model above is reasonably powerful as it can express a variety of topologies. For example, when l = 0, the resulting graph has no long-range edges. When l > 0 and α = 0, every node v ≠ u in the graph has an equal chance of being the destination of a long-range edge from u. When α is large, the protocol becomes more biased toward forming a long-range connection from u to nodes closer to it.

# 6.3.1.2 The Claim

Kleinberg [2000] shows that, when 0 < α < 2, the best-first-search algorithm must visit at least Ω(m^{(2−α)/3}) nodes. When α > 2, the number of nodes visited is at least Ω(m^{(α−2)/(α−1)}) instead. But, rather uniquely, when α = 2 and r = l = 1, we visit a number of nodes that is at most poly-logarithmic in m.

Theorem 6.6 states this result formally. But before we present the theorem, we state a useful lemma.
Lemma 6.2 Generate a lattice G = (V, E) of m × m nodes using the probabilistic model above with α = 2 and l = 1. The probability that there exists a long-range edge between two nodes u, v ∈ V is at least δ(u, v)^{−2}/(4 ln(6m)).

Proof. u chooses v ≠ u as its long-range destination with the following probability: δ(u, v)^{−2} / Σ_{w≠u} δ(u, w)^{−2}. Let us first bound the denominator as follows:

Σ_{w≠u} δ(u, w)^{−2} ≤ Σ_{i=1}^{2m−2} (4i) i^{−2} = 4 Σ_{i=1}^{2m−2} 1/i ≤ 4(1 + ln(2m − 2)) ≤ 4 ln(6m).

In the expression above, we derived the first inequality by iterating over all possible (lattice) distances between m² nodes on a two-dimensional grid (ranging from 1 to 2m − 2 if u and w are at diagonally opposite corners), and noticing that there are at most 4i nodes at distance i from node u. From this we infer that the probability that (u, v) ∈ E is at least δ(u, v)^{−2}/(4 ln(6m)). ⊓⊔
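As a quick sanity check of the bound on the normalizing constant (an illustrative computation, not part of the proof), one can evaluate both sides numerically for a node near the center of the grid:

```python
import numpy as np

m = 50
coords = np.array([(i, j) for i in range(m) for j in range(m)])
u = coords[(m // 2) * m + m // 2]            # a node near the center of the grid
dist = np.abs(coords - u).sum(axis=1)        # lattice distances to every node
normalizer = np.sum(1.0 / dist[dist > 0] ** 2.0)
print(normalizer, "<=", 4 * np.log(6 * m))   # the normalizer indeed stays below 4 ln(6m)
```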
Theorem 6.6 Generate a lattice G = (V, E) of m × m nodes using the probabilistic model above with α = 2 and r = l = 1. The best-first-search algorithm beginning from any arbitrary node and ending in a target node visits at most O(log² m) nodes on average.

Proof. Define a sequence of sets A_i, where each A_i consists of nodes whose distance to the target u∗ is greater than 2^i and at most 2^{i+1}. Formally, A_i = {v ∈ V | 2^i < δ(u∗, v) ≤ 2^{i+1}}. Suppose that the algorithm is currently in node u and that log m ≤ δ(u, u∗) < m, so that u ∈ A_i for some log log m ≤ i < log m. What is the probability that the algorithm exits the set A_i in the next step? That happens when one of u's neighbors has a distance to u∗ that is at most 2^i. In other words, u must have a neighbor that is in the set A_{<i} = ∪_{j=0}^{i−1} A_j. The number of nodes in A_{<i} is at least:

1 + Σ_{s=1}^{2^i} s = 1 + 2^i (2^i + 1)/2 > 2^{2i−1}.
How likely is it that (u, v) ∈ E if v ∈ A_{<i}? We apply Lemma 6.2, noting that the distance of each of the nodes in A_{<i} from u is at most 2^{i+1} + 2^i < 2^{i+2}. We obtain that the probability that u is connected to a node in A_{<i} is at least 2^{2i−1} (2^{i+2})^{−2}/(4 ln(6m)) = 1/(128 ln(6m)).

Next, consider the total number of nodes in A_i that are visited by the algorithm, and denote it by X_i. In expectation, we have the following:

E[X_i] = Σ_{j≥1} P[X_i ≥ j] ≤ Σ_{j≥1} (1 − 1/(128 ln(6m)))^{j−1} = 128 ln(6m).

We obtain the same bound if we repeat the arguments for i = log m. When 0 ≤ i < log log m, the algorithm visits at most log m nodes, so that the bound above is trivially true. Denoting by X the total number of nodes visited, X = Σ_{i=0}^{log m} X_i, we conclude that:
E[X] ≤ (1 + log m)(128 ln(6m)) = O(log² m),

thereby completing the proof. ⊓⊔

The argument made by Kleinberg [2000] is that, in a lattice network where each node is connected to its (at most four) nearest neighbors within unit distance, and where every node has a long-range edge to one other node chosen with probability proportional to 1/δ(·, ·)², the greedy algorithm visits at most a poly-logarithmic number of nodes. Translating this result to the case of top-1 retrieval using Algorithm 3 over the same network, we can state that the time complexity of the algorithm is O(log² m), because the total number of neighbors per node is O(1).

While this result is significant, it only holds for the lattice network with the lattice distance. It has thus no bearing on the time complexity of top-1 retrieval over the Delaunay graph with the Euclidean distance. In the next section, we will see how Beaumont et al. [2007a] close this gap.
# 6.3.2 Extension to the Delaunay Graph

We saw in the preceding section that the secret to creating a provably navigable graph, one where the best-first-search algorithm visits a poly-logarithmic number of nodes in the lattice network, was the highly specific distribution from which long-range edges were sampled. That element turns out to be the key ingredient when extending the results to the Delaunay graph too, as Beaumont et al. [2007a] argue.

We will describe the algorithm for data in the two-dimensional unit square. That is, we assume that the collection of data points X and query points are in [0, 1]². That the vectors are bounded is not a limitation per se: as we discussed previously, we can always normalize vectors into the hypercube without loss of generality. That the algorithm does not naturally extend to high dimensions is a serious limitation, but then again, that is not surprising considering the Delaunay graph is expensive to construct. However, in the next section, we will review heuristics that take the idea to higher dimensions.
# 6.3.2.1 The Probabilistic Model

Much like the lattice network, we assume there is a base graph and a number of randomly generated long-range edges between nodes. For the base graph, Beaumont et al. [2007a] take the Delaunay graph.¹ As for the long-range edges, each node has a directed edge to one other node that is selected at random; the selection protocol is described below.

¹ Beaumont et al. [2007a] additionally connect all nodes that are within δMin ∝ 1/m distance from each other, where δMin is chosen such that the expected number of uniformly distributed points in a ball of radius δMin is 1. We reformulate their method without δMin in the present monograph to simplify their result.
We already know from Theorem 6.4 that, because the network above contains the Delaunay graph, it is navigable by Algorithm 3. What remains to be investigated is what type of long-range edges could reduce the number of hops (i.e., the number of nodes the algorithm must visit as it navigates from an entry node to a target node). Because at each hop the algorithm needs to evaluate distances with O(1) neighbors, improving the number of steps directly improves the time complexity of Algorithm 3 (for the case of k = 1).

Beaumont et al. [2007a] show that, if long-range edges are chosen according to the following protocol, then the number of hops is poly-logarithmic in m. The protocol is simple: For a node u in the graph, first sample α uniformly from [ln δ_*, ln δ^*], where δ_* = min_{v,w∈X} δ(v, w) and δ^* = max_{v,w∈X} δ(v, w). Then choose θ uniformly from [0, 2π], to finally obtain u′ = u + z, where z is the vector [e^α cos θ, e^α sin θ].
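The following sketch implements this sampling step for points in [0, 1]². Note that the rule for turning the sampled point u′ into an actual edge is not included in this excerpt, so the sketch assumes, as a plausible completion, that the edge goes to the data point closest to u′; the helper name is mine.

```python
import numpy as np

def sample_long_range_edges(X: np.ndarray, seed: int = 0) -> dict[int, int]:
    """Sample one long-range contact per node following the protocol above.

    For node u we draw alpha uniformly from [ln(delta_min), ln(delta_max)],
    theta uniformly from [0, 2*pi], and form u' = u + [e^alpha cos(theta),
    e^alpha sin(theta)]. The edge target is assumed to be the data point
    closest to u' (the exact rule is not part of this excerpt).
    """
    rng = np.random.default_rng(seed)
    m = len(X)
    # Pairwise distances give the smallest and largest inter-point distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    delta_min = d[d > 0].min()
    delta_max = d.max()
    edges = {}
    for u in range(m):
        alpha = rng.uniform(np.log(delta_min), np.log(delta_max))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        target = X[u] + np.exp(alpha) * np.array([np.cos(theta), np.sin(theta)])
        nearest = np.argsort(np.linalg.norm(X - target, axis=1))
        # Avoid a self-loop if u itself happens to be closest to u'.
        edges[u] = int(nearest[0]) if nearest[0] != u else int(nearest[1])
    return edges
```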
# 6.3.2.2 The Claim

Given the resulting graph, Beaumont et al. [2007a] state and prove that the average number of hops taken by the best-first-search algorithm is poly-logarithmic. Before we discuss the claim, however, let us state a useful lemma.

Lemma 6.3 The probability that the long-range end-point from a node u lands in a ball centered at another node v with radius βδ(u, v), for some small β ∈ [0, 1], is at least Kβ²/(1 + β)², where K = (2 ln ∆)⁻¹ and ∆ = δ^*/δ_* is the aspect ratio.

Proof. The probability that the long-range end-point lands in an area dS that covers the distance [r, r + dr] and angle [θ, θ + dθ], for small dr and dθ, is:
$$\frac{d\theta}{2\pi} \cdot \frac{\ln(r + dr) - \ln r}{\ln \delta^* - \ln \delta_*} \approx \frac{d\theta}{2\pi} \cdot \frac{dr/r}{\ln \Delta} = \frac{1}{2\pi \ln \Delta} \cdot \frac{r\, d\theta\, dr}{r^2} \approx \frac{dS}{2\pi \ln(\Delta)\, r^2}.$$

Observe now that the distance between a point u and any point in the ball described in the lemma is at most (1 + β)δ(u, v). We can therefore see that the probability that the long-range end-point lands in B(v, βδ(u, v)) is at least:

$$\frac{\pi \beta^2 \delta(u, v)^2}{2\pi \ln(\Delta)\, (1 + \beta)^2 \delta(u, v)^2} = \frac{\beta^2}{2 \ln(\Delta)(1 + \beta)^2},$$

as required. ⊓⊔

Theorem 6.7 Generate a graph G = (V, E) according to the probabilistic model described above, for vectors in [0, 1]² equipped with the Euclidean distance δ(·, ·). The number of nodes visited by the best-first-search algorithm starting from any arbitrary node and ending at a target node is O(log² ∆).
Proof. The proof follows the same reasoning as in the proof of Theorem 6.6. Suppose we are currently at node u and that u∗ is our target node. By Lemma 6.3, the probability that the long-range end-point of u lands in B(u∗, δ(u, u∗)/6) is at least 1/(98 ln ∆). As such, the total number of hops, X, from u to a point in B(u∗, δ(u, u∗)/6) has the following expectation:

$$\mathbb{E}[X] = \sum_{i \geq 0} \mathbb{P}[X > i] \leq \sum_{i \geq 0} \Big(1 - \frac{1}{98 \ln \Delta}\Big)^i = 98 \ln \Delta.$$

Every time the algorithm moves from the current node u to some other node in B(u∗, δ(u, u∗)/6), the distance is shrunk by a factor of 6/5. As such, the total number of hops in expectation is at most:

$$\big(\log_{6/5} \Delta\big) \times \big(98 \ln \Delta\big) = O(\log^2 \Delta).$$ ⊓⊔

We highlight that Beaumont et al. [2007a] choose the interval from which α is sampled differently. Indeed, α in their work is chosen uniformly from the range between δMin ∝ 1/m and √2. Substituting that configuration into Theorem 6.7 gives an expected number of hops that is O(log² m).
# 6.3.3 Approximation

The results of Beaumont et al. [2007a] are encouraging. In theory, so long as we can construct the Delaunay graph, we not only have the optimality guarantee, but we are also guaranteed to have a poly-logarithmic number of hops to reach the optimal answer.

Alas, as we have discussed previously, the Delaunay graph is expensive to build in high dimensions. Moreover, the number of neighbors per node is no longer O(1). So even if we inserted long-range edges into the Delaunay graph, it is not immediate if the time saved by skipping Voronoi regions due to long-range edges offsets the additional time the algorithm spends computing distances between each node along the path and its neighbors.
We are back, then, to approximation with the help of heuristics. Beaumont et al. [2007b] describe one such method in a follow-up study. Their method approximates the Voronoi regions of every node by resorting to a gossip protocol. In this procedure, every node has a list of 3d + 1 of its current neighbors, where d denotes the dimension of the space. In every iteration of the algorithm, every node passes its current list to its neighbors. When a node receives this information, it takes the union of all lists, and finds the subset of 3d + 1 points with the minimal volume. This subset becomes the node's current list of neighbors. While a naïve implementation of the protocol is prohibitively expensive, Beaumont et al. [2007b] discuss an alternative to estimating the volume induced by a set of 3d + 1 points, and to the search for the minimal volume.
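As a rough illustration, here is a naive sketch of one gossip round in Python. Everything beyond what the prose states is an assumption: the volume of a candidate subset is taken to be the volume (area, in two dimensions) of its convex hull computed with SciPy, and the exhaustive search over subsets is precisely the brute force that the follow-up work replaces with a cheaper estimate.

```python
from itertools import combinations

import numpy as np
from scipy.spatial import ConvexHull

def gossip_round(neighbor_lists, points, d):
    """One naive iteration of the gossip protocol described above.

    neighbor_lists[u] is node u's current list of 3d + 1 neighbor ids and
    points[u] is its d-dimensional coordinate vector.
    """
    k = 3 * d + 1
    updated = {}
    for u, nbrs in neighbor_lists.items():
        # Union of u's list with the lists received from its neighbors.
        candidates = set(nbrs)
        for v in nbrs:
            candidates.update(neighbor_lists[v])
        candidates.discard(u)
        # Brute-force search for the subset of 3d + 1 candidates whose convex
        # hull has minimal volume; this is the prohibitively expensive step.
        best, best_volume = list(nbrs), float("inf")
        for subset in combinations(sorted(candidates), k):
            volume = ConvexHull(np.array([points[w] for w in subset])).volume
            if volume < best_volume:
                best, best_volume = list(subset), volume
        updated[u] = best
    return updated
```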
Malkov et al. [2014] take a different approach. They simply permute the vectors in the collection X, and sequentially add each vector to the graph. Every time a vector is inserted into the graph, it is linked to its k nearest neighbors from the current snapshot of the graph. The intuition is that, as the graph grows, the edges added earlier in the evolution of the graph serve as long-range edges in the final graph, and the more recent edges form an approximation of the k-NN graph, which itself is an approximation of the Delaunay graph. Later, Malkov and Yashunin [2020] modify the algorithm by introducing a hierarchy of graphs. The resulting graph has proved successful in practice and, despite its lack of theoretical guarantees, is both effective and highly efficient.
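The incremental construction lends itself to a short sketch. The version below finds each new point's k nearest neighbors by brute force purely for clarity; Malkov et al. [2014] instead reuse the graph's own greedy search for this step, and the hierarchical variant of Malkov and Yashunin [2020] adds further machinery not shown here. All names are illustrative.

```python
import heapq
import random

def build_small_world_graph(points, k, distance):
    """Insert points in a random order; link each new point (undirected)
    to its k nearest neighbors among the points inserted so far."""
    order = list(range(len(points)))
    random.shuffle(order)
    adjacency = {u: set() for u in order}
    for i, u in enumerate(order):
        inserted = order[:i]
        nearest = heapq.nsmallest(
            k, inserted, key=lambda v: distance(points[u], points[v]))
        for v in nearest:
            adjacency[u].add(v)
            adjacency[v].add(u)
    return adjacency
```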
# 6.4 Neighborhood Graphs

In the preceding section, our starting point was the Delaunay graph. We augmented it with random long-range connections to improve the transmission rate through the network. Because the resulting structure contains the Delaunay graph, we get the optimality guarantee of Theorem 6.4 for free. But, as a result of the complexity of the Delaunay construction in high dimensions, we had to approximate the structure instead, losing all guarantees in the process. Frustratingly, the approximate structure obtained by the heuristics we discussed is certainly not a super-graph of the Delaunay graph, nor is it necessarily its sub-graph. In fact, even the fundamental property of connectedness is not immediately guaranteed. There is therefore nothing meaningful to say about the theoretical behavior of such graphs.
In this section, we do the opposite. Instead of adding edges to the Delaunay graph and then resorting to heuristics to create a completely different graph, we prune the edges of the Delaunay graph to find a structure that is its sub-graph. Indeed, we cannot say anything meaningful about the optimality of exact top-k retrieval, but as we will later see, we can state formal results for the approximate top-k retrieval variant—albeit in a very specific case.

The structure we have in mind is known as the Relative Neighborhood Graph (RNG) [Toussaint, 1980, Jaromczyk and Toussaint, 1992]. In an RNG, G = (V, E), for a distance function δ(·, ·), there is an undirected edge between two nodes u, v ∈ V if and only if δ(u, v) ≤ max(δ(u, w), δ(w, v)) for all w ∈ V \ {u, v}. That is, the graph guarantees that, if (u, v) ∈ E, then there is no other point in the collection that is simultaneously closer to u and v than u and v are to each other. Conceptually, then, we can view constructing an RNG as pruning away edges in the Delaunay graph that violate the RNG property.
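For small collections, the RNG property can be checked directly, which gives the following O(m³) reference construction. The function and variable names are ours; the edge test is just the definition above.

```python
def relative_neighborhood_graph(points, distance):
    """Brute-force RNG: keep the undirected edge (u, v) unless some third
    point w is simultaneously closer to both u and v than they are to
    each other."""
    m = len(points)
    edges = set()
    for u in range(m):
        for v in range(u + 1, m):
            d_uv = distance(points[u], points[v])
            blocked = any(
                max(distance(points[u], points[w]),
                    distance(points[w], points[v])) < d_uv
                for w in range(m) if w not in (u, v))
            if not blocked:
                edges.add((u, v))
    return edges
```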
The RNG was shown to contain the Minimum Spanning Tree [Toussaint, 1980], so that it is guaranteed to be connected. It is also provably contained in the Delaunay graph [O’Rourke, 1982] in any metric space and in any number of dimensions. As a final property, it is not hard to see that such a graph G comes with a weak optimality guarantee for the best-first-search algorithm: If q = u∗ ∈ X , then the greedy traversal algorithm returns the node associated with q, no matter where it enters the graph. That is due simply to the following fact: If the current node u is a local optimum but not the global optimum, then there must be an edge connecting u to a node that is closer to u∗. Otherwise, u itself must be connected to u∗.
Later, Arya and Mount [1993] proposed a directed variant of the RNG, which they call the Sparse Neighborhood Graph (SNG), that is arguably more suitable for top-k retrieval. For every node u ∈ V, we apply the following procedure: Let U = V \ {u}. Sort the nodes in U in increasing distance to u. Then, remove the closest node (say, v) from U and add an edge from u to v. Finally, remove from U all nodes w that satisfy δ(u, w) > δ(w, v). The process is repeated until U is empty. It can be immediately seen that the weak optimality guarantee from before still holds in the SNG.

[Figure 6.8: four panels over [0, 1]², (a) α = 1, (b) α = 1.1, (c) α = 1.2, (d) α = 1.3.]
Fig. 6.8: Examples of α-SNGs on a dataset of 20 points drawn uniformly from [0, 1]² (blue circles). When α = 1, we recover the standard SNG. As α becomes larger, the resulting graph becomes more dense.

Neighborhood graphs are the backbone of many graph algorithms for top-k retrieval [Malkov et al., 2014, Malkov and Yashunin, 2020, Harwood and Drummond, 2016, Fu et al., 2019, 2022, Jayaram Subramanya et al., 2019]. While many of these algorithms make for efficient methods in practice, the Vamana construction [Jayaram Subramanya et al., 2019] stands out as it introduces a novel super-graph of the SNG that turns out to have provable theoretical properties. That super-graph is what Indyk and Xu [2023] call an α-shortcut reachable SNG, which we will review next. For brevity, though, we call this graph simply α-SNG.
Fig. 6.9: The sets B_i and rings R_i in the proof of Theorem 6.8.

# 6.4.1 From SNG to α-SNG

Jayaram Subramanya et al. [2019] introduce a subtle adjustment to the SNG construction. In particular, suppose we are processing a node u, have already extracted the node v whose distance to u is minimal among the nodes in U (i.e., v = arg min_{w∈U} δ(u, w)), and are now deciding which nodes to discard from U. In the standard SNG construction, we remove a node w for which δ(u, w) > δ(w, v). But in the modified construction, we instead discard a node w if δ(u, w) > αδ(w, v) for some α > 1. Note that the case of α = 1 simply gives the standard SNG. Figure 6.8 shows a few examples of α-SNGs on a toy dataset.
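The pruning rule is easy to state in code. The sketch below computes the out-edges of a single node; with alpha = 1 it reduces to the SNG procedure of the previous section. The function signature and the use of plain Python lists are our own choices.

```python
def alpha_sng_out_edges(u, candidates, points, distance, alpha=1.0):
    """Out-edges of node u under the alpha-pruning rule described above.

    `candidates` is the set being pruned (the whole collection minus u in
    the exact construction; a smaller search result in the approximate
    construction discussed later in this section).
    """
    remaining = sorted(
        (v for v in candidates if v != u),
        key=lambda v: distance(points[u], points[v]))
    neighbors = []
    while remaining:
        v = remaining.pop(0)  # closest remaining candidate
        neighbors.append(v)
        # Discard every w that v now covers: delta(u, w) > alpha * delta(w, v).
        remaining = [w for w in remaining
                     if distance(points[u], points[w])
                     <= alpha * distance(points[w], points[v])]
    return neighbors
```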
That is what Indyk and Xu [2023] later refer to as an α-shortcut reachable graph. They define α-shortcut reachability as the property that, for any node u, every other node w is either the target of an edge from u (so that (u, w) ∈ E), or there is a node v such that (u, v) ∈ E and δ(u, w) ≥ αδ(w, v). Clearly, the graph constructed by the procedure above is α-shortcut reachable by definition.

# 6.4.1.1 Analysis

Indyk and Xu [2023] present an analysis of the α-SNG for a collection of vectors X with doubling dimension d◦ as defined in Definition 3.2.
For collections with a fixed doubling constant, Indyk and Xu [2023] state two bounds. One gives a bound on the degree of every node in an α-SNG. The other tells us the expected number of hops from any arbitrary entry node to an ε-approximate solution to top-1 queries. The two bounds together give us an idea of the time complexity of Algorithm 3 over an α-SNG as well as its accuracy.

Theorem 6.8 The degree of any node in an α-SNG is O((4α)^{d◦} log ∆) if the collection X has doubling dimension d◦ and aspect ratio ∆ = δ^*/δ_*.

Proof. Consider a node u ∈ V. For each i ∈ [log₂ ∆], define a ball centered at u with radius δ^*/2^i: B_i = B(u, δ^*/2^i). From this, construct rings R_i = B_i \ B_{i+1}. See Figure 6.9 for an illustration.
Because X has a constant doubling dimension, we can cover each R_i with O((4α)^{d◦}) balls of radius δ^*/(α 2^{i+2}). By construction, two points in each of these cover balls are at most δ^*/(α 2^{i+1}) apart. At the same time, the distance from u to any point in a cover ball is at least δ^*/2^{i+1}. By construction, all points in a cover ball except one are discarded as we form u's edges in the α-SNG. As such, the total number of edges from u is bounded by the total number of cover balls, which is O((4α)^{d◦} log ∆). ⊓⊔

Theorem 6.9 If G = (V, E) is an α-SNG for collection X, then Algorithm 3 with k = 1 returns an ((α + 1)/(α − 1) + ε)-approximate top-1 solution by visiting O(log_α(∆/((α − 1)ε))) nodes.
Proof. Suppose q is a query point and u∗ = arg min_{u∈X} δ(q, u). Further assume that the best-first-search algorithm is currently in node v_i with distance δ(q, v_i) to the query. We make the following observations:

• By triangle inequality, we know that δ(v_i, u∗) ≤ δ(v_i, q) + δ(q, u∗); and,
• By construction of the α-SNG, v_i is either connected to u∗ or to another node whose distance to u∗ is shorter than δ(v_i, u∗)/α.

We can conclude that the distance from q to the next node the algorithm visits, v_{i+1}, is at most:

$$\delta(v_{i+1}, q) \leq \delta(v_{i+1}, u^*) + \delta(u^*, q) \leq \frac{\delta(v_i, u^*)}{\alpha} + \delta(u^*, q) \leq \frac{\delta(v_i, q)}{\alpha} + \Big(1 + \frac{1}{\alpha}\Big)\delta(q, u^*).$$

By induction, we see that, if the entry node is s ∈ V:

$$\delta(v_i, q) \leq \frac{\delta(s, q)}{\alpha^i} + (\alpha + 1)\,\delta(q, u^*) \sum_{j=1}^{i} \alpha^{-j} \leq \frac{\delta(s, q)}{\alpha^i} + \frac{\alpha + 1}{\alpha - 1}\,\delta(q, u^*). \qquad (6.1)$$
There are three cases to consider.

Case 1: When δ(s, q) > 2δ^*, then by triangle inequality, δ(q, u∗) > δ(s, q) − δ(s, u∗) > δ(s, q) − δ^* > δ(s, q)/2. Plugging this into Equation (6.1) yields:

$$\delta(v_i, q) \leq \frac{2\,\delta(q, u^*)}{\alpha^i} + \frac{\alpha + 1}{\alpha - 1}\,\delta(q, u^*).$$

As such, for any ε > 0, the algorithm returns an ((α + 1)/(α − 1) + ε)-approximate solution in log_α(2/ε) steps.

Case 2: δ(s, q) ≤ 2δ^* and δ(q, u∗) > (α − 1)δ_*/(4(α + 1)). By Equation (6.1), the algorithm returns an ((α + 1)/(α − 1) + ε)-approximate solution as soon as δ(s, q)/α^i ≤ εδ(q, u∗). So in this case:
$$\frac{\delta(v_i, q)}{\delta(q, u^*)} \leq \frac{2\delta^*}{\alpha^i\,\delta(q, u^*)} + \frac{\alpha + 1}{\alpha - 1} \leq \frac{8(\alpha + 1)\,\delta^*}{\alpha^i (\alpha - 1)\,\delta_*} + \frac{\alpha + 1}{\alpha - 1}.$$

As such, the number of steps to reach the approximation level is log_α(8(α + 1)∆/((α − 1)ε)), which is O(log_α(∆/((α − 1)ε))).

Case 3: δ(s, q) ≤ 2δ^* and δ(q, u∗) ≤ (α − 1)δ_*/(4(α + 1)). Suppose v_i ≠ u∗. Observe that: (a) δ(v_i, u∗) ≥ δ_*; (b) δ(v_i, q) > δ(q, u∗); and (c) δ(q, u∗) < δ_*/2 by assumption. As such, triangle inequality gives us: δ(v_i, q) > δ(v_i, u∗) − δ(u∗, q) > δ_* − δ_*/2 = δ_*/2. Together with Equation (6.1), we obtain:

$$\frac{\delta_*}{2} < \frac{2\delta^*}{\alpha^i} + \frac{\delta_*}{4} \implies \alpha^i \leq 8\Delta \implies i \leq \log_\alpha 8\Delta.$$

The three cases together give the desired result. ⊓⊔
In addition to the bounds above, Indyk and Xu [2023] present negative results for other major SNG-based graph algorithms by proving (via contrived examples) linear-time lower-bounds on their performance. These results together show the significance of the pruning parameter α in the α-SNG construction.

# 6.4.1.2 Practical Construction of α-SNGs

The algorithm described earlier to construct an α-SNG for m points has O(m³) time complexity. That is too expensive for even moderately large values of m. That prompted Jayaram Subramanya et al. [2019] to approximate the α-SNG by way of heuristics.

The starting point in the approximate construction is a random R-regular graph: Every node is connected to R other nodes selected at random. The algorithm then processes each node in random order as follows. Given node u,
it begins by searching the current snapshot of the graph for the top L nodes for the query point u, using Algorithm 3. Denote the returned set of nodes by S. It then performs the pruning algorithm by setting U = S \ {u}, rather than U = V \ {u}. That is the gist of the modified construction procedure.2 Naturally, we lose all guarantees for approximate top-k retrieval as a result [Indyk and Xu, 2023]. We do, however, obtain a more practical algorithm instead that, as the authors show, is both efficient and effective.
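Below is a sketch of this construction loop, under the same caveat as the prose: several details of the full algorithm are omitted. The helpers are supplied by the caller and their names are illustrative: `greedy_search` is assumed to run best-first search (Algorithm 3) over the current graph and return the ids of the L nodes closest to the query, and `prune` applies the α-pruning rule (for instance, the `alpha_sng_out_edges` sketch from earlier).

```python
import random

def build_pruned_graph(points, R, L, alpha, distance, greedy_search, prune):
    """Approximate alpha-SNG construction in the spirit described above."""
    m = len(points)
    # Start from a random R-regular (directed) graph.
    graph = {u: set(random.sample([v for v in range(m) if v != u], R))
             for u in range(m)}
    order = list(range(m))
    random.shuffle(order)
    for u in order:
        # Search the current snapshot with u itself as the query ...
        candidates = greedy_search(graph, points, u, L)
        # ... and prune only the returned candidates, not the whole collection.
        graph[u] = set(prune(u, [c for c in candidates if c != u],
                             points, distance, alpha))
    return graph
```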
# 6.5 Closing Remarks

This chapter deviated from the pattern we got accustomed to so far in the monograph. The gap between theory and practice in Chapters 4 and 5 was narrow or none. That gap is rather wide, on the other hand, in graph-based retrieval algorithms. Making theory work in practice required a great deal of heuristics and approximations.

Another major departure is the activity in the respective bodies of literature. Whereas trees and hash families have reached a certain level of maturity, the literature on graph algorithms is still evolving, actively so. A quick search through scholarly articles shows growing interest in this class of algorithms. This monograph itself presented results that were obtained very recently.
There is good reason for the uptick in research activity. Graph algorithms are among the most successful algorithms there are for top-k vector retrieval. They are often remarkably fast during retrieval and produce accurate solution sets.

That success makes it all the more enticing to improve their other characteristics. For example, graph indices are often large, requiring far too much memory. Incorporating compression into graphs, therefore, is a low-hanging fruit that has been explored [Singh et al., 2021] but needs further investigation. More importantly, finding an even sparser graph without losing accuracy is key in reducing the size of the graph to begin with, and that boils down to designing better heuristics.

Heuristics play a key role in the construction time of graph indices too. Building a graph index for a collection of billions of points, for example, is not feasible for the variant of the Vamana algorithm that offers theoretical guarantees. Heuristics introduced in that work lost all such guarantees, but made the graph more practical.

Enhancing the capabilities of graph indices too is an important practical consideration. For example, when the graph is too large and, so, must rest on disk, optimizing disk access is essential in maintaining the speed of query processing [Jayaram Subramanya et al., 2019]. When the collection of vectors
is live and dynamic, the graph index must naturally handle deletions and insertions in real-time [Singh et al., 2021]. When vectors come with metadata and top-k retrieval must be constrained to the vectors that pass a certain set of metadata filters, then a greedy traversal of the graph may prove sub-optimal [Gollapudi et al., 2023]. All such questions warrant extensive (often applied) research and go some way to make graph algorithms more attractive to production systems.

There is thus no shortage of practical research questions. However, the aforementioned gap between theory and practice should not dissuade us from developing better theoretical algorithms. The models that explained the small world phenomenon may not be directly applicable to top-k retrieval in high dimensions, but they inspired heuristics that led to the state of the art. Finding theoretically-sound edge sets that improve over the guarantees offered by Vamana could form the basis for other, more successful heuristics too.

2 We have omitted minor but important details of the procedure in our prose. We refer the interested reader to [Jayaram Subramanya et al., 2019] for a description of the full algorithm.

# References

S. Arya and D. M. Mount. Approximate nearest neighbor queries in fixed dimensions. In Proceedings of the 4th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 271–280, 1993.
Y. Bachrach, Y. Finkelstein, R. Gilad-Bachrach, L. Katzir, N. Koenigstein, N. Nice, and U. Paquet. Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 257–264, 2014.

O. Beaumont, A.-M. Kermarrec, L. Marchal, and E. Riviere. Voronet: A scalable object network based on voronoi tessellations. In 2007 IEEE International Parallel and Distributed Processing Symposium, pages 1–10, 2007a.

O. Beaumont, A.-M. Kermarrec, and É. Rivière. Peer to peer multidimensional overlays: Approximating complex structures. In Principles of Distributed Systems, pages 315–328, 2007b.

M. Brito, E. Chávez, A. Quiroz, and J. Yukich. Connectivity of the mutual k-nearest-neighbor graph in clustering and outlier detection. Statistics & Probability Letters, 35(1):33–42, 1997.
E. Chávez and E. S. Tellez. Navigating k-nearest neighbor graphs to solve nearest neighbor searches. In Proceedings of the 2nd Mexican Conference on Pattern Recognition: Advances in Pattern Recognition, pages 270–280, 2010.

J. Chen, H.-r. Fang, and Y. Saad. Fast approximate knn graph construction for high dimensional data via recursive lanczos bisection. Journal of Machine Learning Research, 10:1989–2012, 12 2009.

M. Connor and P. Kumar. Fast construction of k-nearest neighbor graphs for point clouds. IEEE Transactions on Visualization and Computer Graphics, 16(4):599–608, 2010.

B. Delaunay. Sur la sphère vide. Bulletin de l'Académie des Sciences de l'URSS. Classe des sciences mathématiques et naturelles, 1934(6):793–800, 1934.

W. Dong, C. Moses, and K. Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th International Conference on World Wide Web, pages 577–586, 2011.
H. Edelsbrunner and N. R. Shah. Incremental topological flipping works for regular triangulations. In Proceedings of the 8th Annual Symposium on Computational Geometry, pages 43–52, 1992.

S. Fortune. Voronoi Diagrams and Delaunay Triangulations, pages 377–388. CRC Press, Inc., 1997.

C. Fu and D. Cai. Efanna: An extremely fast approximate nearest neighbor search algorithm based on knn graph, 2016.

C. Fu, C. Xiang, C. Wang, and D. Cai. Fast approximate nearest neighbor search with the navigating spreading-out graph. Proceedings of the VLDB Endowment, 12(5):461–474, 1 2019.

C. Fu, C. Wang, and D. Cai. High dimensional similarity search with satellite system graph: Efficiency, scalability, and unindexed query compatibility. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8):4139–4150, 2022.
S. Gollapudi, N. Karia, V. Sivashankar, R. Krishnaswamy, N. Begwani, S. Raz, Y. Lin, Y. Zhang, N. Mahapatro, P. Srinivasan, A. Singh, and H. V. Simhadri. Filtered-diskann: Graph algorithms for approximate nearest neighbor search with filters. In Proceedings of the ACM Web Conference 2023, pages 3406–3416, 2023.

L. Guibas and J. Stolfi. Primitives for the manipulation of general subdivisions and the computation of voronoi diagrams. ACM Transactions on Graphics, 4(2):74–123, 04 1985.

L. J. Guibas, D. E. Knuth, and M. Sharir. Randomized incremental construction of delaunay and voronoi diagrams. Algorithmica, 7(1–6):381–413, 3 1992.

K. Hajebi, Y. Abbasi-Yadkori, H. Shahbazi, and H. Zhang. Fast approximate nearest-neighbor search with k-nearest neighbor graph. In Twenty-Second International Joint Conference on Artificial Intelligence, 2011.
B. Harwood and T. Drummond. Fanng: Fast approximate nearest neighbour graphs. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 5713–5722, 2016.

P. Indyk and H. Xu. Worst-case performance of popular approximate nearest neighbor search implementations: Guarantees and limitations. In Proceedings of the 36th Conference on Neural Information Processing Systems, 2023.

J. Jaromczyk and G. Toussaint. Relative neighborhood graphs and their relatives. Proceedings of the IEEE, 80(9):1502–1517, 1992.

S. Jayaram Subramanya, F. Devvrit, H. V. Simhadri, R. Krishnawamy, and R. Kadekodi. Diskann: Fast accurate billion-point nearest neighbor search on a single node. In Advances in Neural Information Processing Systems, volume 32, 2019.

J. Travers and S. Milgram. An experimental study of the small world problem. Sociometry, 32(4):425–443, 1969.

J. Kleinberg. The small-world phenomenon: An algorithmic perspective. In Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, pages 163–170, 2000.
W. Li, Y. Zhang, Y. Sun, W. Wang, M. Li, W. Zhang, and X. Lin. Approximate nearest neighbor search on high dimensional data — experiments, analyses, and improvement. IEEE Transactions on Knowledge and Data Engineering, 32(8):1475–1488, 2020.

J. Liu, X. Yan, X. Dai, Z. Li, J. Cheng, and M. Yang. Understanding and improving proximity graph based maximum inner product search. In AAAI Conference on Artificial Intelligence, 2019.

Y. Malkov, A. Ponomarenko, A. Logvinov, and V. Krylov. Approximate nearest neighbor algorithm based on navigable small world graphs. Information Systems, 45:61–68, 2014.

Y. A. Malkov and D. A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824–836, 4 2020.
S. Milgram. The Small-World Problem. Psychology Today, 1(1):61–67, 1967.

S. Morozov and A. Babenko. Non-metric similarity graphs for maximum inner product search. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 4726–4735, 2018.

G. Navarro. Searching in metric spaces by spatial approximation. The VLDB Journal, 11(1):28–46, 08 2002.

J. O'Rourke. Computing the relative neighborhood graph in the l1 and l∞ metrics. Pattern Recognition, 15(3):189–192, 1982.

A. Singh, S. J. Subramanya, R. Krishnaswamy, and H. V. Simhadri. Freshdiskann: A fast and accurate graph-based ann index for streaming similarity search, 2021.

G. T. Toussaint. The relative neighbourhood graph of a finite planar set. Pattern Recognition, 12(4):261–268, 1980.

P. M. Vaidya. An O(n log n) algorithm for the all-nearest-neighbors problem. Discrete and Computational Geometry, 4(2):101–115, 12 1989.
M. Wang, X. Xu, Q. Yue, and Y. Wang. A comprehensive survey and experimental comparison of graph-based approximate nearest neighbor search. Proceedings of the VLDB Endowment, 14(11):1964–1978, July 2021.

Z. Zhou, S. Tan, Z. Xu, and P. Li. Möbius transformation for fast inner product search on graph. In Advances in Neural Information Processing Systems, volume 32, 2019.

# Chapter 7 Clustering

Abstract We have seen index structures that manifest as trees, hash tables, and graphs. In this chapter, we will introduce a fourth way of organizing data points: clusters. It is perhaps the most natural and the simplest of the four methods, but also the least theoretically-justified. We will see why that is as we describe the details of clustering-based algorithms to top-k retrieval.
# 7.1 Algorithm

As usual, we begin by indexing a collection of m data points X ⊂ R^d. Except in this paradigm, that involves invoking a clustering function, ζ : R^d → [C], that is appropriate for the distance function δ(·, ·), to map every data point to one of C clusters, where C is an arbitrary parameter. A typical choice for ζ is the KMeans algorithm with C = O(√m). We then organize X into a table whose row i records the subset of points that are mapped to the i-th cluster: ζ⁻¹(i) ≜ {u | u ∈ X, ζ(u) = i}.

Accompanying the index is a routing function τ : R^d → [C]^ℓ. It takes an arbitrary point q as input and returns ℓ clusters that are more likely to contain the nearest neighbor of q with respect to δ. In a typical instance of this framework, τ(·) is defined as follows:

$$\tau(q) = \operatorname*{arg\,min}_{i \in [C]}^{(\ell)} \; \delta\Big(q,\; \frac{1}{|\zeta^{-1}(i)|} \sum_{u \in \zeta^{-1}(i)} u\Big). \qquad (7.1)$$
Fig. 7.1: Illustration of the clustering-based retrieval method. The collection of points (left) is first partitioned into clusters (regions enclosed by dashed boundary on the right). When processing a query q using Equation (7.1), we compute δ(q, ·) for the centroid (solid circles) of every cluster and conduct our search over the ℓ “closest” clusters. When processing a query q, we take a two-step approach. We first obtain the list of clusters returned by τ (q), then solve the top-k retrieval problem over the union of the identified clusters. Figure 7.1 visualizes this procedure. Notice that, the search for top-ℓ clusters by using Equation (7.1) and the secondary search over the clusters identified by τ are themselves instances of the approximate top-k retrieval problem. The parameter C determines the amount of effort that must be spent in each of the two phases of search: When C = 1, the cluster retrieval problem is solved trivially, whereas as C → ∞, cluster retrieval becomes equivalent to top-k retrieval over the entire collection. Interestingly, these operations can be delegated to a subroutine that itself uses a tree-, hash-, graph-, or even a clustering-based solution. That is, a clustering-based approach can be easily paired with any of the previously discussed methods!
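A minimal sketch of both phases follows. It assumes a fitted KMeans-like object exposing `predict` and `cluster_centers_` (as in scikit-learn), uses the stored centroids in place of the cluster means of Equation (7.1), and returns only the top-1 result for brevity; all names are illustrative.

```python
import numpy as np

def build_clustered_index(X, C, kmeans):
    """Map every row of X to one of C clusters and build the cluster table."""
    assignments = kmeans.predict(X)
    table = {i: np.where(assignments == i)[0] for i in range(C)}
    return kmeans.cluster_centers_, table

def clustered_top1(q, X, centroids, table, ell, distance):
    """Two-step query processing: route q to the ell closest centroids,
    then search exhaustively over the union of those clusters."""
    routed = np.argsort([distance(q, c) for c in centroids])[:ell]
    candidates = np.concatenate([table[i] for i in routed])
    return min(candidates, key=lambda j: distance(q, X[j]))
```

For MIPS, `distance` can simply be the negative inner product; the two-step structure is unchanged.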
This simple protocol—with some variant of KMeans as ζ and τ as in Equation (7.1)—works well in practice [Auvolat et al., 2015, Jégou et al., 2011, Bruch et al., 2023b, Babenko and Lempitsky, 2012, Chierichetti et al., 2007]. We present the results of our own experiments on various real-world datasets in Figure 7.2. This method owes its success to the empirical phenomenon that real-world data points tend to follow a multi-modal distribution, naturally forming clusters around each mode. By identifying these clusters and grouping data points together, we reduce the search space at the expense of retrieval quality.

However, to date, no formal analysis has been presented to quantify the retrieval error. The choice of ζ and τ, too, have been left largely unexplored, with KMeans and Equation (7.1) as default answers. It is, for example, not known if KMeans is the right choice for a given δ. Or, whether clustering with spillage, where each data point may belong to multiple clusters, might reduce the overall error, as it did in Spill Trees. It is also an open question if, for a particular choice of ζ and δ, there exists a more effective routing
function—including learnt functions tailored to a query distribution—that uses higher-order statistics from the cluster distributions.

Fig. 7.2: Performance of the clustering-based retrieval method on various real-world collections, described in Appendix A. The figure shows top-1 accuracy versus the number of clusters, ℓ, considered by the routing function τ(·) as a percentage of the number of clusters C, for (a) MIPS and (b) NN. In these experiments, we set C = √m, where m = |X| is the size of the collection, and use spherical KMeans (MIPS) and standard KMeans (NN) to form clusters.
In spite of these shortcomings, the algorithmic framework above contains a fascinating insight that is actually useful for a rather different end-goal: vector compression, or more precisely, quantization. We will unpack this connection in Chapter 9.

# 7.2 Closing Remarks

This chapter departed entirely from the theme of this monograph. Whereas we are generally able to say something intelligent about trees, hash functions, and graphs, top-k retrieval by clustering has emerged entirely based on our intuition that data points naturally form clusters. We cannot formally determine, for example, the behavior of the retrieval system as a function of the clustering algorithm itself, the number of clusters, or the routing function. All that must be determined empirically.

What we do observe often in practice, however, is that clustering-based top-k retrieval is efficient [Paulevé et al., 2010, Auvolat et al., 2015, Bruch et al., 2023b, Jégou et al., 2011], at least in the case of Nearest Neighbor search with Euclidean distance, where KMeans is a theoretically appropriate choice. It is efficient in the sense that retrieval accuracy often reaches an acceptable level after probing a few top-ranking clusters as identified by Equation (7.1).
That we have a method that is efficient in practice, but its efficiency and the conditions under which it is efficient are unexplained, constitutes a substantial gap and thus presents multiple consequential open questions. These questions involve optimal clustering, routing, and bounds on retrieval accuracy.

When the distance function is the Euclidean distance and our objective is to learn the Voronoi regions of data points, the KMeans clustering objective makes sense. We can even state formal results regarding the optimality of the resulting clustering [Arthur and Vassilvitskii, 2007]. That argument is no longer valid when the distance function is based on inner product, where we must learn the inner product Voronoi cones, and where some points may have an empty Voronoi region. What objective we must optimize for MIPS, therefore, is an open question that, as we saw in this chapter, has been partially explored in the past [Guo et al., 2020].
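For reference, the spherical KMeans variant used for the MIPS experiments in Figure 7.2 replaces the Euclidean assignment step with an inner-product assignment against unit-norm centroids. The sketch below is a bare-bones illustration of that heuristic only, not a proposal for the "right" MIPS clustering objective; all names are assumptions.

```python
import numpy as np

def spherical_kmeans(X, num_clusters, iters=20, seed=0):
    """Bare-bones spherical KMeans: centroids live on the unit sphere and
    points are assigned by inner product rather than Euclidean distance."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), num_clusters, replace=False)].astype(float)
    centroids /= (np.linalg.norm(centroids, axis=1, keepdims=True) + 1e-12)
    for _ in range(iters):
        assignment = np.argmax(X @ centroids.T, axis=1)   # assign by inner product
        for c in range(num_clusters):
            members = X[assignment == c]
            if len(members) == 0:
                continue                                   # keep the old centroid if a cluster empties
            mean = members.mean(axis=0)
            centroids[c] = mean / (np.linalg.norm(mean) + 1e-12)
    return centroids, assignment
```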
Even when we know what the right clustering algorithm is, there is still the issue of "balance" that we must understand how to handle. It would, for example, be far from ideal if the clusters end up having very different sizes. Unfortunately, that happens quite naturally if the data points have highly variable norms and the clustering algorithm is based on KMeans: Data points with large norms become isolated, while vectors with small norms form massive clusters.

What has been left entirely untouched is the routing machinery. Equation (7.1) is the de facto routing function, but one that is possibly sub-optimal. That is because Equation (7.1) uses the mean of the data points within a cluster as the representative or sketch of that cluster. When clusters are highly concentrated around their mean, such a sketch accurately reflects the potential of each cluster. But when clusters have different shapes, higher-order statistics from the cluster may be required to accurately route queries to clusters. So the question we are faced with is the following: What is a good sketch of each cluster? Is there a coreset of data points within each cluster that leads to better routing of queries during retrieval? Can we quantify the probability of error—in the sense that the cluster containing the optimal solution is not returned by the routing function—given a sketch?
We may answer these questions differently if we had some idea of what the query distribution looks like. Assuming access to a set of training queries, it may be possible to learn a more optimal sketch using supervised learning methods. Concepts from learning-to-rank [Bruch et al., 2023a] seem particularly relevant to this setup. To see how, note that the outcome of the routing function is to identify the cluster that contains the optimal data point for a query. We could view this as ranking clusters with respect to a query, where we wish for the "correct" cluster to appear at the top of the ranked list. Given this mental model, we can evaluate the quality of a routing function using any of the many ranking quality metrics such as Reciprocal Rank (defined as the reciprocal of the rank of the correct cluster). Learning a ranking function that maximizes Reciprocal Rank can then be done indirectly by optimizing a custom cross entropy-based surrogate, as proved by Bruch et al. [2019] and Bruch [2021].
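As a concrete illustration of this view, the sketch below fits a linear routing model with a plain softmax cross entropy, where the label for each training query is the cluster that holds its exact top-1 data point. This uses an ordinary cross entropy rather than the specific surrogates studied in the cited works, and every name and shape is an assumption.

```python
import torch

def train_router(queries, target_cluster, num_clusters, dim, epochs=10):
    """Learn a linear routing model (scores = W q + b), trained so that the
    cluster holding the true nearest neighbor of each training query ranks first."""
    model = torch.nn.Linear(dim, num_clusters)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        scores = model(queries)              # (num_queries, num_clusters)
        loss = loss_fn(scores, target_cluster)
        loss.backward()
        optimizer.step()
    return model

# At query time, probe the top-ell clusters by learned score instead of Equation (7.1):
#   probe = torch.topk(model(q), k=ell).indices
```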
Perhaps the more important open question is understanding when clustering is efficient and why. Answering that question may require exploring the connection between clustering-based top-k retrieval, branch-and-bound algorithms, and LSH.

Take any clustering algorithm, ζ. If one could show formally that ζ behaves like an LSH family, then clustering-based top-k retrieval simply collapses to LSH. In that case, not only do the results from that literature apply, but the techniques developed for LSH (such as multi-probe LSH) too port over to clustering. Similarly, one may adopt the view that finding the top cluster is a series of decisions, each determining which side of a hyperplane a point falls on. Whereas in Random Partition Trees or Spill Trees, such decision hyperplanes were random directions, here the hyperplanes are correlated. Nonetheless, that insight could help us produce clusters with spillage, where data points belong to multiple clusters, and in a manner that helps reduce the overall error.

# References

D. Arthur and S. Vassilvitskii. K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035, 2007.

A. Auvolat, S. Chandar, P. Vincent, H. Larochelle, and Y. Bengio. Clustering is efficient for approximate maximum inner product search, 2015.
A. Babenko and V. Lempitsky. The inverted multi-index. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3069–3076, 2012.

S. Bruch. An alternative cross entropy loss for learning-to-rank. In Proceedings of the Web Conference 2021, pages 118–126, 2021.

S. Bruch, X. Wang, M. Bendersky, and M. Najork. An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pages 75–78, 2019.

S. Bruch, C. Lucchese, and F. M. Nardini. Efficient and effective tree-based and neural learning to rank. Foundations and Trends in Information Retrieval, 17(1):1–123, 2023a.

S. Bruch, F. M. Nardini, A. Ingber, and E. Liberty. Bridging dense and sparse maximum inner product search, 2023b.
F. Chierichetti, A. Panconesi, P. Raghavan, M. Sozio, A. Tiberi, and E. Upfal. Finding near neighbors through cluster pruning. In Proceedings of the Twenty-Sixth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 103–112, 2007.

R. Guo, P. Sun, E. Lindgren, Q. Geng, D. Simcha, F. Chern, and S. Kumar. Accelerating large-scale inference with anisotropic vector quantization. In Proceedings of the 37th International Conference on Machine Learning, 2020.

H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.

L. Paulevé, H. Jégou, and L. Amsaleg. Locality sensitive hashing: A comparison of hash function types and querying mechanisms. Pattern Recognition Letters, 31(11):1348–1358, 2010.
# Chapter 8 Sampling Algorithms

Abstract Nearly all of the data structures and algorithms we reviewed in the previous chapters are designed specifically for either nearest neighbor search or maximum cosine similarity search. MIPS is typically an afterthought. It is often cast as NN or MCS through a rank-preserving transformation and subsequently solved using one of these algorithms. That is so because inner product is not a proper metric, making MIPS different from the other vector retrieval variants. In this chapter, we review algorithms that are specifically designed for MIPS and that connect MIPS to the machinery underlying multi-arm bandits.

# 8.1 Intuition

That inner product is different can be a curse and a blessing. We have already discussed that curse at length, but in this chapter, we will finally learn something positive. And that is the fact that inner product is a linear function of data points and can be easily decomposed into its parts, thereby opening a unique path to solving MIPS.

The overarching idea in what we refer to as sampling algorithms is to avoid computing inner products. Instead, we either directly approximate the likelihood of a data point being the solution to MIPS (or, equivalently, its rank), or estimate its inner product with a query (i.e., its score). As we will see shortly, in both instances, we rely heavily on the linearity of inner product to estimate probabilities and derive bounds.
Approximating the ranks or scores of data points uses some form of sampling: we either sample data points according to a distribution defined by inner products, or sample a dimension to compute partial inner products with and eliminate sub-optimal data points iteratively. In the former, the more frequently a data point is sampled, the more likely it is to be the solution to MIPS. In the latter, the more dimensions we sample, the closer we get to computing full inner products. Generally, then, the more samples we draw, the more accurate our solution to MIPS becomes.

An interesting property of using sampling to solve MIPS is that, regardless of what we are approximating, we can decide when to stop! That is, if we are given a time budget, we draw as many samples as our time budget allows and return our best guess of the solutions based on the information we have collected up to that point. The number of samples, in other words, serves as a knob that trades off accuracy for speed.

The remainder of this chapter describes these algorithms in much greater detail. Importantly, we will see how linearity makes the approximation-through-sampling feasible and efficient.
# 8.2 Approximating the Ranks

We are interested in finding the top-k data points with the largest inner product with a query q ∈ Rd, from a collection X ⊂ Rd of m points. Suppose that we had an efficient way of sampling a data point from X where the point u ∈ X has probability proportional to ⟨q, u⟩ of being selected. If we drew a sufficiently large number of samples, the data point with the largest inner product with q would be selected most frequently. The data point with the second largest inner product would similarly be selected with the second highest frequency, and so on. So, if we counted the number of times each data point has been sampled, the resulting histogram would be a good approximation to the rank of each data point with respect to inner product with q.

That is the gist of the sampling algorithm we examine in this section. But while the idea is rather straightforward, making it work requires addressing a few critical gaps. The biggest challenge is drawing samples according to the distribution of inner products without actually computing any of the inner products! That is because, if we needed to compute ⟨q, u⟩ for all u ∈ X, then we could simply sort data points accordingly and return the top-k; no need for sampling and the rest.
The key to tackling that challenge is the linearity of inner product. Following a few simple derivations using Bayes' theorem, we can break up the sampling procedure into two steps, each using marginal distributions only [Lorenzen and Pham, 2021, Ballard et al., 2015, Cohen and Lewis, 1997, Ding et al., 2019]. Importantly, one of these marginal distributions can be computed offline as part of indexing. That is the result we will review next.

# 8.2.1 Non-negative Data and Queries

We wish to draw a data point with probability that is proportional to its inner product with a query: P[u | q] ∝ ⟨q, u⟩. For now, we assume that u, q ⪰ 0 for all u ∈ X and queries q. Let us decompose this probability along each dimension as follows:

P[u | q] = Σ_{t=1}^{d} P[t | q] P[u | t, q],   (8.1)

where the first term in the sum is the probability of sampling a dimension t ∈ [d] and the second term is the likelihood of sampling u given a particular dimension. We can model each of these terms as follows:

P[t | q] ∝ Σ_{u∈X} q_t u_t = q_t Σ_{u∈X} u_t,   (8.2)

and,
P[u | t, q] = P[u ∧ t | q] / P[t | q] ∝ q_t u_t / (q_t Σ_{v∈X} v_t) = u_t / Σ_{v∈X} v_t.   (8.3)

In the above, we have assumed that Σ_{v∈X} v_t ≠ 0; if that sum is 0 we can simply discard the t-th dimension.

What we have done above allows us to draw a sample according to P[u | q] by, instead, drawing a dimension t according to P[t | q] first, then drawing a data point u according to P[u | t, q].

Sampling from these multinomial distributions requires constructing the distributions themselves. Luckily, P[u | t, q] is independent of q. Its distribution can therefore be computed offline: we create d tables, where the t-th table has m rows recording the probability of each data point being selected given dimension t using Equation (8.3). We can then use the alias method [Walker, 1977] to draw samples from these distributions using O(1) operations.
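Below is a minimal sketch of this two-step sampler for non-negative data and queries, following Equations (8.1)–(8.3): the per-dimension tables are built offline, and at query time we draw a dimension, then a data point, and tally a histogram. For brevity it uses numpy's generic categorical sampler instead of proper alias tables [Walker, 1977]; names and sizes are illustrative assumptions.

```python
import numpy as np

def build_index(X):
    """Offline: for each dimension t, store P[u | t] = u_t / sum_v v_t (Equation (8.3))."""
    col_sums = X.sum(axis=0)                        # sum_v v_t for every dimension t
    P_u_given_t = X / np.where(col_sums == 0, 1.0, col_sums)
    return col_sums, P_u_given_t

def sample_counts(q, col_sums, P_u_given_t, num_samples, rng):
    """Online: draw t ~ P[t | q] (Equation (8.2)), then u ~ P[u | t], and count draws."""
    p_t = q * col_sums
    p_t = p_t / p_t.sum()
    counts = np.zeros(P_u_given_t.shape[0], dtype=int)
    for t in rng.choice(len(q), size=num_samples, p=p_t):
        u = rng.choice(P_u_given_t.shape[0], p=P_u_given_t[:, t])
        counts[u] += 1
    return counts

rng = np.random.default_rng(0)
X = rng.random((1000, 32))                          # non-negative data points
q = rng.random(32)
col_sums, P = build_index(X)
counts = sample_counts(q, col_sums, P, num_samples=5000, rng=rng)
top = np.argsort(-counts)[:10]                      # candidates; re-rank by exact inner product
print(top[np.argsort(-(X[top] @ q))])
```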
The distribution over dimensions given a query, P[t | q], must be computed online using Equation (8.2), which requires O(d) operations, assuming we compute Σ_{u∈X} u_t offline for each t and store them in our index. Again, using the alias method, we can subsequently draw samples with O(1) operations.

The procedure described above provides us with an efficient mechanism to perform the desired sampling. If we were to draw S samples, that could be done in O(d + S), where the O(d) term is needed to construct the multinomial distribution that defines P[t | q]. As we draw samples, we maintain a histogram over the m data points, counting the number of times each point has been sampled. In the end, we can identify the top-k′ (for k′ ≥ k) points based on these counts, compute their inner products with the query, and return the top-k points as the final solution set. All these operations together have time complexity O(d + S + m log k′ + k′d), with S typically being the dominant term.

# 8.2.2 The General Case

When the data points or queries may be negative, the algorithm described in the previous section will not work as is. To extend the sampling framework to general, real vectors, we must make a few minor adjustments.
First, we must ensure that the marginal distributions are valid. That is easy to do: In Equations (8.2) and (8.3), we replace each term with its absolute value. So, P[t | q] becomes proportional to Σ_{u∈X} |q_t u_t|, and P[u | t, q] ∝ |u_t| / Σ_{v∈X} |v_t|. We then use the resulting distributions to sample data points as before, but every time a data point u is sampled, instead of incrementing its count in the histogram by one, we add Sign(q_t u_t) to its entry. As the following lemma shows, in expectation, the final count is proportional to ⟨q, u⟩.

Lemma 8.1 Define the random variable Z as 0 if data point u ∈ X is not sampled and Sign(q_t u_t) if it is, for a query q ∈ Rd and a sampled dimension t. Then E[Z] = ⟨q, u⟩ / Σ_{t=1}^{d} Σ_{v∈X} |q_t v_t|.

Proof.

E[Z | t] = Sign(q_t u_t) P[u | t] = Sign(q_t u_t) |u_t| / Σ_{v∈X} |v_t| = q_t u_t / Σ_{v∈X} |q_t v_t|.

Taking expectation over the dimension t yields:
E[Z] = E_t[ E[Z | t] ] = Σ_{t=1}^{d} P[t | q] · q_t u_t / Σ_{v∈X} |q_t v_t|
     = Σ_{t=1}^{d} ( Σ_{v∈X} |q_t v_t| / Σ_{t′=1}^{d} Σ_{v∈X} |q_{t′} v_{t′}| ) · q_t u_t / Σ_{v∈X} |q_t v_t|
     = ⟨q, u⟩ / Σ_{t=1}^{d} Σ_{v∈X} |q_t v_t|. ⊓⊔

# 8.2.3 Sample Complexity

We have formalized an efficient way to sample data points according to the distribution of inner products, and subsequently collect the most frequently-sampled points. But how many samples must we draw in order to accurately identify the top-k solution set? Ding et al. [2019] give an answer in the form of the following theorem for top-1 MIPS.

Before stating the result, it would be helpful to introduce a few shorthands. Let N = Σ_{t=1}^{d} Σ_{v∈X} |q_t v_t| be a normalizing factor. For a vector u ∈ X, denote by Δ_u the scaled gap between the maximum inner product and the inner product of u and q: Δ_u = ⟨q, u* − u⟩/N.
If S is the number of samples to be drawn, for a vector u, denote by Z_{u,i} a random variable that is 0 if u was not sampled in round i, and otherwise Sign(q_t u_t) if t is the sampled dimension. Once the sampling has concluded, the final value for point u is simply Z_u = Σ_i Z_{u,i}. Note that, from Lemma 8.1, we have that E[Z_{u,i}] = ⟨q, u⟩/N.

Given the notation above, let us also introduce the following helpful lemma.

Lemma 8.2 Let C_u = Σ_{t=1}^{d} |q_t u_t| for a data point u. Then for a pair of distinct vectors u, v ∈ X:

E[(Z_{u,i} − Z_{v,i})²] = (C_u + C_v)/N,

and,

Var[Z_{u,i} − Z_{v,i}] = (C_u + C_v)/N − (⟨q, u − v⟩/N)².

Proof. The proof is similar to the proof of Lemma 8.1. ⊓⊔

Theorem 8.1 Suppose u* is the exact solution to MIPS over m points in X for query q. Define σ_u² = Var[Z_{u,i} − Z_{u*,i}] and let Δ = min_{u≠u*} Δ_u. For δ ∈ (0, 1), if we drew S samples such that:
S ≥ max_{u≠u*} (1 + Δ_u)² / ( σ_u² h( Δ_u(1 + Δ_u)/σ_u² ) ) · log(m/δ),

where h(x) = (1 + x) log(1 + x) − x, then P[Z_{u*} > Z_u ∀ u ≠ u*] ≥ 1 − δ.

Before proving the theorem above, let us make a quick observation. Clearly σ_u² ≤ O(dΔ_u) and (1 + Δ_u) ≈ 1. Because h(·) is monotone increasing in its argument (∂h/∂x = log(1 + x) ≥ 0 for x ≥ 0), we can write:
Plugging this into Theorem 8.1 gives us S ≤ O( d # ∆ log m δ ). Theorem 8.1 tells us that, if we draw O( d δ ) samples, we can iden- tify the top-1 solution to MIPS with high probability. Observe that, ∆ is a measure of the difficulty of the query: When inner products are close to each other, ∆ becomes smaller, implying that a larger number of samples would be needed to correctly identify the exact solution. Proof of Theorem 8.1. Consider the probability that the registered value of a data point u is greater than or equal to the registered value of the solution u∗ once sampling has concluded. That is, P[Zu ≥ Zu∗ ]. Let us rewrite that quantity as follows: P [Zu > Zu-| =P [x Lua - Zuri > 0| Ss Yua Yu Notice that E[Y,,,;] = 0 and that Y,,,;’s are independent. Furthermore, Y,,,; < 1+ A,. Letting Y, = 30; Yu, we can apply Bennett’s inequality to bound the probability above: 32 P[Y. > yu) < exp ( _ 5% a(S *840)), a+A, So2 u Setting the right-hand-side to δ m , we arrive at:
exp( − ( S σ_u² / (1 + Δ_u)² ) h( Δ_u(1 + Δ_u)/σ_u² ) ) ≤ δ/m  ⟹  S (1 + Δ_u)^{−2} σ_u² h( Δ_u(1 + Δ_u)/σ_u² ) ≥ log(m/δ).

It is easy to see that for x > 0, h(x) > 0. Observing that Δ_u(1 + Δ_u)/σ_u² is positive, that implies that h(Δ_u(1 + Δ_u)/σ_u²) > 0, and therefore we can re-arrange the expression above as follows:

S ≥ (1 + Δ_u)² / ( σ_u² h( Δ_u(1 + Δ_u)/σ_u² ) ) · log(m/δ).   (8.4)

We have thus far shown that when S satisfies the inequality in (8.4), then P[Y_u ≥ y_u] ≤ δ/m. Going back to the claim, we derive the following bound using the result above:

P[Z_{u*} > Z_u ∀u ∈ X] = 1 − P[∃ u ∈ X s.t. Z_{u*} ≤ Z_u] ≥ 1 − m · (δ/m) = 1 − δ,

where we have used the union bound to obtain the inequality. ⊓⊔
# 8.3 Approximating the Scores

The method we have just presented avoids the computation of inner products altogether but estimates the rank of each data point with respect to a query using a sampling procedure. In this section, we introduce another sampling method that approximates the inner product of every data point instead.

Let us motivate our next algorithm with a rather contrived example. Suppose that our data points and queries are in R2, with the first coordinate of vectors drawing values from N(0, σ_1²) and the second coordinate from N(0, σ_2²). If we were to compute the inner product of q with every vector u ∈ X, we would need to perform two multiplications and a sum: u_1 q_1 + u_2 q_2. That gives us the exact "score" of every point with respect to q. But if σ_1² ≫ σ_2², then by computing q_1 u_1 for all u ∈ X, it is very likely that we have a good approximation to the final inner product. So we may use the partial inner product as a high-confidence estimate of the full inner product.
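A quick numerical illustration of this effect; the variances and sizes below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10_000
# First coordinate has much larger variance than the second (sigma_1 >> sigma_2).
X = np.stack([rng.normal(0, 10.0, m), rng.normal(0, 0.1, m)], axis=1)
q = np.array([rng.normal(0, 10.0), rng.normal(0, 0.1)])

full = X @ q               # exact scores u_1 q_1 + u_2 q_2
partial = X[:, 0] * q[0]   # partial scores using only the first dimension

# The partial score alone is highly correlated with the full score and
# typically already identifies the true MIPS solution.
print(np.argmax(full) == np.argmax(partial))
print(np.corrcoef(full, partial)[0, 1])
```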
That is the core idea in this section. For each data point, we sample a few dimensions without replacement, and compute its partial inner product with the query along the chosen dimensions. Based on the scores so far, we can eliminate data points whose full inner product is projected, with high confidence, to be too small to make it to the top-k set. We then repeat the procedure by sampling more dimensions for the remaining data points, until we reach a stopping criterion.

The process above saves us time by shrinking the set of data points and computing only partial inner products in each round. But we must decide how we should sample dimensions and how we should determine which data points to discard. The objective is to minimize the number of samples needed to identify the solution set. These are the questions that Liu et al. [2019] answered in their work, which we will review next. We note that, even though Liu et al. [2019] use the Bandit language [Lattimore and Szepesvári, 2020] to describe their algorithm, we find it makes for a clearer presentation if we avoided the Bandit terminology.
# Algorithm 4: The BoundedME algorithm for MIPS.

Input: Query point q ∈ Rd; k ≥ 1 for top-k retrieval; confidence parameters ϵ, δ ∈ (0, 1); and data points X ⊂ Rd.
Result: (1 − δ)-confident ϵ-approximate top-k set to MIPS with respect to q.

1: i ← 1
2: X_1 ← X   ▷ Initialize the solution set to X.
3: ϵ_1 ← ϵ/4 and δ_1 ← δ/2
4: A_u ← 0 ∀u ∈ X_1   ▷ A is a score accumulator.
5: t_0 ← 0
6: while |X_i| > k do
7:   t_i ← h( (2/ϵ_i²) log( 2(|X_i| − k) / ((⌊(|X_i| − k)/2⌋ + 1) δ_i) ) )
8:   for u ∈ X_i do
9:     Let J be (t_i − t_{i−1}) dimensions sampled without replacement
10:    A_u ← A_u + Σ_{j∈J} u_j q_j   ▷ Compute partial inner product.
11:  end for
12:  Let α be the ⌈(|X_i| − k)/2⌉-th score in A
13:  X_{i+1} ← {u ∈ X_i s.t. A_u ≥ α}
14:  ϵ_{i+1} ← (3/4) ϵ_i, δ_{i+1} ← δ_i/2, and i ← i + 1
15: end while
16: return X_i
# 8.3.1 The BoundedME Algorithm

The top-k retrieval algorithm developed by Liu et al. [2019] is presented in Algorithm 4. It is important to note that, for the algorithm to be correct—as we will explain later—each partial inner product must be bounded. In other words, for query q, any data point u ∈ X, and any dimension t, we must have that q_t u_t ∈ [a, b] for some fixed interval. This is not a restrictive assumption, however: q can always be normalized without affecting the solution to MIPS, and data points u can be scaled into the hypercube. In their work, Liu et al. [2019] assume that partial inner products are in the unit interval.

This iterative algorithm begins with the full collection of data points and removes almost half of the data points in each iteration. It terminates as soon as the total number of data points left is at most k. In each iteration of the algorithm, it accumulates partial inner products for all remaining data points along a set of sampled dimensions. Once a dimension has been sampled, it is removed from consideration in all future iterations—hence, sampling without replacement.
The number of dimensions to sample is adaptive and changes from iteration to iteration. It is determined using the quantity on Line 7 of the algorithm, where the function h(·) is defined as follows:

h(x) = min{ (1 + x)/(1 + x/d), (x + x/d)/(1 + x/d) }.   (8.5)

At the end of iteration i with the remaining data points in X_i, the algorithm finds the ⌈(|X_i| − k)/2⌉-th (i.e., close to the median) partial inner product accumulated so far, and discards data points whose score is less than that threshold. It then updates the confidence parameters ϵ and δ, and proceeds to the next iteration.

It is rather obvious that the total number of dimensions along which the algorithm computes partial inner products for any given data point can never exceed d. That is simply because, once Line 10 is executed, the dimensions in the set J defined on Line 9 are never considered for sampling in future iterations. As a result, in the worst case, the algorithm computes full inner products in O(md) operations.
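To convey the shape of the computation, here is a deliberately simplified sketch of the iterative halving idea in Python. It uses a fixed per-round dimension budget in place of the adaptive quantity on Line 7 and a simple "keep roughly the top half" rule, so it illustrates the mechanics only; it is not a faithful implementation of BoundedME, and all names are assumptions.

```python
import numpy as np

def boundedme_sketch(X, q, k, budget_per_round=8, rng=None):
    """Simplified halving loop: accumulate partial inner products over dimensions
    sampled without replacement, and keep roughly the top half of survivors each round."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m, d = X.shape
    alive = np.arange(m)                 # indices of surviving data points
    scores = np.zeros(m)                 # accumulated partial inner products
    remaining_dims = rng.permutation(d)  # dimensions not yet sampled
    while len(alive) > k and len(remaining_dims) > 0:
        take = min(budget_per_round, len(remaining_dims))
        J, remaining_dims = remaining_dims[:take], remaining_dims[take:]
        scores[alive] += X[np.ix_(alive, J)] @ q[J]
        keep = max(k, (len(alive) + k) // 2)          # drop about half of the excess
        alive = alive[np.argsort(-scores[alive])[:keep]]
    return alive[np.argsort(-(X[alive] @ q))[:k]]     # final ranking by full inner product

rng = np.random.default_rng(0)
X, q = rng.normal(size=(5000, 128)), rng.normal(size=128)
print(boundedme_sketch(X, q, k=10, rng=rng))
```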
As for the time complexity of Algorithm 4, it can be shown that it requires O( (m√d/ϵ) √log(1/δ) ) operations. That is simply due to the fact that in each iteration, the number of data points is cut in half, combined with the inequality h(x) ≤ O(√(dx)) for x > 0.

Theorem 8.2 The time complexity of Algorithm 4 is O( (m√d/ϵ) √log(1/δ) ).

Theorem 8.2 says that the time complexity of Algorithm 4 is linear in the number of data points m, but sub-linear in the number of dimensions d. That is a fundamentally different behavior than all the other algorithms we have presented thus far throughout the preceding chapters.

Proof of Theorem 8.2. Let us first show the following claim: h(x) ≤ O(√(dx)) for x > 0. To prove that, observe that h(x) is the minimum of two positive values a and b. As such, h(x) ≤ √(ab). Substituting a and b with the right expressions from Equation (8.5):
h(x) ≤ √( (1 + x)/(1 + x/d) · (x + x/d)/(1 + x/d) ) = √( x(1 + x)(1 + 1/d) ) / (1 + x/d) = O( √( x(1 + x)/(1 + x/d) ) ) ≤ O(√(dx)).

Note that, in the i-th iteration there are at most m/2^i data points to examine. Moreover, for each data point that is eliminated in round i, we will have computed at most t_i partial inner products (see Line 7 of Algorithm 4). Using these facts, we can calculate the time complexity as follows:
The proof of Theorem 8.3 requires the concentration inequality due to Bar- denet and Maillard [2015], repeated below for completeness. Lemma 8.3 Let J ⊂ [0, 1] be a finite set of size d with mean µ. Let {J1, J2, . . . , Jn} be n < d samples from J without replacement. Then for any n ≤ d and any δ ∈ [0, 1] it holds: P(t s Voi 85) >1-6, where ρn is defined as follows: . ne 1 7 f 1 Pn min {1 d (1 pu | )}. The lemma above guarantees that, with probability at least 1 − δ, the empirical mean of the samples does not exceed the mean of the universe by a specific amount that depends on δ. We now wish to adapt that result to derive a similar guarantee where the difference between means is bounded by an arbitrary parameter ϵ. That is stated in the following lemma. Lemma 8.4 Let J ⊂ [0, 1] be a finite set of size d with mean µ. Let {J1, J2, . . . , Jn} be n < d samples from J without replacement. Then for any ϵ, δ ∈ (0, 1), if we have that:
n ≥ min{ (1 + x)/(1 + x/d), (x + x/d)/(1 + x/d) },

where x = log(1/δ)/(2ϵ²), then the following holds:

P[ (1/n) Σ_{t=1}^{n} J_t − µ ≤ ϵ ] ≥ 1 − δ.
20. )n − x ≥ 0. To make the closed-form solution more manageable, Liu et al. [2019] relax the problem above and solve n in the following problem instead. Note that, any solution to the problem below is a valid solution to the problem above. (1+ yn? -(@=“)n-2-130 = [a+ 5)n-2-1][n +1] 20 ‘ 1 + & — ne l+a/d By combining the two cases, we obtain: n ≥ min{ 1 + x 1 + x/d , x + x/d 1 + x/d }. 121 ⊓⊔ 122 8 Sampling Algorithms Lemma 8.4 gives us the minimum number of dimensions we must sample so that the partial inner product of a vector with a query is at most ϵ away from the full inner product, with probability at least 1 − δ. Armed with this result, we can now proceed to proving the main theorem. Proof of Theorem 8.3. Denote by ζi the k-th largest full inner product among the set of data points Xi in iteration i. If we showed that, for two consec- utive iterations, the difference between ζi and ζi+1 does not exceed ϵi with probability at least 1 − δi, that is:
P [<i — Gini < «| 21-46, (8.6) then the theorem immediately follows: P [61 — Gogm S¢] 21-6, because: logm log m 5 5 Y= La sd% and, ala Kee in Me alo a Kio Na L | logm log m # Equation So we focus on proving Equation (8.6). Suppose we are in the i-th iteration. Collect in Zϵi every data point in u ∈ Xi such that ζi − ⟨q, u⟩ ≤ ϵi. That is: Zϵi = {u ∈ Xi | ζi − ⟨q, u⟩ ≤ ϵi}. If at least k elements of Zϵi end up in Xi+1, the event ζi − ζi+1 ≤ ϵi succeeds. So, that event fails if there are more than ⌊ |Xi|−k ⌋ data points in Xi \ Zϵi with partial inner products that are greater than partial inner products of the data points in Zϵi . Denote the number of such data points by β.
What is the probability that a data point u in ¥; \ Z., has a higher partial inner product than any data point in Z,,? Assuming that u* is the data point that achieves ¢;, we can write: PA, >A, Vue Z.) < P[A. >A, | PA, >A, Vue Z.) < P[A. >A, | <P[A, > (gu) +S V Au SG 3] <P[A. > (qu) + S$] +P [Aw <6 - $]. We can apply Lemma 8.4 to obtain that, if the number of sampled dimensions is equal to the quantity on Line 7 of Algorithm 4, then the probability above would be bounded by: 8.4 Closing Remarks ⌊ |Xi|−k 2 ⌋ + 1 |Xi| − k δi. Using this result along with Markov’s inequality, we can bound the prob- ability that β is strictly greater than ⌊ |Xi|−k ⌋ as follows: 2 Xi) —k E[8 piss! 5 ' 1 s oo 2 That completes the proof of Equation (8.6) and, therefore, the theorem. ⊓⊔ # 8.4 Closing Remarks
That completes the proof of Equation (8.6) and, therefore, the theorem. ⊓⊔ # 8.4 Closing Remarks The algorithms in this chapter were unique in two ways. First, they directly took on the challenging problem of MIPS. This is in contrast to earlier chap- ters where MIPS was only an afterthought. Second, there is little to no pre- processing involved in the preparation of the index, which itself is small in size. That is unlike trees, hash buckets, graphs, and clustering that require a generally heavy index that itself is computationally-intensive to build. The approach itself is rather unique as well. It is particularly interesting because the trade-off between efficiency and accuracy can be adjusted during retrieval. That is not the case with trees, LSH, or graphs, where the con- struction of the index itself heavily influences that balance. With sampling methods, it is at least theoretically possible to adapt the retrieval strategy to the hardness of the query distribution. That question remains unexplored.
Another area that would benefit from further research is the sampling strategy itself. In particular, in the BoundedME algorithm, the dimensions that are sampled next are drawn randomly. While that choice simplifies the analysis, which follows the analysis of popular Bandit algorithms, it is not hard to argue that the strategy is sub-optimal. After all, unlike the Bandit setup, where reward distributions are unknown and samples from them are revealed only gradually, here we have direct access to all data points a priori. Whether and how adapting the sampling strategy to the underlying data or query distribution may improve the error bounds, or the accuracy and efficiency of the algorithm in practice, remains to be studied.
# Part III Compression

# Chapter 9 Quantization