Does Sketching Work?

Community Article Published November 20, 2023

This post is replicated from my personal website. Thanks to lunarflu for inviting me to share it here!


I'm excited to share that my paper, Fast and forward stable randomized algorithms for linear least-squares problems, has been released as a preprint on arXiv.

With the release of this paper, now seemed like a great time to discuss a topic I’ve been wanting to write about for a while: sketching. Over the past two decades, sketching has become a widely used algorithmic tool in matrix computations. Despite this long history, questions still seem to linger about whether sketching really works.

In this post, I want to take a critical look at the question "does sketching work"? Answering this question requires answering two basic questions:

  1. What is sketching?
  2. What would it mean for sketching to work?

I think a large part of the disagreement over the efficacy of sketching boils down to different answers to these questions. By considering different possible answers to these questions, I hope to provide a balanced perspective on the utility of sketching as an algorithmic primitive for solving linear algebra problems.

Sketching

In matrix computations, sketching is really a synonym for (linear) dimensionality reduction. Suppose we are solving a problem involving one or more high-dimensional vectors $b \in \mathbb{R}^n$ or perhaps a tall matrix $A \in \mathbb{R}^{n\times k}$. A sketching matrix is a $d\times n$ matrix $S \in \mathbb{R}^{d\times n}$ where $d \ll n$. When multiplied into a high-dimensional vector $b$ or tall matrix $A$, the sketching matrix $S$ produces compressed or "sketched" versions $Sb$ and $SA$ that are much smaller than the original vector $b$ and matrix $A$.

[Figure: a sketching matrix $S$ compresses the long vector $b$ and tall matrix $A$ into a much shorter vector $Sb$ and smaller matrix $SA$.]

Let $\mathsf{E}=\{x_1,\ldots,x_p\}$ be a collection of vectors. For $S$ to be a "good" sketching matrix for $\mathsf{E}$, we require that $S$ preserves the lengths of every vector in $\mathsf{E}$ up to a distortion parameter $\varepsilon>0$:

$$(1-\varepsilon) \|x\| \le \|Sx\| \le (1+\varepsilon)\|x\| \tag{1}$$

for every $x$ in $\mathsf{E}$.

For linear algebra problems, we often want to sketch a matrix $A$. In this case, the appropriate set $\mathsf{E}$ that we want our sketch to be "good" for is the column space of the matrix $A$, defined to be

$$\operatorname{col}(A) \coloneqq \{ Ax : x \in \mathbb{R}^k \}.$$

Remarkably, there exist many sketching matrices that achieve distortion $\varepsilon$ for $\mathsf{E}=\operatorname{col}(A)$ with an output dimension of roughly $d \approx k/\varepsilon^2$. In particular, the sketching dimension $d$ is proportional to the number of columns $k$ of $A$. This is pretty neat! We can design a single sketching matrix $S$ which preserves the lengths of all infinitely-many vectors $Ax$ in the column space of $A$.
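To make the distortion condition (1) concrete, here is a small numerical check in MATLAB. This is only an illustrative sketch: it uses a Gaussian sketching matrix (introduced in the next section), tests a single vector in the column space rather than all of them, and the dimensions are arbitrary choices.

% Illustrative check of the distortion condition (1) for one vector in col(A)
n = 1e4; k = 20; d = 800;           % arbitrary example dimensions
A = randn(n, k);                    % a tall test matrix
S = randn(d, n) / sqrt(d);          % Gaussian sketching matrix (see next section)
x = randn(k, 1);                    % Ax is a random vector in col(A)
distortion = abs(norm(S*(A*x)) / norm(A*x) - 1) % typically much smaller than 1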

Sketching Matrices

There are many types of sketching matrices, each with different benefits and drawbacks. Many sketching matrices are based on randomized constructions in the sense that the entries of $S$ are chosen to be random numbers. Broadly, sketching matrices can be classified into two types:

  • Data-dependent sketches. The sketching matrix $S$ is constructed to work for a specific set of input vectors $\mathsf{E}$.
  • Oblivious sketches. The sketching matrix $S$ is designed to work for an arbitrary set of input vectors $\mathsf{E}$ of a given size (i.e., $\mathsf{E}$ has $p$ elements) or dimension (i.e., $\mathsf{E}$ is a $k$-dimensional linear subspace).

We will only discuss oblivious sketching in this post. We will look at three types of sketching matrices: Gaussian embeddings, subsampled randomized trigonometric transforms, and sparse sign embeddings.

The details of how these sketching matrices are built and their strengths and weaknesses can be a little bit technical. All three constructions are independent of the rest of this article and can be skipped on a first reading. The main point is that good sketching matrices exist and are fast to apply: reducing $b\in\mathbb{R}^n$ to $Sb\in\mathbb{R}^{d}$ requires roughly $\mathcal{O}(n\log n)$ operations, rather than the $\mathcal{O}(dn)$ operations we would expect to multiply a $d\times n$ matrix and a vector of length $n$. Here, $\mathcal{O}(\cdot)$ is big O notation.

Gaussian Embeddings

The simplest type of sketching matrix $S\in\mathbb{R}^{d\times n}$ is obtained by (independently) setting every entry of $S$ to be a Gaussian random number with mean zero and variance $1/d$. Such a sketching matrix is called a Gaussian embedding. (Here, embedding is a synonym for sketching matrix.)
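In MATLAB, a Gaussian embedding is essentially a one-liner. The following is a minimal sketch, assuming the dimensions n and d and the data b and A are already defined:

% Gaussian embedding: iid mean-zero, variance-1/d entries
S = randn(d, n) / sqrt(d);
Sb = S * b; % sketched vector
SA = S * A; % sketched matrix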

Benefits. Gaussian embeddings are simple to code up, requiring only a standard matrix product to apply to a vector $Sb$ or matrix $SA$. Gaussian embeddings admit a clean theoretical analysis, and their mathematical properties are well-understood.

Drawbacks. Computing $Sb$ for a Gaussian embedding costs $\mathcal{O}(dn)$ operations, significantly slower than the other sketching matrices we will consider below. Additionally, generating and storing a Gaussian embedding can be computationally expensive.

Subsampled Randomized Trigonometric Transforms

The subsampled randomized trigonometric transform (SRTT) sketching matrix takes a more complicated form. The sketching matrix is defined to be a scaled product of three matrices

$$S = \sqrt{\frac{n}{d}} \cdot R \cdot F \cdot D.$$

These matrices have the following definitions:

  • $D\in\mathbb{R}^{n\times n}$ is a diagonal matrix whose entries are each a random $\pm 1$ (chosen independently with equal probability).
  • $F\in\mathbb{R}^{n\times n}$ is a fast trigonometric transform such as a fast discrete cosine transform. (One can also use the ordinary fast Fourier transform, but this results in a complex-valued sketch.)
  • $R\in\mathbb{R}^{d\times n}$ is a selection matrix. To generate $R$, let $i_1,\ldots,i_d$ be a random subset of $\{1,\ldots,n\}$, selected without replacement. $R$ is defined to be a matrix for which $Rb = (b_{i_1},\ldots,b_{i_d})$ for every vector $b$.

To store $S$ on a computer, it is sufficient to store the $n$ diagonal entries of $D$ and the $d$ selected coordinates $i_1,\ldots,i_d$ defining $R$. Multiplication of $S$ against a vector $b$ should be carried out by applying each of the matrices $D$, $F$, and $R$ in sequence, such as in the following MATLAB code:

% Generate randomness for S
signs = 2*randi(2,n,1)-3; % diagonal entries of D (random +/-1)
idx = randsample(n,d);    % indices i_1,...,i_d defining R

% Multiply S against b
c = signs .* b;    % multiply by D
c = dct(c);        % multiply by F
c = c(idx);        % multiply by R
c = sqrt(n/d) * c; % scale

Benefits. $S$ can be applied to a vector $b$ in $\mathcal{O}(n \log n)$ operations, a significant improvement over the $\mathcal{O}(dn)$ cost of a Gaussian embedding. The SRTT has the lowest memory and random number generation requirements of any of the three sketches we discuss in this post.

Drawbacks. Applying $S$ to a vector requires a good implementation of a fast trigonometric transform. Even with a high-quality trig transform, SRTTs can be significantly slower than sparse sign embeddings (defined below). For an example, see Figure 2 in this paper. SRTTs are hard to parallelize, though block SRTTs are more parallelizable. In theory, the sketching dimension should be chosen to be $d \approx (k\log k)/\varepsilon^2$, larger than for a Gaussian sketch.

Sparse Sign Embeddings

A sparse sign embedding takes the form

$$S = \frac{1}{\sqrt{\zeta}} \begin{bmatrix} s_1 & s_2 & \cdots & s_n \end{bmatrix}.$$

Here, each column $s_i$ is an independently generated random vector with exactly $\zeta$ nonzero entries, each a random $\pm 1$ placed in a uniformly random position. The result is a $d\times n$ matrix with only $\zeta \cdot n$ nonzero entries. The parameter $\zeta$ is often set to a small constant like $8$ in practice. (This recommendation comes from the following paper, and I've used this parameter setting successfully in my own work.)
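Here is one way such a matrix might be assembled in MATLAB's built-in sparse format. This is only an illustrative sketch (not the sparsesign routine from my paper that appears later in this post), and it assumes the dimensions d and n are already defined:

% Illustrative construction of a d x n sparse sign embedding
zeta = 8;                                 % nonzeros per column
rows = zeros(zeta*n, 1); cols = zeros(zeta*n, 1); vals = zeros(zeta*n, 1);
for j = 1:n
    pos = (j-1)*zeta + (1:zeta)';
    rows(pos) = randsample(d, zeta);      % zeta distinct row positions
    cols(pos) = j;
    vals(pos) = 2*randi(2, zeta, 1) - 3;  % random +/-1 signs
end
S = sparse(rows, cols, vals, d, n) / sqrt(zeta);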

Benefits. By using a dedicated sparse matrix library, $S$ can be very fast to apply to a vector $b$ (either $\mathcal{O}(n)$ or $\mathcal{O}(n\log k)$ operations, depending on parameter choices; see below). With a good sparse matrix library, sparse sign embeddings are often the fastest sketching matrix by a wide margin.

Drawbacks. To be fast, sparse sign embeddings require a good sparse matrix library. They require generating and storing roughly $\zeta n$ random numbers, more than SRTTs (roughly $n$ numbers) but far fewer than Gaussian embeddings (exactly $dn$ numbers). In theory, the sketching dimension should be chosen to be $d \approx (k\log k)/\varepsilon^2$ and the sparsity should be set to $\zeta \approx (\log k)/\varepsilon$; the sketching dimension sanctioned by existing theory is larger than for a Gaussian sketch. In practice, we can often get away with using $d \approx k/\varepsilon^2$ and $\zeta=8$.

Summary

Using either SRTTs or sparse sign embeddings, sketching a vector of length $n$ down to $d$ dimensions requires only $\mathcal{O}(n)$ to $\mathcal{O}(n\log n)$ operations. Applying a sketch to an entire $n\times k$ matrix $A$ thus requires roughly $\mathcal{O}(nk)$ operations. Therefore, sketching offers the promise of speeding up linear algebraic computations involving $A$, which typically take $\mathcal{O}(nk^2)$ operations.

How Can You Use Sketching?

The simplest way to use sketching is to first apply the sketch to dimensionality-reduce all of your data and then apply a standard algorithm to solve the problem using the reduced data. This approach to using sketching is called sketch-and-solve.

As an example, let’s apply sketch-and-solve to the least-squares problem:

$$\operatorname*{minimize}_{x\in\mathbb{R}^k} \|Ax - b\|. \tag{2}$$

We assume this problem is highly overdetermined, with $A$ having many more rows $n$ than columns $k$.

To solve this problem with sketch-and-solve, generate a good sketching matrix $S$ for the set $\mathsf{E} = \operatorname{col}(\begin{bmatrix} A & b\end{bmatrix})$. Applying $S$ to our data $A$ and $b$, we get a dimensionality-reduced least-squares problem

$$\operatorname*{minimize}_{\hat{x}\in\mathbb{R}^k} \|(SA)\hat{x} - Sb\|. \tag{3}$$

The solution $\hat{x}$ is the sketch-and-solve solution to the least-squares problem, which we can use as an approximate solution to the original least-squares problem.

Least-squares is just one example of the sketch-and-solve paradigm. We can also use sketching to accelerate other algorithms. For instance, we could apply sketch-and-solve to clustering. To cluster data points $x_1,\ldots,x_p$, first apply sketching to obtain $Sx_1,\ldots,Sx_p$ and then apply an out-of-the-box clustering algorithm like k-means to the sketched data points, as in the sketch below.
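As a small, hypothetical MATLAB illustration, suppose the data points are stored as the columns of an n-by-p matrix X, S is a sketching matrix, and num_clusters is the desired number of clusters (all assumed to be defined already):

% Sketch-and-solve clustering: sketch the points, then run k-means
SX = S * X;                          % column i is the sketched point S*x_i
labels = kmeans(SX', num_clusters);  % kmeans expects observations as rows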

Does Sketching Work?

Most often, when sketching critics say "sketching doesn't work", what they mean is "sketch-and-solve doesn't work".

To address this question in a more concrete setting, let's go back to the least-squares problem (2). Let $x_\star$ denote the optimal least-squares solution and let $\hat{x}$ be the sketch-and-solve solution (3). Then, using the distortion condition (1), one can show that

$$\|A\hat{x} - b\| \le \frac{1+\varepsilon}{1-\varepsilon} \|Ax_\star - b\|.$$

If we use a sketching matrix with a distortion of $\varepsilon = 1/3$, then this bound tells us that

$$\|A\hat{x} - b\| \le 2\|Ax_\star - b\|. \tag{4}$$

Is this a good result or a bad result? Ultimately, it depends. In some applications, the quality of a putative least-squares solution $\hat{x}$ can be assessed from the residual norm $\|A\hat{x} - b\|$. For such applications, the bound (4) ensures that $\|A\hat{x} - b\|$ is at most twice $\|Ax_\star-b\|$. Often, this means $\hat{x}$ is a pretty decent approximate solution to the least-squares problem.

For other problems, the appropriate measure of accuracy is the so-called forward error $\|\hat{x} - x_\star\|$, measuring how close $\hat{x}$ is to $x_\star$. For these cases, it is possible that $\|\hat{x} - x_\star\|$ might be large even though the residuals are comparable, as in (4).

Let's see an example, using the MATLAB code from my paper:

[A, b, x, r] = random_ls_problem(1e4, 1e2, 1e8, 1e-4); % Random LS problem
S = sparsesign(4e2, 1e4, 8); % Sparse sign embedding
sketch_and_solve = (S*A) \ (S*b); % Sketch-and-solve
direct = A \ b; % MATLAB mldivide

Here, we generate a random least-squares problem of size 10,000 by 100 (with condition number $10^8$ and residual norm $10^{-4}$). Then, we generate a sparse sign embedding with output dimension $d = 400$ (corresponding to a distortion of roughly $\varepsilon \approx 1/2$). Finally, we compute the sketch-and-solve solution and, as a reference, a "direct" solution by MATLAB's \.

We compare the quality of the sketch-and-solve solution to the direct solution, using both the residual and forward error:

fprintf('Residuals: sketch-and-solve %.2e, direct %.2e, optimal %.2e\n',...
           norm(b-A*sketch_and_solve), norm(b-A*direct), norm(r))
fprintf('Forward errors: sketch-and-solve %.2e, direct %.2e\n',...
           norm(x-sketch_and_solve), norm(x-direct))

Here's the output:

Residuals: sketch-and-solve 1.13e-04, direct 1.00e-04, optimal 1.00e-04
Forward errors: sketch-and-solve 1.06e+03, direct 8.08e-07

The sketch-and-solve solution has a residual norm of $1.13\times 10^{-4}$, close to the direct method's residual norm of $1.00\times 10^{-4}$. However, the forward error of sketch-and-solve is $1\times 10^3$, nine orders of magnitude larger than the direct method's forward error of $8\times 10^{-7}$.

Does sketch-and-solve work? Ultimately, it's a question of what kind of accuracy you need for your application. If a small-enough residual is all that's needed, then sketch-and-solve is perfectly adequate. If small forward error is needed, sketch-and-solve can be quite bad.

One way sketch-and-solve can be improved is by increasing the sketching dimension $d$ and lowering the distortion $\varepsilon$. Unfortunately, improving the distortion of the sketch is expensive. Because of the relation $d \approx k/\varepsilon^2$, decreasing the distortion by a factor of ten requires increasing the sketching dimension $d$ by a factor of one hundred! Thus, sketch-and-solve is really only appropriate when a modest degree of accuracy suffices.

Iterative Sketching: Combining Sketching with Iteration

Sketch-and-solve is a fast way to get a low-accuracy solution to a least-squares problem. But it's not the only way to use sketching for least-squares. One can also use sketching to obtain high-accuracy solutions by combining sketching with an iterative method.

There are many iterative methods for least-squares problems. Iterative methods generate a sequence of approximate solutions $x_1,x_2,\ldots$ that we hope will converge at a rapid rate to the true least-squares solution $x_\star$.

To use sketching to solve least-squares problems iteratively, we can use the following observation:

If $S$ is a sketching matrix for $\mathsf{E} = \operatorname{col}(A)$, then $(SA)^\top SA \approx A^\top A$.

Therefore, if we compute a QR factorization

$$SA = QR,$$

then

$$A^\top A \approx (SA)^\top (SA) = R^\top Q^\top Q R = R^\top R.$$

Notice that we used the fact that $Q^\top Q = I$ since $Q$ has orthonormal columns. The conclusion is that $R^\top R \approx A^\top A$.

Let’s use the approximation $R^\top R \approx A^\top A$ to solve the least-squares problem iteratively. Start off with the normal equations [Footnote 1]

$$(A^\top A)x = A^\top b. \tag{5}$$

We can obtain an approximate solution to the least-squares problem by replacing $A^\top A$ by $R^\top R$ in (5) and solving. The resulting solution is

$$x_0 = R^{-1} (R^{-\top}(A^\top b)).$$

This solution $x_0$ will typically not be a good solution to the least-squares problem (2), so we need to iterate. To do so, we’ll try and solve for the error $x - x_0$. To derive an equation for the error, subtract $A^\top A x_0$ from both sides of the normal equations (5), yielding

$$(A^\top A)(x-x_0) = A^\top (b-Ax_0).$$

Now, to solve for the error, substitute $R^\top R$ for $A^\top A$ again and solve for $x$, obtaining a new approximate solution $x_1$:

$$x\approx x_1 \coloneqq x_0 + R^{-1}(R^{-\top}(A^\top(b-Ax_0))).$$

We can now go another step: derive an equation for the error $x-x_1$, approximate $A^\top A \approx R^\top R$, and obtain a new approximate solution $x_2$. Continuing this process, we obtain an iteration

$$x_{i+1} = x_i + R^{-1}(R^{-\top}(A^\top(b-Ax_i))).\tag{6}$$

This iteration is known as the iterative sketching method. [Footnote 2]
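Here is a minimal MATLAB sketch of iterative sketching. It assumes A, b, and a sketching matrix S are already available, and it runs a fixed number of iterations rather than testing for convergence:

% Iterative sketching (6): a minimal sketch, not production code
[~, R] = qr(S*A, 0);                 % economy-size QR of the sketched matrix
x = R \ (R' \ (A'*b));               % initial iterate x_0
for i = 1:20                         % fixed iteration count, for simplicity
    resid = b - A*x;                 % residual b - A*x_i
    x = x + R \ (R' \ (A'*resid));   % update (6)
end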

Let's apply iterative sketching to the example we considered above. We show the forward error of the sketch-and-solve and direct methods as horizontal dashed purple and red lines. Iterative sketching begins at roughly the forward error of sketch-and-solve, with the error decreasing at an exponential rate until it reaches that of the direct method over the course of fourteen iterations. For this problem, at least, iterative sketching gives high-accuracy solutions to the least-squares problem!

[Figure: forward error of iterative sketching versus iteration number, with the forward errors of sketch-and-solve and the direct method shown as horizontal dashed lines.]

To summarize, we've now seen two very different ways of using sketching:

  • Sketch-and-solve. Sketch the data $A$ and $b$ and solve the sketched least-squares problem (3). The resulting solution $\hat{x}$ is cheap to obtain, but may have low accuracy.
  • Iterative sketching. Sketch the matrix $A$ and obtain an approximation $R^\top R = (SA)^\top (SA)$ to $A^\top A$. Use the approximation $R^\top R$ to produce a sequence of better-and-better least-squares solutions $x_i$ by the iteration (6). If we run for enough iterations $q$, the accuracy of the iterative sketching solution $x_q$ can be quite high.

By combining sketching with more sophisticated iterative methods such as conjugate gradient and LSQR, we can get an even faster-converging least-squares algorithm, known as sketch-and-precondition. Here's the same plot from above with sketch-and-precondition added; we see that sketch-and-precondition converges even faster than iterative sketching does!

[Figure: forward error versus iteration number for iterative sketching and sketch-and-precondition, compared to sketch-and-solve and the direct method.]
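Sketch-and-precondition can be prototyped using MATLAB's built-in lsqr solver, with the triangular factor R of the sketched matrix supplied as a preconditioner. The snippet below is a minimal sketch under that assumption (the tolerance and iteration count are arbitrary choices), not the implementation used to produce the plots in this post:

% Sketch-and-precondition with the default zero initial iterate: a minimal sketch
[~, R] = qr(S*A, 0);            % R from a QR factorization of S*A
x = lsqr(A, b, 1e-12, 100, R);  % LSQR on the least-squares problem, preconditioned by R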

"Does sketching work?" Even for a simple problem like least-squares, the answer is complicated:

A direct use of sketching (i.e., sketch-and-solve) leads to a fast, low-accuracy solution to least-squares problems. But sketching can achieve much higher accuracy for least-squares problems by combining sketching with an iterative method (iterative sketching and sketch-and-precondition).

We've focused on least-squares problems in this section, but these conclusions could hold more generally. If "sketching doesn't work" in your application, maybe it would if it were combined with an iterative method.

Just How Accurate Can Sketching Be?

We left our discussion of sketching-plus-iterative-methods in the previous section on a positive note, but one lingering question remains. We stated that iterative sketching (and sketch-and-precondition) converge at an exponential rate. But our computers store numbers to only so much precision; in practice, the accuracy of an iterative method has to saturate at some point.

An (iterative) least-squares solver is said to be forward stable if, when run for a sufficient number $q$ of iterations, the final accuracy $\|x_q - x_\star\|$ is comparable to the accuracy of a standard direct method for the least-squares problem, like MATLAB's \ command or Python's scipy.linalg.lstsq. Forward stability is not about speed or rate of convergence but about the maximum achievable accuracy.

The stability of sketch-and-precondition was studied in a recent paper by Meier, Nakatsukasa, Townsend, and Webb. They demonstrated that, with the initial iterate $x_0 = 0$, sketch-and-precondition is not forward stable. The maximum achievable accuracy was worse than standard solvers by orders of magnitude! Maybe sketching doesn't work after all?

Fortunately, there is good news:

  • The iterative sketching method is provably forward stable. This result is shown in my newly released paper; check it out if you're interested!
  • If we use the sketch-and-solve solution as the initial iterate $x_0 = \hat{x}$ for sketch-and-precondition, then sketch-and-precondition appears to be forward stable in practice; a minimal code sketch of this warm start follows this list. No theoretical analysis supporting this finding is known at present. (For those interested, neither iterative sketching nor sketch-and-precondition is backward stable, which is a stronger stability guarantee than forward stability. Fortunately, forward stability is a perfectly adequate stability guarantee for many, but not all, applications.)
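Continuing the hypothetical lsqr-based sketch from above, warm starting amounts to passing the sketch-and-solve solution as the initial iterate:

% Warm-started sketch-and-precondition: supply x0 = xhat to lsqr
xhat = (S*A) \ (S*b);                     % sketch-and-solve solution
x = lsqr(A, b, 1e-12, 100, R, [], xhat);  % initial iterate is the seventh argument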

These conclusions are pretty nuanced. To see what's going on, it can be helpful to look at a graph. (This is for another randomly generated least-squares problem of the same size, with condition number $10^{10}$ and residual norm $10^{-6}$.)

[Figure: forward error versus iteration number for sketch-and-solve, sketch-and-precondition with $x_0 = 0$, sketch-and-precondition with $x_0 = \hat{x}$, iterative sketching, and the direct method.]

The performance of the different methods can be summarized as follows: sketch-and-solve can have very poor forward error. Sketch-and-precondition with the zero initialization $x_0 = 0$ is better, but still much worse than the direct method. Iterative sketching and sketch-and-precondition with $x_0 = \hat{x}$ fare much better, eventually achieving an accuracy comparable to the direct method.

Put more simply, appropriately implemented, sketching works after all!

Conclusion

Sketching is a computational tool, just like the fast Fourier transform or the randomized SVD. Sketching can be used effectively to solve some problems. But, like any computational tool, sketching is not a silver bullet. Sketching allows you to dimensionality-reduce matrices and vectors, but it distorts them by an appreciable amount. Whether or not this distortion is something you can live with depends on your problem (how much accuracy do you need?) and how you use the sketch (sketch-and-solve or with an iterative method).


Footnotes

  1. As I've described in a previous post, it's generally inadvisable to solve least-squares problems using the normal equations. Here, we're just using the normal equations as a conceptual tool to derive an algorithm for solving the least-squares problem.
  2. The name iterative sketching is for historical reasons. Original versions of the procedure involved taking a fresh sketch SiA=QiRiS_iA = Q_iR_i at every iteration. Later, it was realized that a single sketch SASA suffices, albeit with a slower convergence rate. Typically, only having to sketch and QR factorize once is worth the slower convergence rate. If the distortion is small enough, this method converges at an exponential rate, yielding a high-accuracy least squares solution after a few iterations.