%!TEX root = thesis.tex
%\singlespace
%\bnote{turn off single space}

In today's online world, personal information is distributed among many services.  Private details such as health records, bank accounts, and private relationships are stored online.  A service storing sensitive details should authenticate a user's identity before granting access to resources.  The standard mechanism for authenticating identity is a password  shared between a user and the service.

Passwords are easy to deploy, update, and revoke.  However, passwords have a significant weakness: ideally, a password would consist of random characters to make guessing infeasible, but there is a strong tradeoff between password strength and memorability~\cite{yan2004password,weir2010testing}.
Large-scale system compromises have revealed large files of hashed passwords, allowing attackers to perform brute-force guessing attacks against passwords~\cite{passwordProject}.  There is strong evidence that the average user's password can be guessed by a determined attacker~\cite{weir2010testing}.  The \emph{entropy}~(uncertainty) of authentication information is critical.

There are two natural alternatives to passwords, something the user \emph{has} or something the user \emph{is}~\cite{kim2011method}.
We refer to either of these alternatives as a \emph{source}.  While many sources have higher entropy than passwords, they present a new problem: sources instantiated from physical phenomena are often \emph{noisy}~\cite{daugman2004,monrose2002password,pappu2002physical,tuyls2006puf}.
  That is, repeated readings from the same physical source are close (according to some distance metric) but not identical.
  
The classic way to use a noisy source for authentication is to take an initial reading of the source and store this reading as a template.  Subsequent readings are accepted if they are close enough to the template.  This approach has two significant weaknesses: 1) the stored template can be stolen and used to impersonate the user~\cite{galbally2012iriscode}, and 2) access reduces to a binary decision that a compromised matcher can be forced to output accept~\cite{ratha2003biometrics}.  An alternative is to directly derive keys from noisy sources.  However, when trying to derive a stable and consistent key, noise becomes a substantial problem.
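The template approach can be sketched in a few lines (a toy illustration; the bitstring encoding, function names, and threshold are our own choices):

```python
def hamming(a: str, b: str) -> int:
    """Number of positions in which two equal-length bitstrings differ."""
    return sum(x != y for x, y in zip(a, b))

def enroll(w: str) -> str:
    """Store the initial reading verbatim as the template."""
    return w

def authenticate(template: str, w_prime: str, t: int) -> bool:
    """Accept iff the fresh reading is within distance t of the template."""
    return hamming(template, w_prime) <= t
```

Both weaknesses are visible here: the template is $w$ stored in the clear, and access reduces to a single Boolean that a compromised matcher can be forced to return.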

Dodis, Ostrovsky, Reyzin, and Smith~\cite{DBLP:journals/siamcomp/DodisORS08} designed fuzzy extractors to derive keys from noisy sources.  Let $w$ represent an initial reading of the source and $w'$ a nearby subsequent reading.  A fuzzy extractor consists of two algorithms: Generate~(\gen) takes $w$ as input and produces a key $\key$ and some helper information $p$; Reproduce~(\rep) takes a nearby reading $w'$ and the helper information $p$.  If $\dis(w, w')\le t$, then \rep should reproduce the same $\key$ that \gen produced.  History has shown that stored authentication information is often compromised, so $\key$ should be cryptographically strong even if an attacker knows the helper data $p$.

Bennett, Brassard, and Robert identified two crucial tasks for deriving keys from noisy data~\cite{bennett1988privacy}.\footnote{Bennett, Brassard, and Robert consider an interactive version of the problem.  We discuss their setting in \secref{sec:interactive setting}.} The first, information reconciliation, removes the errors in $w'$; the second, privacy amplification, converts $w$ to a uniform value.  Traditionally, a fuzzy extractor uses two separate algorithms to accomplish these tasks: a secure sketch~\cite{DBLP:journals/siamcomp/DodisORS08} performs information reconciliation, and a randomness extractor~\cite{nisan1993randomness} performs privacy amplification.  We call a fuzzy extractor that separates information reconciliation and privacy amplification the \emph{sketch-and-extract} construction.  In this work, we concentrate on fuzzy extractors and secure sketches.\footnote{Randomness extractors have matching upper and lower bounds on the security loss: for every extra two bits of output key, they lose one bit of security.}  A secure sketch consists of two algorithms: $\sketch$ takes $w$ and produces a public value $ss$, and $\rec$ takes a nearby $w'$ and $ss$ to recover $w$.  The goal of a secure sketch is to ensure that $w$ retains high entropy conditioned on $ss$.

\paragraph{Limitations of Standard Techniques}
Fuzzy extractors and secure sketches must contain some information about the initial reading $w$ in order to accept nearby $w'$.  We call a point $w'$ accepting if it is within distance $t$ of the original reading $w$.  A larger $t$ means that more $w'$ are accepting.  For a fuzzy extractor, the adversary can use $\rep$ on accepting $w'$ to produce $\key$. (For a secure sketch, the adversary can use $\rec$ on accepting $w'$ to obtain $w$.)  This means if an adversary can find an accepting $w'$ with noticeable probability, they can learn $\key$ with noticeable probability and break security.  Key derivation becomes more difficult as more points are accepting.  This creates a tension between the length of $\key$ and the error tolerance $t$.  

This tension is quite strong for secure sketches.
Secure sketches are closely linked to error-correcting codes.\footnote{We provide a limited introduction to error-correcting codes in \secref{sec:error correcting codes}.}  The syndrome of a linear code is used to compute where errors occurred in transmission.  The syndrome can also serve as a secure sketch (\consref{cons:syndrome}).  The entropy of $w$ conditioned on this secure sketch is at least the starting entropy minus the length of the syndrome.  Standard analysis uses this lower bound as the remaining entropy of $w$.
The length of a syndrome increases as the error tolerance $t$ increases.  This means the lower bound on the remaining entropy of $w$ decreases as $t$ increases.\footnote{If a perfect error-correcting code is used, there are distributions with a matching upper bound on the remaining entropy.  That is, there are distributions where the standard analysis is tight.}  
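As a toy instance of the syndrome construction (our choice of parameters: the binary $[7,4]$ Hamming code, which corrects $t=1$ error; variable names are ours):

```python
# Parity-check matrix H of the [7,4] Hamming code: column i is the binary
# expansion of i+1, so the syndrome of a single error at position i is i+1.
H = [[(i + 1) >> k & 1 for i in range(7)] for k in range(3)]

def syndrome(v):
    """Syndrome H*v over GF(2), packed into a 3-bit integer."""
    s = 0
    for k, row in enumerate(H):
        s |= (sum(h * x for h, x in zip(row, v)) % 2) << k
    return s

def sketch(w):
    """SS(w): publish the syndrome of w (3 bits here)."""
    return syndrome(w)

def rec(w_prime, ss):
    """Rec(w', ss): the syndrome of the error e = w' - w equals
    syndrome(w') XOR ss; for this code it names the error position."""
    e = syndrome(w_prime) ^ ss
    w = list(w_prime)
    if e:                 # e == 0 means no error occurred
        w[e - 1] ^= 1     # flip the erroneous bit
    return w
```

Publishing the 3-bit syndrome reduces the conditional entropy of $w$ by at most 3 bits (a uniform 7-bit $w$ retains at least $7-3=4$ bits), and correcting more errors requires a longer syndrome, illustrating the tradeoff above.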
 
This is not a limitation of this particular secure sketch.  Dodis et al.~show that secure sketches are linked to the best error-correcting code containing points of $w$~\cite[Appendix C]{DBLP:journals/siamcomp/DodisORS08}.  Upper bounds on the size of error-correcting codes translate to lower bounds on the entropy loss of secure sketches.  Error-correcting codes have a long and rich history, with many bounds on the best codes.  All of these bounds translate into limitations on the best secure sketches.  Most fuzzy extractors use secure sketches for error correction (we discuss some exceptions in \secref{ssec:info theory techniques}).  Fuzzy extractors that use secure sketches inherit these bounds.

%As stated above, constructing a secure sketch is equivalent to finding a code that corrects $t$ errors~(that contains points of $w$).
Standard fuzzy extractors and secure sketches take as parameters only the entropy of $w$ and the number of errors to be corrected.  However, secure sketches are connected to the best code containing \emph{points of $w$}, which varies widely across sources.  If all points of $w$ are far apart, then correcting $t$ errors is easy.  Without information about the distribution of $w$, standard secure sketches must work for all distributions of entropy $m$ and error tolerance $t$.  The worst-case distribution has all points close together.  For such a distribution, to recover the original reading $w$ from a subsequent reading $w'$, the value $ss$ must disambiguate which point within distance $t$ of $w'$ was sketched.  For this distribution, a secure sketch must decrease entropy by the logarithm of the number of points within distance $t$ of any $w'$.  Thus, if a secure sketch provides a guarantee for the worst distribution, the bound on entropy loss is proportional to this quantity (in most settings, the number of points within distance $t$ is exponential in $t$).

Losses due to secure sketches~(and the resulting fuzzy extractors) prevent key derivation from many practical sources.  For many sources, there are no known fuzzy extractors that provide meaningful security.
As an example, the human iris is thought to be the strongest biometric~\cite{daugman2004} and current fuzzy extractors provide no guarantee about the strength of a key derived from the human iris~\cite[Section 5]{blanton2009biometric}.  


\section{Overview of Contributions}
In this dissertation, we improve key derivation from noisy sources.  Our improvements derive from three lessons about how to construct fuzzy extractors.  We organize this dissertation around these lessons.  We list the lessons below.  Under each lesson we list our major technical results that serve as supporting evidence.
\begin{itemize}
\item \textbf{Incorporating structure of a noisy distribution} Traditional fuzzy extractors consider the worst-case distribution with entropy $m$ for a desired error tolerance $t$.  We often know more about the structure of physical sources.  It may be possible to avoid the losses of traditional approaches by constructing fuzzy extractors whose security analysis uses the structure of a physical source beyond its entropy.  This motivates us to modify the definition of fuzzy extractors to work for a limited family of distributions rather than all distributions with entropy $m$. We describe a precise measure of a noisy distribution's suitability for key derivation called \emph{fuzzy min-entropy}.  Fuzzy min-entropy is a necessary condition for key derivation (\propref{prop:fuzz necessary}).
\begin{itemize}
\item \thref{thm:layered hashing}: \emph{Fuzzy min-entropy} is sufficient for security if a distribution is known exactly.  This motivates fuzzy min-entropy as the right measure of a distribution's suitability.  Furthermore, it shows that precise knowledge of a source's distribution allows key derivation.
\item Theorems~\ref{thm:imposs sketch} and \ref{thm:imposs fuzz ext}: Unfortunately, it is imprudent to assume that high-entropy distributions are known precisely.  This uncertainty is handled by providing security for a family of distributions.  We show that there are families of distributions~(where each distribution has fuzzy min-entropy) for which no information-theoretic secure sketch or fuzzy extractor can provide meaningful security for most members of the family.  This shows that uncertainty about a source's distribution comes at a cost to security.
\end{itemize}
\item \textbf{Look beyond sketch-then-extract} Secure sketches are subject to considerably stronger negative results than fuzzy extractors.  We provide additional negative results for computationally secure versions of secure sketches.  We then construct improved fuzzy extractors that do not use secure sketches.
\begin{itemize}
\item \corref{cor:rec yields sketch}: Computational definitions of secure sketches are subject to upper bounds on remaining entropy.
 If computational secure sketches are defined using pseudoentropy, they are subject to almost the same bounds as information-theoretic secure sketches.  
\item \consref{cons:info theoretic}: For many practical sources, reliability demands error tolerance so high that the logarithm of the number of tolerated error patterns exceeds the starting entropy of the source.  We call this condition \emph{more errors than entropy}.  Previous approaches have provided no security for such distributions.  One cannot provide security for all such sources; restriction to a limited class of distributions is necessary (discussion in \secref{sec:no sketch}).  We construct the first fuzzy extractor secure for large classes of distributions with more errors than entropy.  This construction does not use a secure sketch.
\end{itemize}

\item \textbf{Leverage Computational Security} Fuzzy extractors were originally defined with information-theoretic security due to the use of information-theoretic tools.  However, there is no compelling need for information-theoretic security.  Fuzzy extractors can be improved by providing only computational security.  We provide computational constructions with new features.  All of our constructions are for the Hamming metric (the number of symbols that differ between strings $w$ and $w'$).
\begin{itemize}
\item \consref{cons:informal construction}: A computational fuzzy extractor whose key is as long as the input entropy.
\item \consref{cons:sampling}: A computational fuzzy extractor that allows a source to be securely enrolled across multiple services.  This is known as a reusable fuzzy extractor (see \defref{def:outsider fuzz ext}).\footnote{The work of \cite{Boyen2004} contains some limited positive results on reusable fuzzy extractors.  We discuss these in \secref{sec:how to get reusable}.}
\item \consref{cons:first construction}: A computational fuzzy extractor that improves on the class of sources and error tolerance of the previous construction.  These improvements come at a cost of a large symbol size in $w$.
\end{itemize}
\end{itemize}


\subsection{Organization}
The results in this dissertation are drawn from three works~\cite{fuller2013computational,canetti2014key,fuller2014when}.  
We cover preliminaries and common notation in \chapref{chap:preliminaries}.  We discuss key derivation from noisy sources and fuzzy extractors in \chapref{chap:related work}.  We organize this dissertation by the lessons: incorporating structure of a noisy source~(\secref{sec:measure strength}, technical results in \chapref{chap:info theory}), moving away from sketch-then-extract~(\secref{sec:no sketch}, technical results in \chapref{chap:no sketch}), and providing computational security~(\secref{sec:comp security}, technical results in Chapters~\ref{chap:comp fuzz} and~\ref{chap:more errors than ent}).

\section{Incorporating Structure of a Noisy Distribution}
\label{sec:measure strength}

The goal of this section is to more precisely characterize the quality of a noisy distribution for key derivation.  We begin by introducing a new notion that describes a noisy distribution's suitability for key derivation.  The technical results described in this section can be found in \chapref{chap:info theory}.
%The security requirement for fuzzy extractors is that $\key$ is uniform even to a (computationally unbounded) adversary who has observed $p$.   This requirement is  harder to satisfy as the allowed error tolerance $t$ increases, because it becomes easier for the adversary to guess $\key$ by guessing a $w'$ within distance $t$ of $w$ and running $\rep(w',p)$.


\paragraph{Fuzzy Min-Entropy}
Usually, fuzzy extractors only take as parameters the entropy $m$ of a source and desired error tolerance $t$.  However, this ignores crucial structural information about the distribution $W$.  The number and weight of points contained in neighborhoods of $W$ are crucial for key derivation.
We introduce a new entropy notion that combines entropy and error tolerance into a single measure.  It measures a noisy distribution's suitability for key derivation.  

Consider an adversary that tries to guess values $w'$ close to the original reading $w$~(without considering the helper string).  If an adversary is able to guess some $w'$ within distance $t$ of the original reading $w$, they can subvert the security of $\key$ by running $\rep$.  To have the maximum chance that $w'$ is within distance $t$ of $w$, the adversary would want to maximize the total probability mass of $W$ within the ball $B_t(w')$ of radius $t$ around $w'$.
We  therefore define \emph{fuzzy min-entropy} \[\Hfuzz(W) \eqdef -\log \max_{w'} \Pr[W\in B_t(w')].\]  Observe that this quantity can be bounded in terms of min-entropy: $\Hoo(W) \ge \Hfuzz(W) \ge \Hoo(W)-\log |B_t|$.
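For intuition, $\Hfuzz$ can be computed by brute force on toy distributions over the binary Hamming metric (an illustrative sketch; the function name and exhaustive search are ours):

```python
from itertools import product
from math import log2

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def h_fuzz(dist, n, t):
    """-log2 max_{w'} Pr[W in B_t(w')], computed by trying every
    candidate center w' in {0,1}^n (exponential in n; toy sizes only)."""
    heaviest_ball = max(
        sum(p for w, p in dist.items() if hamming(w, center) <= t)
        for center in product((0, 1), repeat=n)
    )
    return -log2(heaviest_ball)
```

For example, with $t=1$, a distribution on two strings at distance $7$ has $\Hfuzz = \Hoo = 1$, while a distribution on two strings at distance $1$ has $\Hfuzz = 0$ despite $\Hoo = 1$: a single ball of radius $1$ captures all of its mass.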

Super-logarithmic fuzzy min-entropy  is \emph{necessary} for nontrivial key extraction (\propref{prop:fuzz necessary}). 
However, existing constructions do not measure their security in terms of fuzzy min-entropy; instead, their security is shown to be  $\Hoo(W)$ minus some loss that is at least $\log |B_t|$ due to error-tolerance. Since $\Hoo(W)-\log |B_t| \le \Hfuzz(W)$, it is natural to ask whether this loss is necessary. This question is particularly relevant when the gap between the two sides of the inequality is high.  As an example, iris scans appear to have significant $\Hfuzz(W)$ (because iris scans for different people appear to be well-spread in the metric space~\cite{daugman2006probing}) but negative $\Hoo(W) -\log |B_t|$ \cite[Section 5]{blanton2009biometric}.\footnote{When $\Hoo(W) -\log |B_t|$ is negative we say a source has more errors than entropy.  We discuss this condition further in \secref{sec:no sketch}.}  We therefore ask: \emph{is fuzzy min-entropy sufficient for fuzzy extraction?} There is evidence that it may be when the security requirement is computational rather than information-theoretic---see \secref{sec:related settings}. 



\paragraph{Tight Characterization for the Case of a Known Distribution}
We show that for every source $W$ with super-logarithmic $\Hfuzz(W)$, it is possible to construct a fuzzy extractor with a super-logarithmic length $\key$ (\corref{cor:extension to fuzz ext}). We thus show that $\Hfuzz(W)$ is a necessary and sufficient condition for building a fuzzy extractor for a \emph{known distribution} $W$.  It is important to emphasize that these constructions incorporate the knowledge of the complete distribution of $W$ (and, in particular, they are not polynomial-time).

A number of previous works in this known-distribution setting have provided efficient algorithms and
tight bounds for specific distributions---generally the uniform distribution or
i.i.d. sequences (for example, \cite{JW99,LT03,DBLP:conf/eccv/TuylsG04,hao2006combining,DBLP:journals/corr/abs-1112-5630,IgnatenkoW2012}). 
Our characterization may be seen as unifying previous work, and justifies using $\Hfuzz(W)$ as the measure of the quality of a noisy distribution,  rather than cruder measures such as $\Hoo(W)-\log |B_t|$.


\paragraph{Impossibility of Fuzzy Extractors for Families of Distributions}
Assuming full knowledge of a distribution is often unrealistic. Indeed, high-entropy distributions can never be fully observed directly and must therefore be modeled. It is imprudent to assume that the designer's model of a distribution is completely accurate---the adversary, with greater resources, would likely be able to build a better model. Therefore, fuzzy extractor designs cannot usually be tailored to one  particular source. Existing designs work for a family of sources (for example, all sources of min-entropy at least $m$ with at most $t$ errors). Thus, the design is fixed before the distribution is fully known, and the adversary may know more about the distribution than the designer of the fuzzy extractor.

We show that this extra adversarial knowledge can be devastating  (\thref{thm:imposs fuzz ext}). 
Specifically, we describe a family of distributions $\mathcal{W}$ and show that not even a 2-bit fuzzy extractor can be secure for most distributions in  $\mathcal{W}$.  We emphasize that each distribution $W\in \mathcal{W}$ has super-logarithmic fuzzy min-entropy---in fact, $\Hfuzz(W)=\Hoo(W)$, because all points in $W$ are distance at least $t$ apart. This result shows that distributional uncertainty is a real obstacle to key derivation from noisy sources.  Our proof relies on high dimensionality of $W$ and on perfect correctness of the $\rep$ procedure.

\paragraph{Stronger Results for Secure Sketches}
As described above, fuzzy extractors often use secure sketches to perform information reconciliation~(mapping $w'$ back to $w$).  

We show comparable, but stronger, results for secure sketches.  Namely, we show in \corref{cor:extension to fuzz ext} that secure sketches are possible if the distribution $W$ is precisely known. (In fact, we obtain our fuzzy extractors for the case of a known distribution from this result by applying a randomness extractor.) 

On the other hand, there is a family of sources with super-logarithmic $\Hfuzz(W)=\Hoo(W)$ for which no secure sketch correcting even a few errors is possible (\thref{thm:imposs sketch}). The impossibility result applies even when $\rec$ is allowed to be incorrect with probability up to $1/4$ (in contrast to our fuzzy extractor impossibility result, which requires perfect correctness).

\subsection{Techniques}
\label{ssec:info theory techniques}

\paragraph{Techniques for Positive Results for Known Distributions} We now explain how to construct a secure sketch for an arbitrary known distribution $W$.  We begin with distributions in which all points in the support have the same probability (so-called ``flat'' distributions).  Consider some subsequent reading $w'$. To achieve correctness, the sketch algorithm must disambiguate which point $w\in W$ within distance $t$ of $w'$ was sketched. Disambiguating multiple points can be accomplished by universal hashing, as long as the size of the hash output space is slightly greater than the number of possible points. Thus, our sketch is computed via a universal hash of $w$. To determine the length of that sketch, consider the heaviest (according to $W$) ball of radius $t$. Because the distribution is flat, it is also the ball with the most points of nonzero probability. Thus, the length of the sketch needs to be slightly greater than the logarithm of the number of nonzero-probability points in that ball. Since $\Hfuzz(W)$ is determined by the weight of that ball, the number of points cannot be too high and there will be entropy left after the sketch is published.
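The hashing step can be sketched as follows (a toy Python rendering of the idea, not the construction's exact parameters; we instantiate the universal family with a random binary matrix, a standard choice):

```python
import random
from itertools import combinations

def mat_vec(A, w):
    """Multiply a binary matrix by a binary vector over GF(2)."""
    return tuple(sum(a * x for a, x in zip(row, w)) % 2 for row in A)

def sketch(w, out_len, rng):
    """Hash w with h_A(w) = A*w mod 2 for random A; publish (A, hash).
    {h_A} is a universal family, so out_len slightly above log2 of the
    number of support points per ball suffices to disambiguate w.h.p."""
    A = [[rng.randrange(2) for _ in w] for _ in range(out_len)]
    return A, mat_vec(A, w)

def rec(w_prime, ss, t):
    """Search B_t(w') in order of distance for a point whose hash matches."""
    A, hw = ss
    n = len(w_prime)
    for r in range(t + 1):
        for idxs in combinations(range(n), r):
            cand = list(w_prime)
            for i in idxs:
                cand[i] ^= 1
            if mat_vec(A, cand) == hw:
                return cand
    return None
```

For a flat distribution, publishing the hash costs at most \texttt{out\_len} bits of entropy, matching the analysis above: the sketch length need only slightly exceed the logarithm of the number of support points in the heaviest ball.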

For an arbitrary distribution, we cannot afford to disambiguate points in the ball with the greatest number of points, because there could be too many low-probability points in a single ball despite a high $\Hfuzz(W)$.  We solve this problem by splitting the arbitrary distribution into a number of nearly flat distributions we call ``levels.''  We then write down, as part of the sketch, the level of the original reading $w$ and apply the above construction considering only points in that level.  We call this construction \emph{leveled hashing}.%: it separates the distribution $W$ into levels that are nearly flat and then hashes to disambiguate the most likely ball at that level.

\paragraph{Techniques for Negative Results for Distribution Families}
We construct a family of distributions $\mathcal{W}$ and prove impossibility for a uniformly random $W\in \mathcal{W}$~(instead of proving impossibility for a worst-case $W$).
We start by observing the following asymmetry: $\gen$  sees only the sample $w$ (obtained via $W\leftarrow \mathcal{W}$ and $w\leftarrow W$), while
the adversary knows $W$.   To exploit the asymmetry, we construct $\mathcal{W}$ so that conditioning on the knowledge of $W$ reduces the distribution to a single affine line, but conditioning on $w$ leaves the rest of the distribution uniform on a large fraction of the entire space.

Then we show how the adversary can exploit the knowledge of the affine line to reduce the uncertainty about $w$ (in the secure sketch case) or $\key$ (in the fuzzy extractor case). 
In the secure sketch case, $ss$ can be used to find fixed points of $\rec(\cdot, ss)$ which, by the correctness requirement of the sketch, must be separated by minimum distance $t$. This means there cannot be too many of them, so few lie on an average line, permitting the adversary to guess one easily.

In the fuzzy extractor case, the nonsecret value $p$ partitions the metric space into regions that produce a consistent value under $\rep$ (preimages of each $\key$ under $\rep(\cdot, p)$).  For each of these regions, the adversary knows that possible $w$ lie $t$-far from the boundary of the region.  However, in the Hamming space, the vast majority of points lie near the boundary (this follows by combining the isoperimetric inequality~\cite{harper1966optimal} showing that the ball has the smallest boundary and Hoeffding's inequality~\cite{hoeffding1963probability} for bounding the volume that is $t$-away from this boundary).  This allows the adversary to rule out so many possible $w$ that, combined with the adversarial knowledge of the affine line, many regions become empty, leaving $\key$ far from uniform.

The result for fuzzy extractors is delicate.  It uses the fact that $p$ partitions the space into nonoverlapping regions, which is implied by perfect correctness.  Extending this result to imperfect correctness seems challenging and is an interesting open problem. It also uses the fact that there are few points far from the boundary of every region, which is implied by the geometry of  the high-dimensional Hamming space.  This fact seems crucial: in contrast, in low-dimensional Euclidean space, which does not have this property, a single fuzzy extractor can work for any distribution with sufficient $\Hfuzz$. (Such a construction would use quantization or tiling, similar to, for example, \cite{CK03,LT03,CZC04,LC06,BDHTV10,VTOSS10}.  Each sample from $W$ would map to the ``tile'' containing it, from which the output key would be extracted. A randomly chosen quantizer would have the property that few samples lie near the boundary, giving almost-perfect correctness; if perfect correctness is desired, we can give up on security for those rare samples and simply use a special value of $p$ to indicate that one of them was the input.)
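In one-dimensional Euclidean space the parenthetical quantization idea is easy to make concrete (a toy sketch; the cell width $4t$ and uniform offset are our own choices):

```python
import random

def gen(w, t, rng):
    """Tile the real line into cells of width 4t at a random offset p;
    the key is the index of the cell containing w, and p is public."""
    p = rng.uniform(0, 4 * t)
    return int((w - p) // (4 * t)), p

def rep(w_prime, p, t):
    """Map the nearby reading to the index of its cell."""
    return int((w_prime - p) // (4 * t))
```

Readings $w$ and $w'$ with $|w - w'| \le t$ land in the same cell unless a cell boundary falls between them, which happens with probability at most $t/(4t) = 1/4$ over the random offset, giving the almost-perfect correctness mentioned above (widening the cells drives this probability down further).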

\subsection{Related Settings}
\label{sec:related settings}
\paragraph{Other settings with close readings:  $\Hfuzz$ is sufficient}
The security definition of fuzzy extractors and secure sketches can be weakened to protect only against computationally bounded adversaries~\cite{fuller2013computational}.   In this computational setting, fuzzy extractors and secure sketches can be constructed for the family of all distributions $W$ with super-logarithmic $\Hfuzz$ by using virtual grey-box obfuscation for all circuits~\cite{BitanskyCKP14}. The construction places into $p$ the obfuscated program for testing proximity to $w$ and outputting the appropriate value if the test passes.  In addition to relying on strong assumptions for security (namely, the existence of semantically-secure multilinear maps), this construction is not of practical efficiency. Note that if this construction is used for a secure sketch, $W$ will remain unpredictable conditioned on $p$, but will not have pseudoentropy (see  \secref{sec:feas comp sec sketch} for details).

Furthermore, the functional definition of fuzzy extractors and secure sketches can be weakened to permit interaction between the party having $w$ and the party having $w'$ (we discuss this setting in \secref{sec:interactive setting}). Such a weakening is useful for secure remote authentication~\cite{Boyen05secureremote}. When both interaction and computational assumptions are allowed, secure two-party computation can produce a key that will be secure whenever the distribution $W$ has fuzzy min-entropy.  The two-party computation protocol needs to be secure without assuming authenticated channels; it can be built under the assumptions that collision-resistant hash functions and enhanced trapdoor permutations exist~\cite{DBLP:journals/joc/BarakCLPR11}.

\paragraph{Correlated rather than close readings}
A different model for the problem of key derivation from noisy sources does not explicitly consider the distance between $w$ and $w'$, but rather views $w$ and $w'$ as samples drawn from a correlated pair of random variables.   This model is considered in multiple works, including~\cite{wyner1975wire,csiszar1978broadcast,ahlswede1993common,maurer1993secret}; recent characterizations of when key derivation is possible in this model include \cite{renner2005simple} and \cite{tyagi2014bound}.  We discuss this model in \secref{sec:correlated variables}.

\section{Looking Beyond Sketch-then-Extract}
\label{sec:no sketch}
Secure sketches are subject to significantly stronger negative results in the information-theoretic setting than fuzzy extractors.  This is because secure sketches must precisely reproduce the original reading $w$, while fuzzy extractors only need to produce a consistent value.  In this section, we provide additional negative results on secure sketches, describe how to avoid secure sketches, and then describe a fuzzy extractor achieving a condition that has eluded secure sketches.  We describe technical results in \chapref{chap:no sketch}.

\paragraph{Computational secure sketches are also limited}

We ask whether negative results on secure sketches can be overcome by relaxing the definition to provide computational security~(in \secref{sec:impossCompSecSketch}).  Recall that a secure sketch produces a public value $ss$ used to reconstruct the original reading $w$. The traditional secrecy requirement is that $w$ have high min-entropy conditioned on $ss$.  This allows the fuzzy extractor of~\cite{DBLP:journals/siamcomp/DodisORS08} to form $\key$ by applying a randomness extractor~\cite{nisan1993randomness} to $w$, because randomness extractors produce random strings from strings with conditional min-entropy.

The most natural relaxation of the min-entropy requirement of the secure sketch is to require HILL entropy~\cite{DBLP:journals/siamcomp/HastadILL99}~(namely, that the distribution of $w$ conditioned on $ss$ be \emph{indistinguishable} from a high min-entropy distribution).  Under this definition, we could still use a randomness extractor to obtain $\key$ from $w$, because it would yield a pseudorandom key.  Unfortunately, it is unlikely that such a relaxation will yield fruitful results: we prove in Theorem~\ref{thm:impSketchArbitraryW} that the entropy loss of such secure sketches is subject to the same coding bounds as the ones that constrain information-theoretic secure sketches.  


Another possible relaxation is to require that the value $w$ is unpredictable conditioned on $ss$. This definition would also allow the use of a randomness extractor to get a pseudorandom key, although it would have to be a special extractor---one that has a reconstruction procedure (see \cite[Lemma 6]{DBLP:conf/eurocrypt/HsiaoLR07}).  We show a significantly weaker negative result for unpredictability entropy: we prove in \thref{thm:imp of unp entropy} that the unpredictability is at most the $\log$ of the size of the metric space minus the $\log$ of the volume of the ball of radius $t$.  For nearly uniform sources of $w$ over the Hamming metric, this bound matches the best information-theoretic secure sketches.  However, for lower-entropy sources this bound is not meaningful.  Indeed, the result of~\cite{BitanskyCKP14} can be seen as constructing unpredictability secure sketches for all distributions with fuzzy min-entropy.

%
%We ask whether it is possible to obtain longer keys by considering
%computational, rather than information theoretic, security.

\paragraph{Constructing Fuzzy Extractors without Sketches}
Our negative results arise because the $\rec$ function acts as an error-correcting code for points of indistinguishable distributions.  It is possible to avoid these negative results by outputting a fresh random value instead of recovering $w$.\footnote{If some efficient algorithm can invert this fresh value and recover $W$, the bounds of \corref{cor:rec yields sketch} and \thref{thm:imp of unp entropy} both apply.  This means that we need to consider constructions that are hard to invert~(either information-theoretically or computationally).}  Such an algorithm is called a fuzzy conductor~\cite{KanukurthiR09}.  Looking ahead, we construct fuzzy conductors with information-theoretic security, fuzzy conductors with computational security, and fuzzy extractors with computational security.  Our constructions exploit the structure of the physical source beyond its entropy.  We now describe a condition on practical sources that necessitates the use of some structure of a physical source beyond its entropy.
\paragraph{More errors than entropy}
Fuzzy extractors and secure sketches have an inherent tension between security and correctness guarantees.
Consider a distribution with starting entropy $m$ and desired error tolerance $t$.
If $t$ is high enough that there are $2^m$ points in a ball of radius $t$, then there exists a distribution of $w$ of min-entropy $m$  \emph{contained entirely in a single ball}.  This distribution has no fuzzy min-entropy and thus cannot be securely used for key derivation.
Thus, if the security guarantee of a given fuzzy extractor holds for \emph{any} source of a given min-entropy $m$ and the correctness guarantee holds for any $t$ errors, then $m$ must be greater than $\log |B_t|$.\footnote{Fuzzy min-entropy is also a necessary condition for the computational and interactive settings (\propref{prop:fuzz necessary}).  Thus, even in these relaxed settings, to achieve security for all sources of a given entropy $m$ and error level $t$, $m> \log |B_t|$.  This further motivates our first lesson to incorporate the structure of a distribution.}
If a source fails this condition, we say that it has \emph{more errors than entropy}.  Distributions with more errors than entropy may still have fuzzy min-entropy.
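For concreteness in the binary Hamming metric (parameters chosen purely for illustration): for strings of length $n$, $|B_t| = \sum_{i=0}^{t} \binom{n}{i}$, and for $t \le n/2$ the standard entropy bound gives
\[
\log |B_t| \;\le\; n\, h_2(t/n), \qquad h_2(x) = -x \log_2 x - (1-x)\log_2(1-x).
\]
For example, with $n = 128$ and $t = 32$ this yields $\log |B_t| \le 128 \cdot h_2(1/4) \approx 104$, so a binary source at this error rate has more errors than entropy unless its min-entropy is above roughly $100$ bits.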
\paragraph{A fuzzy extractor for more errors than entropy}
Current techniques for building secure sketches do not work for sources with more errors than entropy, because they lose at least $\log |B_t|$ bits of entropy regardless of the source. The negative results above show these limitations are unlikely to be overcome by secure sketches that retain pseudoentropy.

%Additionally, computational definitions of security suffer from similar problems~\cite[Corollary 3.8, Theorem 3.10]{fuller2013computational}. Thus, we take a different approach and do not attempt to recover $w$.


We provide the first construction of a fuzzy extractor that can be used for large classes of sources that have more errors than entropy (\consref{cons:info theoretic}).  Our construction works for Hamming errors for strings $w$ of length $\gamma$ over some alphabet $\mathcal{Z}$. As argued above, our construction cannot work for all sources of a given entropy.
%each construction comes with a constraint on the sources for which it is secure.
Our construction can correct a constant fraction of errors, but requires that a constant fraction of the symbols contribute fresh entropy, even conditioned on previous symbols~(\defref{def:partial source}). This type of source is a subset of all sources with fuzzy min-entropy.

Our construction reduces the alphabet size by hashing each input symbol (which comes from a large alphabet) into a much smaller set, so that the resulting hash value has lower entropy deficiency.
The intuition behind this approach is that it reduces the size of $B_t$ by reducing the alphabet size, but preserves a sufficient portion of the input entropy.  The resulting string no longer has more errors than entropy.
 We then apply a standard fuzzy extractor to the resulting string.
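A toy Python sketch of this alphabet-reduction step appears below. SHA-256, keyed by a fresh per-symbol seed, stands in for the hash family of the construction; the function name, seed handling, and the 4-bit output alphabet are all illustrative choices, not details of \consref{cons:info theoretic}.

```python
import hashlib

def reduce_alphabet(w, seeds, out_bits=4):
    """Hash each symbol of w (given as bytes) into a 2^out_bits alphabet.

    `seeds` holds one independent per-symbol seed; SHA-256 is an
    illustrative stand-in for the construction's hash family.
    """
    reduced = []
    for sym, seed in zip(w, seeds):
        digest = hashlib.sha256(seed + sym).digest()
        # keep out_bits bits of the digest as the small-alphabet symbol
        reduced.append(digest[0] % (1 << out_bits))
    return reduced
```

Because equal symbols hash to equal values under the same seed, the reduced versions of $w$ and $w'$ are within the same Hamming distance $t$ as the originals, so a standard fuzzy extractor can then be applied to the reduced string.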




\section{Moving to Computational Security}
\label{sec:comp security}
In the previous two sections, we showed that fuzzy extractors can be improved by providing security for families of distributions with additional structure~(instead of all distributions with a given entropy) and by giving up on sketch-then-extract.  In this section, we provide further improvements by moving from information-theoretic to \emph{computational} security.  We construct three computationally secure fuzzy extractors with novel properties.  The first construction uses random linear codes; the second and third use point obfuscation.  For this reason, we split their discussion between \chapref{chap:comp fuzz} and \chapref{chap:more errors than ent}, respectively.  All constructions in this section are for the Hamming metric.  We assume $w$ is a string of length $\gamma$ over some alphabet $\mathcal{Z}$.
\subsection{Minimizing entropy loss}

By considering this computational secrecy requirement, we construct the first fuzzy extractor (\consref{cons:informal construction}) in which $\key$ is as long as the entropy of the source $w$. Our construction uses the code-offset construction~\cite{JW99},\cite[Section 5]{DBLP:journals/siamcomp/DodisORS08} used in prior work, but with two crucial differences.  First, $\key$ is not extracted from $w$ as in the sketch-then-extract approach; rather, $w$ ``encrypts'' $\key$ in a way that is decryptable with knowledge of some close $w'$ (this idea is similar to the way the code-offset construction is presented in
\cite{JW99} as a ``fuzzy commitment'').
Second, the code used is a random linear code, which allows us to use the Learning with Errors~(LWE) assumption due to Regev~\cite{regev2005LWE,regevLWEsurvey} and derive a longer $\key$.

Specifically, we use the result of D\"{o}ttling and M\"{u}ller-Quade~\cite{dottling2012}, which shows the hardness of decoding random linear codes when the error vector comes from the uniform distribution, with each coordinate ranging over a small interval. This allows us to use $w$ as the error vector, assuming it is uniform.  We also use a result of Akavia, Goldwasser, and Vaikuntanathan~\cite{akavia2009}, which shows that LWE has many hardcore bits, allowing us to derive a long $\key$.
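The code-offset idea underlying this construction can be illustrated with a toy Python sketch. Here a repetition code over GF(2) stands in for the random linear code, and plain XOR masking stands in for the LWE-based ``encryption''; none of the parameter choices come from \consref{cons:informal construction}.

```python
import secrets

# Toy code-offset ("fuzzy commitment") sketch over GF(2).
# A repetition code stands in for the random linear code of the actual
# construction; all parameters are illustrative.

REP = 5  # repetition factor (illustrative)

def encode(bits):
    """Repetition-code encoder: repeat each key bit REP times."""
    return [b for b in bits for _ in range(REP)]

def decode(codeword):
    """Majority-vote decoder for the repetition code."""
    return [int(sum(codeword[i * REP:(i + 1) * REP]) > REP // 2)
            for i in range(len(codeword) // REP)]

def gen(w):
    """Enrollment: w XOR-masks a codeword of a fresh random key."""
    key = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    p = [c ^ wi for c, wi in zip(encode(key), w)]  # helper value
    return key, p

def rep(w_prime, p):
    """Reproduction: a close w' leaves a decodable noisy codeword."""
    noisy = [pi ^ wi for pi, wi in zip(p, w_prime)]  # codeword XOR error
    return decode(noisy)
```

With $\text{REP} = 5$ this toy code corrects any two flipped bits of $w'$; the actual construction instead decodes a random linear code, which is what limits its error tolerance to a logarithmic number of differences.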

Because we use a random linear code, our decoding is limited to reconciling a logarithmic number of differences.  Unfortunately, we cannot utilize the results that improve the decoding radius through the use of trapdoors (such as \cite{regev2005LWE}), because in a fuzzy extractor there is no place to securely store the trapdoor. If improved decoding algorithms are obtained for random linear codes, they will improve the error-tolerance of our construction.  Given the hardness of decoding random linear codes~\cite{berlekamp1978}, we do not expect significant improvement in the error-tolerance of our construction for general physical sources.

In \secref{sec:LWE block fixing sources}, we are able to relax the assumption that $w$ comes from the uniform distribution, and instead allow $w$ to come from a symbol-fixing source~\cite{KZ07} (each dimension is either uniform or fixed). This relaxation follows from a result about the hardness of LWE when samples have a fixed~(and adversarially known) error vector, which may be of independent interest~(\thref{thm:blockLWE}). 

\paragraph{Improving Error Tolerance}
\consref{cons:informal construction} only tolerates a logarithmic number of errors.  Most practical sources have substantially more errors.  Subsequent to our construction, Herder et al.~improved the error-tolerance of this construction for physical sources with an additional property~\cite{herder2014trapdoor}.  For some physical sources it is possible to obtain a confidence vector with the subsequent reading $w'$.  This confidence vector indicates how likely each symbol of $w'$ is to contain an error.  This confidence information can greatly improve the error tolerance of \consref{cons:informal construction} (from a logarithmic number of errors to a linear fraction).\footnote{Herder et al. base their construction on the learning parity with noise problem~\cite{blum2003noise}.  Their approach can easily be extended to larger fields and the learning with errors problem.}  Furthermore, Herder et al.~show that a ring oscillator physical unclonable function~\cite{suh2007physical} produces such confidence information.  If confidence information is not available the construction of Herder et al.~reduces to our construction with logarithmic error tolerance.  Finding other physical sources with similar confidence information is an open problem.

\subsection{Adding reusability}
\label{sec:how to get reusable}
A desirable security property of fuzzy extractors, introduced by Boyen \cite{Boyen2004}, is called reusability. This property is necessary if a user enrolls the same or correlated values multiple times. For example, if the source is a biometric reading, the user may enroll the same biometric with different organizations.  Each of them will get a slightly different enrollment reading $w_i$, and will run $\gen(w_i)$ to get $\key_i$ and a helper value $p_i$. Security for each $\key_i$ should hold even when an adversary is given all the values $p_1, \dots, p_q$ (and, in case some organizations turn out to be compromised or adversarial, a stronger security notion requires security for $\key_i$ even in the presence of $\key_j$ for $j\neq i$).  Many traditional fuzzy extractors are not reusable~\cite{Boyen2004,simoens2009privacy,blanton2012non,blanton2013analysis}.  The only previous construction of reusable fuzzy extractors \cite{Boyen2004} requires very particular relationships between $w_i$ values, which are unlikely to hold in any practical source.


\paragraph{A reusable fuzzy extractor against strong correlation}

%We switch to computational security to obtain constructions with additional features. Our second construction provides reusability (against computationally bounded adversaries).  
We construct a computational fuzzy extractor with strong reusability.  Security holds even if the multiple readings $w_i$ used in $\gen$ are \emph{arbitrarily correlated}, as long as each $w_i$ \emph{individually} comes from an allowed distribution.
The construction is secure for distributions where sampling of symbols produces a high entropy output, such as those with $k$-wise independence among symbols for super-logarithmic $k$.  We note that this construction also handles sources with more errors than entropy~(discussed in \secref{sec:no sketch}).
This construction requires the fraction of errors to be sub-constant; it also requires each symbol of the source to contribute fresh entropy.

\paragraph{Approach}
Our reusable construction is based on obfuscated digital lockers~\cite{canetti2008obfuscating}. Digital lockers output a secret value only when given the correct input to ``unlock'' the secret. An obfuscated digital locker does not provide information about the locked value or how to unlock it.  The main idea of the construction is to pick a random $\key$ and lock $\key$ in a digital locker that is unlocked by a random subset of the symbols of $w$. To tolerate errors in the input, this process is repeated several times, so that at least one digital locker can be unlocked using $w'$. We use obfuscation in a way that does not leak partial information; this is crucial to arguing reusability.
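A toy Python sketch of this ``sample-then-lock'' idea appears below. The hash-based locker is only a random-oracle-style stand-in for an obfuscated digital locker, and every parameter and helper name is illustrative rather than taken from the construction.

```python
import hashlib
import secrets

# Toy "sample-then-lock" sketch: lock a fresh random key under several
# random subsets of w's symbols. The hash-based locker is a random-oracle
# stand-in for an obfuscated digital locker; all parameters are illustrative.

def _lock(subseq, value, nonce):
    """XOR `value` with a hash-derived pad keyed by (nonce, subseq)."""
    pad = hashlib.sha256(nonce + subseq).digest()[:len(value)]
    return bytes(a ^ b for a, b in zip(pad, value))

def gen(w, num_lockers=50, subset_size=8):
    """Enrollment: lock one key under many random subsequences of w."""
    key = secrets.token_bytes(16)
    lockers = []
    for _ in range(num_lockers):
        idx = [secrets.randbelow(len(w)) for _ in range(subset_size)]
        nonce = secrets.token_bytes(16)
        subseq = bytes(w[i] for i in idx)
        lockers.append((idx, nonce, _lock(subseq, key, nonce)))
    # a check value lets Rep recognize when a locker opened correctly
    check = hashlib.sha256(b"check" + key).digest()
    return key, (lockers, check)

def rep(w_prime, p):
    """Try each locker; one whose sampled subset avoids all errors opens."""
    lockers, check = p
    for idx, nonce, ct in lockers:
        subseq = bytes(w_prime[i] for i in idx)
        candidate = _lock(subseq, ct, nonce)  # XOR the pad off again
        if hashlib.sha256(b"check" + candidate).digest() == check:
            return candidate
    return None
```

A locker opens only if its entire sampled subsequence is reproduced exactly, which is why the fraction of errors must be small: at least one of the sampled subsets must avoid every errored symbol.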



\subsection{Allowing correlated symbols}
Our final construction addresses weaknesses in the previous construction.  \consref{cons:first construction} removes the need for fresh entropy in the symbols and tolerates a constant fraction of errors, at the cost of requiring a large alphabet~(super-polynomial in the security parameter).
It is secure if symbols in $w$
each have individual super-logarithmic min-entropy, even if they are arbitrarily correlated. Moreover,
a constant fraction of symbols in $w$ may have little or no entropy, as long as knowledge of their values does not reduce the entropy of the high-entropy symbols too much (see \defref{def:block guessable}).  

\paragraph{Approach}
Our construction that allows correlated symbols tolerates more errors than the second because it uses digital lockers that are unlocked by single symbols of $w$. Since we do not assume that every symbol has high individual entropy, hiding all of $\key$ in every locker becomes too risky. Instead, we hide a single bit per locker. To tolerate errors, these bits come from an error-correcting code. To ensure an adversary who learns some bits does not learn anything useful about $\key$, we do not encode $\key$ in the error-correcting code, but rather extract $\key$ (using an information-theoretic~\cite{nisan1993randomness} or computational~\cite{krawczyk2010cryptographic} extractor) from the decoded string.
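A toy Python sketch of this bit-per-locker approach appears below. SHA-256-based lockers, a repetition code, and a hash of the decoded string stand in for the obfuscated lockers, the error-correcting code, and the extractor, respectively; all names and parameters are illustrative.

```python
import hashlib
import secrets

# Toy bit-per-locker sketch: each locker hides one bit of a codeword and
# is opened by a single symbol of w. SHA-256 lockers, a repetition code,
# and a hash in place of the extractor are all illustrative stand-ins.

REP = 3  # repetition factor (illustrative)

def gen(w):
    """Enroll: lock each codeword bit under the matching symbol of w."""
    r = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    codeword = [b for b in r for _ in range(REP)]
    p = []
    for sym, bit in zip(w, codeword):
        nonce = secrets.token_bytes(16)
        pad = hashlib.sha256(nonce + bytes([sym])).digest()
        # (tag detecting a correct symbol, codeword bit masked by the pad)
        p.append((nonce, pad[:2], bit ^ (pad[2] & 1)))
    key = hashlib.sha256(bytes(r)).digest()[:16]  # extractor stand-in
    return key, p

def rep(w_prime, p):
    """Open what we can, majority-decode each block, re-derive the key."""
    bits = []
    for sym, (nonce, tag, ct) in zip(w_prime, p):
        pad = hashlib.sha256(nonce + bytes([sym])).digest()
        bits.append(ct ^ (pad[2] & 1) if pad[:2] == tag else None)
    r = []
    for i in range(len(bits) // REP):
        block = [b for b in bits[i * REP:(i + 1) * REP] if b is not None]
        r.append(int(sum(block) > len(block) // 2))
    return hashlib.sha256(bytes(r)).digest()[:16]
```

An errored symbol simply fails to open its locker, and the repetition code absorbs the resulting erasure; this per-symbol unlocking is what lets the construction tolerate a constant fraction of errors.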


\paragraph{The Required Notion of Obfuscation} 
Constructions \ref{cons:sampling} and \ref{cons:first construction} use simulation secure obfuscation of digital lockers; however, we do not require full-fledged virtual black-box obfuscation~\cite{barak2001possibility}. Instead, we rely on the relaxed notion of \emph{virtual grey-box} obfuscation~\cite{bitansky2010strong}. We also require that the obfuscation remains secure even when several digital lockers of correlated points are composed.
Bitansky and Canetti constructed composable digital lockers with virtual grey-box security under particular number-theoretic assumptions~\cite{bitansky2010strong}.  Recent work of Brzuska and Mittelbach shows that if indistinguishability obfuscation exists then it is not possible to build composable virtual black-box digital lockers~\cite{brzuska2014indistinguishability}.  Thus, our use of virtual grey-box obfuscation is crucial.

\paragraph{Connection to General Obfuscation}
As described in \secref{sec:related settings}, fuzzy extractors for all sources with fuzzy min-entropy can be trivially constructed from virtual grey-box obfuscation for all circuits~\cite{BitanskyCKP14}.  The security of their construction is based on the strong assumption of {\em semantically secure graded encodings} \cite{PassTS13}.
The construction is based on multilinear encodings and is highly impractical.  Our constructions instead use obfuscated digital lockers, which are instantiable under significantly weaker assumptions and can be implemented quite efficiently.
Additionally, existing obfuscators for proximity point programs are not known to be composable and therefore do not yield a reusable fuzzy extractor.

