%!TEX root = thesis.tex

As described in the introduction, secure sketches are subject to considerably stronger negative results than fuzzy extractors.  In this chapter, we first show that computational versions of secure sketches are also subject to strong negative results.  We then show how to construct a fuzzy extractor (without using a secure sketch) that supports sources with more errors than entropy.

\section{Impossibility of Computational Secure Sketches}
\label{sec:impossCompSecSketch}
In this section, we consider whether it is possible to build a secure sketch that retains significantly more computational than information-theoretic entropy.  We consider two different notions of computational entropy, and for both of them show that the corresponding secure sketches are subject to upper bounds on the residual entropy. In particular, we show how to transform any sketch retaining HILL entropy into an information-theoretic sketch that retains a similar amount of min-entropy.  Thus, relaxing the security of sketches from information-theoretic to computational does not appear to help.

In conjunction with previous results on upper bounds for the information-theoretic entropy of a secure sketch, this motivates us to build fuzzy extractors that do not incorporate secure sketches.


\subsection{Bounds on Secure Sketches using HILL entropy}
\label{sec:imp HILL sketch}
HILL entropy is a commonly used computational notion of entropy (\defref{def:cond hill}).  Intuitively, HILL entropy is as good as average min-entropy for all computationally-bounded observers.  Thus, redefining secure sketches using HILL entropy is a  natural relaxation of the original information-theoretic definition; in particular, the sketch-and-extract construction in \lemref{lem:fuzzy ext construction} would yield pseudorandom outputs if the secure sketch ensured high HILL entropy.  
We will consider secure sketches that retain relaxed HILL entropy (\defref{def:relaxed hill}). 

\begin{definition}
\label{def:hill secure sketch}
 We say that $(\sketch, \rec)$ is a  \emph{HILL-entropy~$(\mathcal{M}, m, \tilde{m}, t)$ secure sketch} that is $(\epsilon,s_{sec})$-hard with error $\delta$ if it satisfies \defref{def:secure sketch}, with the security requirement replaced by $H^{\hillrlx}_{\epsilon, s_{sec}}(W|\sketch(W))\geq \tilde{m}$. 
 \end{definition}

Unfortunately, we will show below that such a secure sketch implies an error correcting code with approximately $2^{\tilde{m}}$ points that can correct $t$ random errors (see  \cite[Lemma C.1]{DBLP:journals/siamcomp/DodisORS08} for a similar bound on information-theoretic secure sketches). For the Hamming metric, our result essentially matches the bound on information-theoretic secure sketches of \cite[Proposition 8.2]{DBLP:journals/siamcomp/DodisORS08}.  In fact, we show that, for the Hamming metric, HILL-entropy secure sketches imply information-theoretic ones with similar parameters, and, therefore, the HILL relaxation gives no advantage. 

The intuition for building error-correcting codes from HILL-entropy secure sketches is as follows.  In order to have  $H^{\hillrlx}_{\epsilon, s_{sec}}(W|\sketch(W))\ge \tilde{m}$, there must be a distribution $X, Y$ such that $\Hav(X | Y)\geq \tilde{m}$ and $(X, Y)$ is computationally indistinguishable from $(W, \sketch(W))$.  Sample a sketch $s\leftarrow \sketch(W)$. We know that $\sketch$ followed by $\rec$ likely succeeds on $W|s$  (i.e., $\rec (w', s) = w$ with high probability for $w\leftarrow W|s$ and $w'\leftarrow B_t(w)$).
 Consider the following experiment: 1) sample $y\leftarrow Y$, 2) draw $x\leftarrow X|y$, and 3) draw $x'\leftarrow B_t(x)$. By indistinguishability, $\rec (x',y) = x$ with high probability.
 This means we can construct a large set $C$ from the support of $X|y$.  $C$ will be an error-correcting code and $\rec$ an efficient decoder.  We can then use standard arguments to turn this code into an information-theoretic sketch.

To make this intuition precise, we need an additional technical condition:  sampling a random neighbor of a point is efficient.
\begin{definition}
\label{def:neighborhood samplable}
We say a metric space $(\mathcal{M}, \dis)$ is $(s_{neigh}, t)$-\emph{neighborhood samplable} if there exists a randomized circuit $\neigh$ of size $s_{neigh}$ such that for all $t'\leq t$, $\neigh_{t'}(w)$ outputs a random point at distance $t'$ from $w$.
\end{definition}
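For the Hamming metric over a finite alphabet, such a sampler is easy to realize: choose $t'$ distinct coordinates and replace each with a uniformly random \emph{different} symbol.  A minimal Python sketch of this idea (the function name mirrors $\neigh$; it is illustrative, not part of the formal model):

```python
import random

def neigh(w, t_prime, alphabet):
    """Sample a uniformly random point at Hamming distance exactly t_prime from w.

    w is a list of symbols from `alphabet`; we pick t_prime distinct
    coordinates and flip each to a uniformly random *different* symbol.
    """
    if t_prime > len(w):
        raise ValueError("distance exceeds word length")
    w = list(w)                               # do not mutate the caller's word
    for i in random.sample(range(len(w)), t_prime):
        w[i] = random.choice([a for a in alphabet if a != w[i]])
    return w
```

Since the $t'$ positions and the replacement symbols are chosen uniformly, the output is uniform over the sphere of radius exactly $t'$ around $w$, matching the requirement of the definition.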

We use the definitions of maximal and average error Shannon codes (Definitions \ref{def:shannon-code} and \ref{def:average error code}).  Recall that when we use the term Shannon code, we mean a maximal error Shannon code.
A sketch that retains $\tilde{m}$-bits of relaxed HILL entropy implies a maximal error Shannon code with nearly $2^{\tilde{m}}$ points.
\begin{theorem}\label{thm:impSketchArbitraryW}
Let $(\mathcal{M}, \dis)$ be a metric space that is $(s_{neigh}, t)$-neighborhood samplable.  Let $(\sketch, \rec)$ be a HILL-entropy $(\mathcal{M}, m, \tilde{m}, t)$-secure sketch that is $(\epsilon, s_{sec})$-secure with error $\delta$.  Let $s_{rec}$ denote the size of the circuit that computes $\rec$.  If $s_{sec}\geq t(s_{neigh}+s_{rec})$,  then there exist a value $s$ and a set $\mathcal{C}$ with $|\mathcal{C}|\geq 2^{\tilde{m}-2}$  that is a $(t, 4(\epsilon+t\delta))$-Shannon code with recovery procedure $\rec(\cdot, s)$.
\end{theorem}
\begin{proof}
  Let $W$ be a distribution of min-entropy $m$.  Let $(X, Y)$ be a joint distribution such that $\Hav(X | Y)\geq \tilde{m}$ and
\[ 
\delta^{\mathcal{D}_{s_{sec}}}((W, \sketch(W)), (X, Y))\le \epsilon\, ,
\]  
where  $s_{sec} \geq t(s_{neigh}+s_{rec})$.  One such $(X, Y)$ must exist by the definition of relaxed HILL entropy. 
Define $D$ as:
\begin{enumerate}
\item Input $w\in\mathcal{M}, z \in\{0, 1\}^*, t$.
\item For all $1\leq t'\leq t$: 
\subitem  $w'\leftarrow \neigh_{t'}(w)$.
\subitem If $\rec(w', z) \neq  w$ output $0$.
\item Output $1$.
\end{enumerate}
 By correctness of the sketch $ \Pr[D(W, \sketch(W)) =1]\ge 1-t\delta$.  Since 
\[\delta^D((W, \sketch(W)), (X, Y))\le \epsilon,\] we know $\Pr[D(X, Y) = 1]\ge 1-\epsilon-t\delta$.  Let $X_y$ denote the random variable $X|Y=y$.  By Markov's inequality,  there exists a set $S_Y$ such that $\Pr[Y\in S_Y]\ge 1/2$ and for all $y\in S_Y$, $\Pr[ D(X_y, y) =1]\ge 1- 2(\epsilon + t\delta)$.  

Because $\Hav(X | Y)\geq \tilde{m}$, we know that $\expe_{y\leftarrow Y} \max_x \Pr[X_y=x]\leq 2^{-\tilde{m}}$.  Applying Markov's inequality to the random variable $\max_x \Pr[X_y=x]$, there exists a set $S'_Y$ such that $\Pr[y\in S'_Y]> 1/2$, and for all $y\in S'_Y$, $\Hoo(X_y)\ge \tilde{m}-1$ (we can use the strict version of Markov's inequality here, because the random variable $\max_x \Pr[X_y=x]$ is positive).  Fix one value $y \in S_Y\cap S'_Y$ (which exists because the sum of probabilities of $S_Y$ and $S'_Y$ is greater than 1).  
Thus, for all $t'$ such that $1\leq t'\leq t$, 
\[ \Pr_{x\leftarrow X_y}[x'\leftarrow \neigh_{t'}(x) \wedge \rec(x',y) = x]\ge  1-2(\epsilon+t\delta).\]  
Thus,  $X_y$ is a $(t, 2(\epsilon+t\delta))$-average error Shannon code with recovery procedure $\rec(\cdot,y)$ and at least $2^{\tilde{m}-1}$ points.  The statement of the theorem follows by application of \lemref{lem:averageToMaximalError}.
\end{proof}

\noindent
For the Hamming metric, any Shannon code (as defined in Definition~\ref{def:shannon-code}) can be converted into an information-theoretic secure sketch~(as described in \cite[Section 8.2]{DBLP:journals/siamcomp/DodisORS08} and references therein).  The idea is to use the code-offset construction and to convert worst-case errors to random errors by first randomizing the order of the symbols of $w$, via a randomly chosen permutation $\pi$ (which becomes part of the sketch and is applied to $w'$ during $\rec$). This result can be expressed in the following lemma (which is implicit in \cite[Section 8.2]{DBLP:journals/siamcomp/DodisORS08}).
\begin{lemma}
\label{lem:shannon to sketch}
For an alphabet $\mathcal{Z}$, let $\mathcal{C}$  be a $(t, \delta)$ Shannon code over $\mathcal{Z}^\gamma$.  Then there exists a $(\mathcal{Z}^\gamma, m, m-(\gamma\log|\mathcal{Z}|-\log |\mathcal{C}|), t)$ secure sketch with error $\delta$ for the Hamming metric on $\mathcal{Z}^\gamma$. 
\end{lemma}
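The code-offset construction with a random permutation can be sketched in a few lines of Python.  Here a binary repetition code with majority decoding stands in for the Shannon code $\mathcal{C}$ of the lemma (any code with an efficient decoder could be substituted); all names and parameters are illustrative:

```python
import random

# Toy instantiation of the code-offset sketch for the binary Hamming metric.
GAMMA = 15

def encode(bit):
    return [bit] * GAMMA                      # repetition code: two codewords

def decode(word):
    return int(sum(word) > GAMMA // 2)        # majority decoding

def sketch(w):
    pi = list(range(GAMMA))
    random.shuffle(pi)                        # random permutation, stored in s
    c = encode(random.randrange(2))           # random codeword of C
    offset = [w[pi[i]] ^ c[i] for i in range(GAMMA)]   # pi(w) XOR c
    return (pi, offset)

def rec(w_prime, s):
    pi, offset = s
    noisy = [w_prime[pi[i]] ^ offset[i] for i in range(GAMMA)]  # c XOR error
    c = encode(decode(noisy))                 # decode back to the codeword
    shifted = [c[i] ^ offset[i] for i in range(GAMMA)]          # = pi(w)
    w = [0] * GAMMA
    for i in range(GAMMA):
        w[pi[i]] = shifted[i]                 # undo the permutation
    return w
```

The permutation $\pi$ is stored as part of the sketch so that worst-case errors in $w'$ appear randomly placed from the decoder's point of view; the repetition decoder above happens to correct any pattern of fewer than $\gamma/2$ errors, so the randomization is not strictly needed in this toy case.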
Combining \thref{thm:impSketchArbitraryW} and \lemref{lem:shannon to sketch} gives us the negative result for the Hamming metric: a HILL-entropy secure sketch (for the uniform distribution) implies an information-theoretic one with similar parameters:
\begin{corollary}
\label{cor:rec yields sketch}
Let $\mathcal{Z}$ be an alphabet. Let $(\sketch', \rec')$ be an $(\epsilon,s_{sec})$-HILL-entropy $(\mathcal{Z}^\gamma, \gamma\log |\mathcal{Z}|, \tilde{m}, t)$-secure sketch with error $\delta$ for the Hamming metric over $\mathcal{Z}^\gamma$, with $\rec'$ of circuit size $s_{rec}$.
If $s_{sec}\geq t(s_{rec} + \gamma\log |\mathcal{Z}|)$, then there exists a   $(\mathcal{Z}^\gamma, \gamma\log |\mathcal{Z}|, \tilde{m}-2,t)$ (information-theoretic) secure sketch with error
$4(\epsilon+t\delta)$. 
\end{corollary}
\noindent
\textbf{Note:} In \corref{cor:rec yields sketch}, the resulting  $(\sketch, \rec)$ is not guaranteed to be efficient because the proof of \thref{thm:impSketchArbitraryW} is not constructive.

\corref{cor:rec yields sketch} extends to non-uniform distributions: if there exists a distribution whose HILL sketch retains $\tilde{m}$ bits of entropy, then for all distributions $W$, there is an information-theoretic sketch that retains $\Hoo(W) - (\gamma\log |\mathcal{Z}|-\tilde{m})-2$ bits of entropy.

\subsection{Bounds on Secure Sketches using Unpredictability Entropy}
\label{sec:imp unp sketch}
In the previous section, we showed that any sketch that retained HILL entropy could be transformed into an information theoretic sketch.  However, HILL entropy is a strong notion.  In this section, we therefore ask whether it is useful to consider a sketch that satisfies a minimal requirement: the value of the input is computationally hard to guess given the sketch.  We use the notion of relaxed unpredictability entropy (\defref{def:unp entropy}) which captures the notion of ``hard to guess.''

\begin{definition}
$(\sketch, \rec)$ is an \emph{unpredictability-entropy $(\mathcal{M}, m, \tilde{m}, t)$ secure sketch} that is $(\epsilon, s_{sec})$-hard with error $\delta$ if it satisfies \defref{def:secure sketch}, with the security requirement replaced by $H^{\unprlx}_{\epsilon, s_{sec}}(W| \sketch(W))\geq \tilde{m}$.
\end{definition}
Combining such a secure sketch with a reconstructive extractor yields a computational fuzzy extractor (\lemref{lem:extract from unp}).
The conditional unpredictability entropy $\tilde{m}$ must decrease as $t$ increases. We prove the result for any metric space that is neighborhood samplable~(\defref{def:neighborhood samplable}) and in which picking a uniformly random point is efficient.
\begin{definition}
A metric space $(\mathcal{M}, \dis)$ is $s_{sam}$-\emph{efficiently-samplable} if there exists a randomized circuit $\sample$ of size $s_{sam}$ that outputs a uniformly random point in $\mathcal{M}$.
\end{definition}

\begin{theorem}
\label{thm:imp of unp entropy}
Let $W$ be a distribution over a metric space $(\mathcal{M}, \dis)$ that is $s_{sam}$-efficiently-samplable and $(s_{neigh}, t)$-neighborhood samplable.  Furthermore, assume that the number of points within distance $t$ of any point in $\mathcal{M}$ is at least some fixed value $|B_t(\cdot)|$.  Let $(\sketch, \rec)$ be an unpredictability-entropy $(\mathcal{M}, \Hoo(W), \tilde{m}, t)$ secure sketch that is $(\epsilon, s_{sec})$-secure with error $\delta$.  If $s_{sec} \geq \max\{ t(|\rec| +s_{neigh}), |\rec| + s_{sam}\}$, then $\tilde{m}\leq \log |\mathcal{M}| - \log |B_t(\cdot)| - \log(1-\epsilon -t\delta)$.
\end{theorem}
\begin{proof}
Let $(X, Y)$ be two random variables such that $\delta^{\mathcal{D}_{s_{sec}}}((W, \sketch(W)), (X, Y))\leq \epsilon$.  It suffices to show that there exists $\mathcal{I}$ of size at most $s_{sec}$ such that $\Pr[\mathcal{I}(Y) = X]\geq |B_t(\cdot)| (1-\epsilon -t\delta) / |\mathcal{M}|$.

Let $B_t(x)$ denote the random variable representing a random neighbor at distance at most $t$ from $x$ (note that $B_t$ may not be efficiently samplable, because we assume only that a neighbor at a fixed distance is efficiently samplable).
We begin by showing that \rec must recover points of $X$.  
\begin{claim}
\label{clm:y is recoverable}
\begin{align*}
\Pr[\rec(B_t(X), Y) = X] &= \Pr[(x, y)\leftarrow (X, Y) \wedge x'\leftarrow B_t(x) \wedge \rec(x', y) = x] \\
&\geq 1-\epsilon -t\delta.
\end{align*}
\end{claim}
\begin{proof}
Suppose that $\Pr[\rec(B_t(X), Y) = X]<1-\epsilon -t\delta$.  We construct the following distinguisher $D\in\mathcal{D}_{s_{sec}}$ (the distinguisher design is slightly complicated by the fact that we do not know at which particular distance $t'$ the recovery procedure is most likely to fail, so we try all distances):
\begin{itemize}
\item Input $w\in \mathcal{M}, s\in\zo^*$.
\item For all $1\leq t'\leq t$: 
\subitem  $w'\leftarrow \neigh(w, t')$.
\subitem If $\rec(w', s) \neq  w$ output $0$.
\item Output $1$.
\end{itemize}
First note that $|D| = t( |\rec|+ s_{neigh} )$.  Since $(\sketch, \rec)$ has error $\delta$, we know that for all $w, w'\in \mathcal{M}$ with $\dis(w, w')\leq t$, \[ \Pr[s\leftarrow \sketch(w) \wedge \rec(w', s) =  w] \geq 1-\delta.\]  This implies that for all $1\leq t'\leq t$, $\Pr[\rec(\neigh(W, t'), \sketch(W)) = W]\geq 1-\delta$ and thus $\Pr[D(W, \sketch(W)) = 1]\geq 1-t\delta$.  If $\Pr[\rec(B_t(X), Y) = X] < 1-\epsilon -t\delta$, there must exist at least one $t'$, $1\leq t'\leq t$, for which $\Pr[\rec(\neigh(X, t'), Y) = X] < 1-\epsilon -t\delta$.  Then
\begin{align*}
 \Pr[D(W, \sketch(W)) = 1]  - \Pr[D(X, Y)=1] 
&\geq (1-t\delta) - \Pr[\rec(\neigh(X, t'), Y) = X] \\
&> (1-t\delta)-(1-t\delta - \epsilon)=\epsilon.
\end{align*}
This is a contradiction and the statement of the claim follows.
\end{proof}
\noindent
We now return to the proof of \thref{thm:imp of unp entropy}.
Now define $\mathcal{I}$ as follows:
\begin{itemize}
\item Input $y\in\zo^*$.
\item Sample $x'\leftarrow \sample$.
\item Output $\rec(x', y)$.
\end{itemize}
Note that $|\mathcal{I}| =  |\rec|+ s_{sam}$. 
We now show that $\mathcal{I}$ predicts $X$:
\begin{align*}
\Pr[\mathcal{I}(Y) = X] &= \sum_{x, y\in \mathcal{M}} \Pr[(X, Y) =(x, y)] \Pr[\mathcal{I}(y) = x]\\
&= \sum_{x, y\in \mathcal{M}} \Pr[(X, Y) =(x, y)] \sum_{x'\in\mathcal{M}} \Pr[\sample = x' ] \Pr[\rec(x', y) =x]\\
&\ge \sum_{x, y\in \mathcal{M}} \Pr[(X, Y) =(x, y)] \sum_{x' | \dis(x', x)\le t} \Pr[\sample = x']\Pr[\rec(x', y) =x]\\
&\ge \sum_{x, y\in \mathcal{M}} \Pr[(X, Y) =(x, y)] \sum_{x'| \dis(x', x)\le t} \frac{|B_t(\cdot)| \Pr[B_t(x) = x'] \Pr[\rec(x', y) =x]}{|\mathcal{M}|}\\
&\geq \frac{|B_t(\cdot)|}{|\mathcal{M}|}(1-\epsilon - t\delta)
\end{align*}
(the last step follows by  \clref{clm:y is recoverable}).
\end{proof}

\noindent
\textbf{Note:} If the input is uniform, the entropy loss is about $\log |B_t(\cdot)|$.  An alternative interpretation of this theorem is that fuzzy min-entropy is at most $\gamma \log |\mathcal{Z}| - \log |B_t(\cdot)|$.

As mentioned at the beginning of~\secref{sec:impossCompSecSketch}, the same entropy loss can be achieved with information-theoretic secure sketches on the uniform distribution by using the randomized code-offset construction. One interpretation of this result is that unpredictability secure sketches are not useful on high entropy distributions. 

\subsection{Implications of negative results}
\label{sec:feas comp sec sketch}
In this chapter, we have shown that secure sketches that provide pseudoentropy suffer from upper bounds similar to those on information-theoretic secure sketches.  In \chapref{chap:info theory} we showed a family of distributions that cannot be sketched.  This result extends to the computational setting: by \thref{thm:imposs sketch} and the contrapositive of~\corref{cor:rec yields sketch}, no sketch can retain HILL entropy for the same family of distributions:

\begin{corollary}
\label{cor:imposs comp sketch}
Let $n$ be a security parameter and let $\mathcal{M} = \mathbb{F}^\gamma$.  There exists a family of distributions $\mathcal{W}$ over $\mathcal{M}$ such that each element $W\in \mathcal{W}$ satisfies $\Hfuzz(W)= \omega(\log n)$, and for any $(\mathcal{M}, \mathcal{W}, \tilde{m}, t)$-HILL secure sketch $(\sketch, \rec)$ that is $(s_{sec}, \epsilon_{sec})$-hard with error $\delta$, if 
$s_{sec}\ge t(|\rec| + \gamma \log |\mathbb{F}|)$, 
$t\ge 4$, and 
 $\epsilon_{sec} + t\delta < 1/16$,
 then $\tilde{m} <4$.
\end{corollary}



\noindent
Secure sketches that provide computational unpredictability are implied by virtual grey-box obfuscation of all polynomial-size circuits~\cite{BitanskyCKP14}.  Our negative result bounds unpredictability away from the size of the metric space.  Extraction from unpredictability entropy can be done using an extractor with a reconstruction property (\lemref{lem:extract from unp}); however, a virtual grey-box obfuscator for all polynomial-size circuits can simply hide a randomly generated key, and therefore extraction is not necessary to obtain a fuzzy extractor.


\paragraph{Avoiding bounds}

Both bounds arise because \rec must function as an error-correcting code for many points of any indistinguishable distribution.  It may be possible to avoid these bounds if \rec outputs a fresh random variable\footnote{If some efficient algorithm can take the output of $\rec$ and efficiently transform it back to the source $W$, the bounds of \corref{cor:rec yields sketch} and \thref{thm:imp of unp entropy} both apply.  This means that we need to consider constructions that are hard to invert~(either information-theoretically or computationally).}.  Such an algorithm is called a computational fuzzy conductor (\defref{def:comp fuzzy cond}).  Some of our constructions will be computational fuzzy conductors, while others will have pseudorandom outputs and thus be computational fuzzy extractors (\defref{def:comp fuzzy extractor}).

\section{Supporting more errors than entropy}
\label{sec:info theory cons}
In the previous section, we showed that computational versions of secure sketches are subject to upper bounds on output entropy.  We now show how to build fuzzy extractors that do not contain a secure sketch~(achieving properties that have eluded secure sketches).  In particular, we show an information-theoretic fuzzy extractor that supports \emph{more errors than entropy}.  We describe this condition in \secref{sec:no sketch}.

The construction first condenses entropy from each block of the source and then applies a fuzzy extractor to the condensed blocks. We denote the fuzzy extractor on the smaller alphabet by $(\gen', \rep')$.  A condenser is like a randomness extractor except that the output is allowed to be slightly entropy deficient.  Condensers are known with smaller entropy loss than is possible for randomness extractors~(e.g.,~\cite{dodis2014key}).
\begin{definition}
\label{def:conductor}
A function $\cond : \mathcal{Z}\times\zo^d\rightarrow \mathcal{Y}$ is an $(m, \tilde{m}, \epsilon)$-randomness condenser if, whenever $\Hoo(W)\ge m$ and $\seed$ is uniform over $\zo^d$, there exists a distribution $Y$ with $\Hav(Y|\seed)\ge \tilde{m}$ and \[(\cond(W, \seed), \seed) \approx_\epsilon (Y, \seed).\]
\end{definition}

The main idea of the construction is that errors are ``corrected'' on the large alphabet~(before condensing) while the entropy loss for the error correction is incurred on a smaller alphabet~(after condensing).

\begin{construction}
\label{cons:info theoretic}
Let $\mathcal{Z}$ be an alphabet and let $W=W_1,..., W_\gamma$ be a distribution over $\mathcal{Z}^\gamma$.  We describe $\gen, \rep$ as follows:
\begin{center}
\begin{tabular}{c|c}
\begin{minipage}{3in}
\textbf{\gen}
\begin{enumerate}
\item \underline{Input}: $w = w_1,..., w_\gamma$
\item For $j=1,..., \gamma$:
\begin{enumerate}[(i)]
\item Sample $\seed_j\leftarrow \zo^d$.
\item Set $v_j = \cond(w_j, \seed_j)$.
\end{enumerate}
\item Set $(\key, p') \leftarrow \gen'(v_1,..., v_\gamma)$.
\item Set $p = (p', \seed_1,..., \seed_\gamma)$.
\item Output $(\key, p)$.
\end{enumerate}
 \end{minipage} &
\begin{minipage}{3in}
\textbf{\rep}
\begin{enumerate}
\item \underline{Input}: $(w', p = (p', \vec{\seed}))$
\item For $j=1,..., \gamma$:
\begin{enumerate}[(i)]
\item Set $v_j' = \cond(w_j', \seed_j)$.
\end{enumerate}
\item Output $\key = \rep'(v', p')$.
\end{enumerate}
\vspace{0.7in}
\end{minipage}
\end{tabular}
\end{center}
\end{construction}
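An illustrative Python rendering of the construction; the helpers \texttt{toy\_cond}, \texttt{toy\_gen\_prime}, and \texttt{toy\_rep\_prime} are placeholders (\texttt{toy\_cond} is \emph{not} a real condenser, and the toy inner extractor tolerates zero errors), shown only so the wrapper runs end to end:

```python
import hashlib
import random

# Generic wrapper: condense each block with a fresh seed, then run an inner
# fuzzy extractor (gen', rep') on the condensed string.  `cond`, `gen_prime`,
# and `rep_prime` are passed in as callables.

def gen(w_blocks, cond, gen_prime, seed_bits=16):
    seeds = [random.getrandbits(seed_bits) for _ in w_blocks]
    v = [cond(w_j, s_j) for w_j, s_j in zip(w_blocks, seeds)]
    key, p_prime = gen_prime(v)
    return key, (p_prime, seeds)              # the seeds become part of p

def rep(w_blocks_prime, p, cond, rep_prime):
    p_prime, seeds = p
    v_prime = [cond(w_j, s_j) for w_j, s_j in zip(w_blocks_prime, seeds)]
    return rep_prime(v_prime, p_prime)

# Toy stand-ins (placeholders only, no security or error tolerance claimed).
def toy_cond(block, seed):
    digest = hashlib.sha256(f"{seed}:{block}".encode()).digest()
    return digest[0] & 0x0F                   # condense each block to 4 bits

def toy_gen_prime(v):
    return hashlib.sha256(repr(v).encode()).hexdigest(), None

def toy_rep_prime(v_prime, p_prime):
    return hashlib.sha256(repr(v_prime).encode()).hexdigest()
```

A real instantiation would substitute a seeded condenser for \texttt{toy\_cond} and an error-tolerant fuzzy extractor over the condensed alphabet for the inner pair, as in \lemref{lem:info theory sec}.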

\noindent
For \consref{cons:info theoretic} to be secure we need most blocks to contribute some entropy to the output.  We call this notion a partial block source.

\begin{definition}
\label{def:partial source}
A distribution $W = W_1,..., W_\gamma$ is an $(\alpha, \beta)$-partial block source if there exists a set of indices $J$ where $|J| \geq \gamma - \beta$ such that the following holds:
\[
\forall j\in J, \forall w_1,..., w_{j-1} \in W_1,..., W_{j-1}, \Hoo(W_j | W_1 = w_1,..., W_{j-1}=w_{j-1}) \geq \alpha.
\]
\end{definition}
\defref{def:partial source} is a weakening of block sources~(introduced by Chor and Goldreich~\cite{DBLP:journals/siamcomp/ChorG88}), as only some blocks are required to have entropy conditioned on the past.  The choice of conditioning on the past is arbitrary: a more general sufficient condition is that there exists some ordering of indices where most items have entropy conditioned on all previous items in this ordering~(for example, a ``partial'' reverse block source~\cite{vadhan2003constructing}).  This construction is secure and it supports distributions with more errors than entropy.
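For explicitly represented (small) distributions, the set $J$ of good indices in \defref{def:partial source} can be computed directly; a hypothetical Python checker (names are illustrative):

```python
import math
from collections import defaultdict

def good_blocks(dist, gamma, alpha):
    """Return the set J of indices j whose conditional min-entropy
    H_inf(W_j | W_1..W_{j-1} = prefix) is >= alpha for EVERY prefix of
    positive probability.  `dist` maps length-gamma tuples to probabilities."""
    J = set()
    for j in range(gamma):
        worst = float("inf")
        prefix_mass = defaultdict(float)
        joint = defaultdict(float)
        for w, p in dist.items():
            prefix_mass[w[:j]] += p           # Pr[W_1..W_{j-1} = prefix]
            joint[(w[:j], w[j])] += p         # Pr[prefix and W_j = symbol]
        for (prefix, _), p in joint.items():
            worst = min(worst, -math.log2(p / prefix_mass[prefix]))
        if worst >= alpha:
            J.add(j)
    return J
```

The distribution is an $(\alpha, \beta)$-partial block source exactly when the returned set has size at least $\gamma - \beta$.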

\begin{lemma}
\label{lem:info theory sec}
%Let $n$ be a security parameter and let $\gamma = \omega(\log n)$.
Let $\mathcal{W}$ be the family of $(\alpha = \Omega(1), \beta\leq \gamma(1-\Theta(1)))$-partial block sources over $\mathcal{Z}^\gamma$ and let $\cond: \mathcal{Z} \times \zo^d\rightarrow \mathcal{Y}$ be an $(\alpha, \tilde{\alpha}, \epsilon_{cond})$-randomness condenser.  Define $\mathcal{V}$ as the family of all distributions with min-entropy at least $\tilde{\alpha}(\gamma-\beta)$ and let $(\gen', \rep')$ be a $(\mathcal{Y}^\gamma, \mathcal{V}, \kappa, t, \epsilon_{fext})$-fuzzy extractor with error $\delta$.\footnote{We actually need $(\gen', \rep')$ to be an average-case fuzzy extractor~(see \cite[Definition 4]{DBLP:journals/siamcomp/DodisORS08} and the accompanying discussion).  Most known constructions of fuzzy extractors are average-case fuzzy extractors.  For simplicity we refer to $(\gen', \rep')$ simply as a fuzzy extractor.}  Then $(\gen, \rep)$ is a $(\mathcal{Z}^\gamma, \mathcal{W}, \kappa, t, \gamma\epsilon_{cond}+\epsilon_{fext})$-fuzzy extractor with error $\delta$.

\end{lemma}

\begin{proof}[Proof of \lemref{lem:info theory sec}]
Let $W\in \mathcal{W}$.
It suffices to argue correctness and security.  We first argue correctness.
When $w_j = w_j'$, $\cond(w_j , \seed_j) = \cond(w_j', \seed_j)$ and thus $v_j = v_j'$.  Thus, for all $w, w'$ with $\dis(w, w')\le t$, we have $\dis (v, v')\le t$.  Then by correctness of $(\gen', \rep')$, $\Pr[(r, p')\leftarrow \gen'(v) \wedge r'\leftarrow \rep'(v',p') \wedge r' = r]\ge 1-\delta$.

We now argue security.  Denote by $\seed$ the random variable consisting of all $\gamma$ seeds and by $V$ the entire string of generated $V_1,..., V_\gamma$.  To show that \[\Key | P, \seed \approx_{\gamma \epsilon_{cond} + \epsilon_{fext}} U | P, \seed,\] it suffices to show that $(V, \seed)$ is $\gamma \epsilon_{cond}$-close to a distribution with average min-entropy $\tilde{\alpha}(\gamma - \beta)$ conditioned on $\seed$.  The lemma then follows by the security of $(\gen', \rep')$.

We now argue that there exists a distribution $Y$ where $\Hav(Y | seed)\ge \tilde{\alpha}(\gamma - \beta)$ and $(V, seed_1,..., seed_\gamma)\approx_{\gamma\epsilon_{cond}} (Y, seed_1,..., seed_\gamma)$.  First, note that since $W$ is an $(\alpha, \beta)$-partial block source,
there exists a set of indices $J$ where $|J| \geq \gamma - \beta$ such that the following holds:
\[
\forall j\in J, \forall w_1,..., w_{j-1} \in W_1,..., W_{j-1}, \Hoo(W_j | W_1 = w_1,..., W_{j-1}=w_{j-1}) \geq \alpha.
\]
Then consider the first element $j_1\in J$: $\forall w_1,..., w_{j_1-1}\in W_1,..., W_{j_1-1}$,
\[\Hoo(W_{j_1} | W_1 = w_1,..., W_{j_1-1} = w_{j_1-1})\ge \alpha.\]

\noindent
Thus, there exists a distribution $Y_{j_1}$ with $\Hav(Y_{j_1} | seed_{j_1}) \ge \tilde{\alpha}$ such that
\[(\cond (W_{j_1}, seed_{j_1}), seed_{j_1}, W_1,..., W_{j_1-1}) \approx_{\epsilon_{cond}} (Y_{j_1}, seed_{j_1}, W_1,..., W_{j_1-1})\]
and since $(seed_1,..., seed_{j_1})$ are independent of these values
\begin{align*}
(\cond& (W_{j_1},seed_{j_1}), W_{j_1-1},..., W_1, seed_{j_1}, ..., seed_{1}) \\&\approx_{\epsilon_{cond}} (Y_{j_1}, W_{j_1-1},..., W_1, seed_{j_1}, ...,  seed_{1})\end{align*}
Let $Z_{j_1} \overset{def}{=} ( Y_{j_1}, \cond(W_{j_1-1},seed_{j_1-1}),..., \cond(W_{1}, seed_{1}))$ and note that \[\Hav(Z_{j_1} | seed_1,...,seed_{j_1})\ge \tilde{\alpha}.\]
Applying a deterministic function does not increase statistical distance and thus,
\begin{align*}
(\cond (W_{j_1}, seed_{j_1}), \cond(W_{j_1-1}, seed_{j_1-1}),..., \cond(W_1, seed_1), seed_{j_1},..., seed_{1}) \\\approx_{\epsilon_{cond}} (Z_{j_1}, seed_{j_1},..., seed_1).
\end{align*}

\noindent
By a hybrid argument there exists a distribution $Z$ with $\Hav(Z | seed) \ge \tilde{\alpha}(\gamma -\beta)$ where
\[
(\cond(W_\gamma, seed_\gamma), ..., \cond(W_1, seed_1), seed_\gamma,..., seed_1) \approx_{\gamma \epsilon_{cond}} (Z, seed_\gamma,...,  seed_1).\]
This completes the proof.
\end{proof}

\paragraph{More errors than entropy}

In this section we show that \consref{cons:info theoretic} supports partial block sources with more errors than entropy.  The structure of a partial block source implies that  $\Hoo(W) \ge \alpha (\gamma-\beta ) = \Theta(\gamma)$.  We assume that $\Hoo(W) = \Theta(\gamma)$. The condenser of Dodis et al.~\cite{dodis2014key} has constant entropy loss, so $\alpha-\tilde{\alpha} = \Theta(1)$. This means that the input entropy to $(\gen', \rep')$ is $\Theta(\gamma)$.   We assume that the new alphabet $\mathcal{Y}$ is of constant size.  Standard fuzzy extractors on constant-size alphabets correct a constant fraction of errors at an entropy loss of $\Theta(\gamma)$, yielding $\kappa = \Theta(\gamma)$.  Thus, our construction is secure for distributions with more errors than entropy whenever $|\mathcal{Z}| = \omega(1)$.
More formally:
\[
\text{\# Errors} - \text{Entropy} = \log |B_t| - \Hoo(W) \ge  t \log |\mathcal{Z}| - \Theta(\gamma) = \Theta(\gamma) \log |\mathcal{Z}| - \Theta(\gamma)  > 0.
\]
That is, there exists a super-constant alphabet size for which \consref{cons:info theoretic} is secure with more errors than entropy.
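As a rough numeric illustration (the parameters below are arbitrary, chosen only to exhibit the gap), one can compare $\log |B_t|$ for the $q$-ary Hamming ball against a min-entropy of $\Theta(\gamma)$ bits:

```python
import math

def log2_ball(gamma, t, q):
    """log2 of the number of words within Hamming distance t of a fixed
    word in a q-ary alphabet of length gamma (sum over exact distances)."""
    total = sum(math.comb(gamma, i) * (q - 1) ** i for i in range(t + 1))
    return math.log2(total)

# Illustrative parameters: gamma = 100 blocks, a constant error fraction
# t = gamma/10, alphabet size q = 2^10, and min-entropy of gamma bits.
gamma, t, q, entropy = 100, 10, 2 ** 10, 100
errors = log2_ball(gamma, t, q)
# For a super-constant alphabet, log |B_t| exceeds the Theta(gamma)-bit
# entropy, i.e., the source has "more errors than entropy".
assert errors > entropy
```

Here $\log_2 |B_t| \approx 144$ bits against $100$ bits of min-entropy, so no secure sketch can retain positive entropy on such a source, while \consref{cons:info theoretic} still applies.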




