%!TEX root = thesis.tex

In \chapref{chap:comp fuzz}, we showed it is possible to construct a computational fuzzy extractor whose output key is as long as the starting entropy.  In this chapter, we show practical constructions of computational fuzzy extractors with additional properties.  Both constructions are based on point obfuscation and can support more errors than entropy.  In this chapter, we do not concentrate on the length of \key, as the computational techniques we use can expand the key; we instead focus on supporting a wide class of sources.

\section{A reusable computational fuzzy extractor}
\label{sec:sampling}


In \consref{cons:info theoretic}, we gave a fuzzy extractor for a family of distributions with more errors than entropy.  Using computational techniques, we are able to retain many of the advantages of \consref{cons:info theoretic} while also achieving a reusable fuzzy extractor.

The construction samples a random subset of blocks $W_{j_1},..., W_{j_\eta}$ and obfuscates the concatenation of these blocks.  Denote this concatenated value by $V_1$.  This process is repeated to produce $V_1,..., V_\ell$, where at least one $V_i$ must be reproduced exactly to ``unlock'' the correct key.
Let $\sample_{\gamma, \eta}(\cdot)$ be an algorithm that outputs a random subset of $\{1,..., \gamma\}$ of size $\eta$ given $r_{sam}$ bits of randomness.

\begin{construction}[Sample-then-Obfuscate]
\label{cons:sampling}
Let $\mathcal{Z}$ be an alphabet, and let $W = W_1,..., W_\gamma$ be a source where each $W_j$ is over $\mathcal{Z}$.
Let $\eta$ be a parameter, and $\mathcal{O}$ be an obfuscator for the family of digital lockers with $\kappa$-bit outputs.  Define $\gen, \rep$ as:

\begin{center}
\begin{tabular}{c|c}
\begin{minipage}{3in}
\textbf{\gen}
\begin{enumerate}
\item \underline{Input}: $w = w_1,..., w_\gamma$
\item Sample $\key \overset{\$}\leftarrow \zo^\kappa$.
\item For $i=1,..., \ell$:
\begin{enumerate}[(i)]
\item Select $\lambda_i\overset{\$}\leftarrow \zo^{r_{sam}}$.
\item Set $j_{i, 1},..., j_{i, \eta}\leftarrow \sample_{\gamma,\eta}( \lambda_i)$
\item Set $v_i = w_{j_{i,1}},..., w_{j_{i, \eta}}$.
\item Set $\rho_i = \mathcal{O}(I_{v_i, \key})$.
\item Set $p_i = (\rho_i, \lambda_i)$.
\end{enumerate}
\item Output $(\key, p)$, where $p=p_1\dots p_\ell$.
\end{enumerate}
 \end{minipage} &
\begin{minipage}{3in}
\textbf{\rep}
\begin{enumerate}
\item \underline{Input}: $(w'=w_1',..., w_\gamma', p)$
\item For $i=1,..., \ell$:
\begin{enumerate}[(i)]
\item Parse $p_i$ as $\rho_i, \lambda_i$.
\item $j_{i, 1},..., j_{i, \eta}\leftarrow \sample_{\gamma, \eta}(\lambda_i)$.
\item Set $v_i' = w_{j_{i, 1}}',..., w_{j_{i, \eta}}'$.
\item Set $\key_i = \rho_i(v_i')$.  \\If $\key_i\neq \perp$, output $\key_i$.
\end{enumerate}
\item Output $\perp$.
\end{enumerate}
\vspace{0.37in}
\end{minipage}
\end{tabular}
\end{center}
\end{construction}
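To make the construction concrete, the following Python sketch instantiates $\gen$ and $\rep$.  The digital locker is replaced by a salted-hash heuristic~(XORing the key, padded with zero check bytes, against a hash-derived pad), a common random-oracle-style stand-in rather than the composable VGB obfuscator the analysis assumes; the names \texttt{lock}, \texttt{unlock}, \texttt{gen}, and \texttt{rep} are illustrative.

```python
import hashlib, os, secrets

CHECK_LEN = 16  # zero bytes appended to the key so unlock can detect success

def _pad(nonce: bytes, value: bytes, length: int) -> bytes:
    # Expand H(nonce || value || counter) into a pad of `length` bytes.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(nonce + value + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def lock(value: bytes, key: bytes):
    # Heuristic digital locker: XOR (key || 0^CHECK_LEN) with a hash pad.
    nonce = os.urandom(16)
    pad = _pad(nonce, value, len(key) + CHECK_LEN)
    return nonce, bytes(a ^ b for a, b in zip(key + bytes(CHECK_LEN), pad))

def unlock(locker, value: bytes):
    nonce, ct = locker
    pt = bytes(a ^ b for a, b in zip(ct, _pad(nonce, value, len(ct))))
    # Trailing zero bytes certify a correct opening; None stands in for ⊥.
    return pt[:-CHECK_LEN] if pt.endswith(bytes(CHECK_LEN)) else None

def gen(w, eta: int, ell: int, kappa_bytes: int = 16):
    key = secrets.token_bytes(kappa_bytes)
    p = []
    for _ in range(ell):
        # Naive sampler: a uniformly random eta-subset of the gamma blocks.
        idx = sorted(secrets.SystemRandom().sample(range(len(w)), eta))
        v = b"".join(w[j] for j in idx)
        p.append((idx, lock(v, key)))
    return key, p

def rep(w_prime, p):
    for idx, locker in p:
        key = unlock(locker, b"".join(w_prime[j] for j in idx))
        if key is not None:  # one correctly reproduced sample suffices
            return key
    return None
```

With a single erroneous block, some sample avoids the error with overwhelming probability, so $\rep$ still recovers the key.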

\noindent
%There are three main differences between this construction and \consref{cons:first construction}.  They are as follows:
%\begin{itemize}
%\item Multiple blocks are concatenated together.  This comes at a cost to error tolerance but allows us to significant decrease the required entropy in each block.  This paradigm is similar to \emph{sample-then-extract} from the locally computable extractors literature~\cite{lu2002hyper,vadhan2003constructing}.  For this reason we call \consref{cons:sampling} \emph{sample-then-obfuscate}.
%\item Instead of encoding a bit of the key with each obfuscated value we encode the entire key with each obfuscated value.  This requires the use of digital lockers in place of point obfuscations.  Instead of having to open ``most'' of the obfuscations, it is only necessary for us to open a single obfuscation.  \consref{cons:first construction} only hid part of the key in each obfuscation.  This allowed some blocks in the distribution to be ``weak.''  However, sampling smoothes out $W$ so that all $V_i$ are ``good'' simultaneously.
%\item \consref{cons:first construction} required a large alphabet as blocks were individually obfuscated.  This construction works for an arbitrary size alphabet.  We show it supports $\Huse (W)\le 0$ when the alphabet size is super-constant in the security parameter.
%\end{itemize}
The use of a computational primitive~(obfuscation of digital lockers) allows us to sample multiple times, because we need to argue only about individual entropy of $V_i$, as opposed to the information-theoretic setting, where it would be necessary to argue about the entropy of the joint variable $V$.  This is the property that allows reusability.

This construction uses a na\"{i}ve sampler that takes truly random samples, but the public randomness may be substantially decreased by using more sophisticated samplers. (See Goldreich~\cite{goldreich2011sample} for an introduction to samplers.)

\begin{theorem}
\label{thm:sampling}
Let $\mathcal{Z}$ be an alphabet.  Let $n$ be a security parameter.  Let $\mathcal{W}$ be the family of $(\alpha = \Omega(1), \beta\leq \gamma(1-\Theta(1)))$-partial block sources over $\mathcal{Z}^\gamma$ where $\gamma =\Omega(n)$.  Let $\eta$ be such that $\eta = \omega(\log n)$ and $\eta = o(\gamma)$, and let $c> 1$ be a constant and $\ell$ be such that $\ell = n^c$.  Let $\mathcal{O}$ be an $\ell$-composable VGB obfuscator for digital lockers~(with $\kappa$ bit outputs) with auxiliary inputs.  Then for every $s_{sec} = \poly(n)$ there exists some $\epsilon_{sec} = \ngl(n)$ such that \consref{cons:sampling} is a $(\mathcal{Z}^\gamma, \mathcal{W}, \kappa, t)$-computational fuzzy extractor that is $(\epsilon_{sec}, s_{sec})$-hard with error $\delta$ for
\begin{align*}
t&\leq \frac{(c-1)}{2} \cdot \frac{(\gamma-\eta)\log n}{\eta} = o(\gamma)\\
\delta &= e^{-n}
\end{align*}
\end{theorem}
 
\subsection{Security of \consref{cons:sampling}}
\label{ssec:sec cons sampling}
In this section we show security of \consref{cons:sampling}.  
With overwhelming probability, at each of the $\ell$ iterations the sampler chooses enough high-entropy coordinates of $W$ for $V_i$ to have sufficient entropy.  Once each of $V_1,..., V_\ell$ has high entropy, the obfuscations are unlikely to return a value other than $\perp$ to an adversary.
% forms a block-unguessable distribution.  Then security essentially follows from the security of \consref{cons:first construction}. %follows very similarly to \lemref{lem:security of cons}~(used to show security of \consref{cons:first construction}).
%Essentially, the argument is that an adversary will never be able to open a digital locker and thus they learn no information about the key.  %By the same argument of Then \consref{cons:sampling} is just \consref{cons:first construction} applied to $V_1,.., V_\ell$, and security follows by \lemref{lem:security of cons}.
We begin by showing that each $V_i$ is statistically close to a high entropy distribution.   Let $\Lambda$ represent the random variable of all the coins used by $\sample$ and $\lambda=\lambda_1 \dots \lambda_\ell$
be some particular outcome.

\begin{lemma}
\label{lem:sampling works}
Let all variables be as in \thref{thm:sampling}.
There exists $\epsilon_{sam} = O(e^{-\eta}) = \ngl(n)$ and $\alpha' = \alpha\eta(\gamma-\beta)/(2\gamma) = \omega(\log n)$ such that for each $i$,
\[
\Pr_{\lambda\leftarrow \Lambda}[\Hoo(V_i | \Lambda= \lambda) \geq \alpha'] \geq 1- \epsilon_{sam}.
\]
\end{lemma}
\begin{proof}
Consider some fixed $i$.
Recall that there is a set $J$ of size $\gamma - \beta = \Theta(\gamma)$ such that for every $w$ and every block $j\in J$, $\Hoo(W_j | W_1 = w_1,..., W_{j-1}=w_{j-1}, W_{j+1}=w_{j+1},..., W_\gamma = w_\gamma) \geq \alpha$.  Since this is a worst-case guarantee, the entropy of $V_i$ can be deduced from the number of symbols in $V_i$ that come from $J$.  Denote this number by $X= |\{j_{i, 1},..., j_{i, \eta}\}\cap J|$.

\begin{claim}
\label{cl:vi have entropy}
\[
\Hoo(V_i |\Lambda = \lambda ) \geq \alpha X.
\]
\end{claim}
\begin{proof}
Denote by $j_1,..., j_\eta$ the indices selected by the randomness $\lambda_i$.  We begin by noting that 
\begin{align*}\Hoo(V_i |\Lambda = \lambda ) &= -\log \max_{v\in V_i} \Pr[ V_i =v | \Lambda =\lambda] \\&= -\log \max_{w_{j_1}, ..., w_{j_\eta}} \Pr[W_{j_1} = w_{j_1} \wedge \dots \wedge W_{j_\eta} = w_{j_\eta}] .
\end{align*}  Then
\begin{align*}
\max_{w_{j_1},..., w_{j_\eta}} &\Pr[ W_{j_1}=w_{j_1} \wedge \dots \wedge W_{j_\eta} = w_{j_\eta}]\\
&= \max_{w_{j_1},..., w_{j_\eta}} \prod_{k=1}^\eta \Pr[W_{j_k} = w_{j_k} | W_{j_{k-1}} = w_{j_{k-1}} \wedge ... \wedge W_{j_1} = w_{j_1}]\\
&\le \prod_{k=1}^\eta \max_{w_{j_1},..., w_{j_\eta}} \Pr[W_{j_k} = w_{j_k} | W_{j_{k-1}} = w_{j_{k-1}} \wedge ... \wedge W_{j_1} = w_{j_1}]\\
&\le\prod_{k=1}^\eta \max_{w_1,..., w_\gamma} \Pr[W_{j_k} = w_{j_k} | W_1 = w_1 \wedge ... \wedge W_{{j_k}-1} = w_{{j_k}-1} ]\\
\end{align*}
Taking the negative logarithm of both sides we have that
\begin{align*}
\Hoo(V_i | \Lambda = \lambda) &\ge \sum_{k=1}^\eta \min_{w_1,..., w_\gamma} \Hoo(W_{j_k} | W_1 = w_1 \wedge ... \wedge W_{{j_k}-1} = w_{{j_k}-1})\\
&\ge \sum_{j_k\in J} \alpha = \alpha X
\end{align*}
This completes the proof of \clref{cl:vi have entropy}.
\end{proof}

\noindent
We note that $X$ is distributed according to the hypergeometric distribution,
and that $\expe[X]=\eta(\gamma-\beta)/\gamma$. Using the tail bounds from~\cite{chvatal1979tail,scala2009hypergeometric}, we can conclude that $\Pr[X\le \expe[X]/2]\le e^{-2((\gamma-\beta)/2\gamma)^2 \eta}=O(e^{-\eta})$.
Thus, setting $\alpha'=\frac{\alpha \eta(\gamma-\beta)}{2\gamma}$ and applying \clref{cl:vi have entropy}, we conclude that
 \[
\Pr_{\lambda\leftarrow \Lambda}[\Hoo(V_i | \Lambda = \lambda) \geq \alpha'] \geq 1- O(e^{-\eta}).
\]
This completes the proof of \lemref{lem:sampling works}.
\end{proof}
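The tail bound above can be sanity-checked numerically.  The sketch below computes the exact hypergeometric tail $\Pr[X \le \expe[X]/2]$ and compares it against the Hoeffding-style bound of~\cite{chvatal1979tail}; the parameters $\gamma=1024$, $\beta=512$, $\eta=64$ are chosen only for the example.

```python
from math import comb, exp

def tail(gamma: int, beta: int, eta: int, x_max: int) -> float:
    # Pr[X <= x_max] for X = |S ∩ J|, |J| = gamma - beta, where S is a
    # uniform eta-subset of {1, ..., gamma}: the hypergeometric distribution.
    good = gamma - beta
    total = comb(gamma, eta)
    return sum(comb(good, k) * comb(beta, eta - k)
               for k in range(x_max + 1) if 0 <= eta - k <= beta) / total

gamma, beta, eta = 1024, 512, 64      # illustrative parameters only
mean = eta * (gamma - beta) // gamma  # E[X] = 32 here
p = tail(gamma, beta, eta, mean // 2)
# The bound e^{-2((gamma-beta)/(2 gamma))^2 eta} from the proof (e^{-8} here).
bound = exp(-2 * ((gamma - beta) / (2 * gamma)) ** 2 * eta)
```

For these parameters the exact tail is roughly an order of magnitude below the bound.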


\noindent
We can then argue that all $V_i$ simultaneously have individual entropy with good probability~(by union bound):
\begin{corollary}
\label{cor:samp sec}
 Let $\epsilon_{sam}$ and $\alpha'$ be as in \lemref{lem:sampling works}, and all the other variables be as in \thref{thm:sampling}.  Then $\Pr_{\lambda\leftarrow \Lambda}[\forall i, \Hoo(V_i | \Lambda = \lambda)  \ge \alpha'] \geq 1-\ell\epsilon_{sam}$.
%[V_i(V, \Lambda)$ is $(\ell\epsilon_{sam})$-close to a distribution $(V', U_{\ell\times r_{sam}})$ where for $u\in U_{\ell\times r_{sam}}$ for all $i$, $\Hoo(V_i' | U_{\ell\times r_{sam}} =u)\geq \alpha'$.
\end{corollary}
%\begin{proof}
%Union bound over the probability in \lemref{lem:sampling works}.
%%Hybrid argument over the statistical distance in \lemref{lem:sampling works}.
%\end{proof}

Once all $V_i$ simultaneously have good entropy, the adversary sees only $\perp$ as an output from the obfuscations~(with overwhelming probability).  If the adversary sees only $\perp$ from the obfuscations, they learn no information about $\key$.
That is, each output $V_1,..., V_\ell$ is hard to guess.  We call this type of distribution an \emph{unguessable block source}:\footnote{In this definition we allow there to be a set of weak blocks.  \consref{cons:first construction} is secure for sources that satisfy this weaker definition.}
\begin{definition}
\label{def:block guessable}
Let $I_v (\cdot, \cdot)$ be an oracle that returns \[I_v(j, v_j')=
\begin{cases}
1 & v_j = v_j'\\
0 & \text{otherwise}.
\end{cases}
\]
A source $V = V_1| ... |V_\gamma$ is a $(q, \alpha, \beta)$-\emph{unguessable block source} if there exists a set $J\subset\{1,..., \gamma\}$ of size at least $\gamma -\beta$ such that for any unbounded adversary $S$ with oracle access to $I_v$ making at most $q$ queries
\[
\forall j\in J, \Hav(V_j |View(S^{I_{V}(\cdot, \cdot)}))\geq \alpha.
\]
\end{definition}
\noindent
This is made formal in the following corollary:
\begin{corollary}
\label{cor:v are unguessable}
Let $\epsilon_{sam}, \alpha'$ be as in \lemref{lem:sampling works},  and all the other variables be as in \thref{thm:sampling}. Take any $q=\poly(n)$.  For $\alpha'' =\alpha'-1-\log (q+1) =  \omega(\log n)$, with  probability $1-\ell \epsilon_{sam}$ over the choice of $\Lambda=\lambda$, the distribution $V| \Lambda=\lambda$ is a $(q, \alpha'', 0)$-unguessable block source.
\end{corollary}
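The oracle interface in \defref{def:block guessable} can be sketched in Python as follows; the class name and the budget handling are illustrative.  Unguessability then says that even after $q$ adaptive equality queries, every block indexed by $J$ retains average min-entropy $\alpha$ given the adversary's view.

```python
class EqualityOracle:
    """The oracle I_v(j, v_j') from the definition, with a budget of q queries."""

    def __init__(self, v, q: int):
        self.v = v          # the hidden blocks v_1, ..., v_gamma
        self.q = q          # maximum number of queries
        self.queries = 0

    def query(self, j: int, guess) -> int:
        # Return 1 iff the guess matches block j; count the query.
        if self.queries >= self.q:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        return 1 if self.v[j] == guess else 0
```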

Finally, we can show the construction is secure if the inputs form an unguessable block source.
\begin{lemma}
\label{lem:samp unguess}
Let all the variables be as in \thref{thm:sampling}.
For every $s_{sec} = \poly(n)$ there exists $\epsilon_{sec} = \ngl(n)$ such that $\delta^{\mathcal{D}_{s_{sec}}}((\Key, P), (U_{\kappa}, P))< \epsilon_{sec}$.
\end{lemma}
\begin{proof}

Let $\mathcal{O}$ be an $\ell$-composable VGB obfuscator with auxiliary input for digital lockers over $\mathcal{Z}$.\footnote{In this proof we consider only the case where sampling has produced an unguessable block source.  The negligible probability that this does not happen is included in the security bound of \thref{thm:sampling}.}  Let $V$ be a $(q, \alpha'' = \omega(\log n), 0)$-unguessable block source.  Our goal is to show that for all $s_{sec} = \poly(n)$ there exists $\epsilon_{sec} =\ngl(n)$ such that $\delta^{\mathcal{D}_{s_{sec}}}((\Key, P), (U, P))\le \epsilon_{sec}$.

Suppose not; that is, suppose there is some $s_{sec} = \poly(n)$ such that there exists $\epsilon_{sec} = 1/\poly(n)$ with $\delta^{\mathcal{D}_{s_{sec}}}((\Key, P), (U, P))> \epsilon_{sec}$.
Let $D$ be such a distinguisher of size at most $s_{sec}$.  That is,
\[
| \expe[D(\Key, P)] - \expe[D(U, P)]| > \epsilon_{sec} = 1/\poly(n).
\]
Define the oracle $I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)$ as follows:
\[I_{v_1,..., v_\ell, \key}(x, i) =
\begin{cases}
\key & v_i = x\\
\perp & \text{otherwise.}
\end{cases}\]
By the security of obfuscation~(\defref{def:obf}), there exists an unbounded-time simulator $S$~(making at most $q$ queries) such that
\begin{align}
\label{eq:dist before}
|\expe [D(\Key, P_1,..., P_\ell)] - \expe [S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(\Key, 1^{\ell \log |Z|})] |\leq \epsilon_{sec}/3.
\end{align}
We now prove $S$ cannot distinguish between $\Key$ and $U$.
\begin{lemma}
\label{lem:sim cannot distinguish samp}
$\sd(S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(\Key, 1^{\ell \log |Z|}), S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(U, 1^{\ell \log |Z|})) \le \ell 2^{-\alpha''}$.
\end{lemma}

\begin{proof}
\noindent It suffices to show that for any two values in $\zo^\kappa$, the statistical distance is at most $\ell 2^{-\alpha''}$.
\begin{lemma}
\label{lem:codewords in I close samp}
Let $\key$ be the true value encoded in $I$ and let $u\in \zo^\kappa$.  Then,
\[
\sd( S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(\key, 1^{\ell \log |Z|}), S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(u, 1^{\ell \log |Z|})) \le \ell 2^{-\alpha''}.
\]
\end{lemma}
\begin{proof}
Recall that for all $j$, $\Hav(V_j | View(S))\geq \alpha''$.  The only information about the correct value of $\key$ is contained in the query responses.  When all responses are $\perp$, the view of $S$ is identical whether it is presented with $\key$ or $u$.  We now show that, for any value of $\key$, all queries to a given index $j$ return $\perp$ with probability at least $1-2^{-\alpha''}$.  Suppose not; that is, suppose the probability of a response other than $\perp$ on some index $j$ is greater than $2^{-\alpha''}$.

When there is a response other than $\perp$ for some $j$, there is no remaining min-entropy in $V_j$.  If this occurs with probability greater than $2^{-\alpha''}$, it violates the unguessability of $V$~(\defref{def:block guessable}).  By the union bound over the indices $j$, the total probability of a response other than $\perp$ is at most $\ell 2^{-\alpha''}$. Thus, for all $\key, u$ the statistical distance is at most $\ell 2^{-\alpha''}$.  This concludes the proof of \lemref{lem:codewords in I close samp}.
\end{proof}
By averaging over all points in $\zo^\kappa$ we conclude that
\[\sd(S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(\Key, 1^{\ell \log |Z|}), S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(U, 1^{\ell \log |Z|})) < \ell 2^{-\alpha''}.\]  This completes the proof of \lemref{lem:sim cannot distinguish samp}.
\end{proof}

\noindent Now by the security of obfuscation we have that
\begin{align}
\label{eq:dist after}
|\expe [D(U, P_1,..., P_\ell) ]- \expe [S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(U, 1^{\ell \log |Z|})] |\leq \epsilon_{sec}/3.
\end{align}
Combining Equations~\ref{eq:dist before} and~\ref{eq:dist after} and \lemref{lem:sim cannot distinguish samp}, we have
\begin{align*}
\delta^{D}((\Key, P), (U, P))&\leq |\expe [D(\Key, P_1,..., P_\ell)] - \expe [S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(\Key, 1^{\ell \log |Z|})]| \\
&+|\expe[S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(\Key, 1^{\ell \log |Z|})] - \expe[S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(U, 1^{\ell \log |Z|})] |\\
&+|\expe [S^{I_{v_1, ..., v_\ell, \key}(\cdot, \cdot)}(U, 1^{\ell \log |Z|})] - \expe [D(U, P_1,..., P_\ell) ]|\\
&\leq \epsilon_{sec}/3+ \ell 2^{-\alpha''}+\epsilon_{sec}/3 \\
&\leq 2\epsilon_{sec}/3 + \ngl(n) < \epsilon_{sec}.
\end{align*}
This is a contradiction and completes the proof of \lemref{lem:samp unguess}.
\end{proof}


\subsection{Correctness of \consref{cons:sampling}}
\label{sec:correct sampling}
We encode the entire key in each obfuscation.  For correctness, at least one of the repeated readings must unlock its obfuscation with overwhelming probability.  Let $V_i$ represent one of the initially sampled values and $V_i'$ the corresponding value from a repeated reading.  To show correctness, we must show that $\Pr[\forall i, V_i \neq V_i'] \le \ngl(n)$.



\begin{lemma}
\label{lem:sampling errors}
Let all the variables be as in \thref{thm:sampling}.
 Then $\Pr[\forall i, v_i\neq v_i'] < \ngl(n)$, where the probability is over the coins of $\gen$.
\end{lemma}
\begin{proof}

Recall that $\dis(w, w')\leq t$ and that the locations of the errors are independent of the selected locations.  Let $\mu = \frac{(c-1)\log n}{2}$, so that the bound on $t$ in \thref{thm:sampling} gives $t\le \mu(\gamma-\eta)/\eta$.  Since $\eta = \omega(\log n)$, we may assume
$\eta\ge 2\mu$.  We begin by computing the probability that a single $v_i = v_i'$.
\begin{align*}
\Pr[v_i = v_i'] &= \Pr[w\text{ and }w'\text{ agree on positions }j_{i,1},..., j_{i,\eta}]\\
&\ge \prod_{j=0}^{\eta-1} \left( 1- \frac{t}{\gamma -j }\right) \ge \prod_{j=0}^{\eta-1}\left(1-\frac{\mu(\gamma-\eta)/\eta}{\gamma-j}\right)\\
&\ge \prod_{j=0}^{\eta-1} \left( 1- \frac{\mu}{\eta}\left(\frac{\gamma-\eta}{\gamma -j }\right)\right)\ge \prod_{j=0}^{\eta-1}\left(1-\frac{\mu}{\eta}\right)\\
&= \left(1-\frac{\mu}{\eta}\right)^{\eta} =\left( \left(1-\frac{\mu}{\eta}\right)^{\eta/\mu}\right)^\mu\geq \left(\frac{1}{2}\right)^{2\mu}\\
&\ge \left(\frac{1}{2}\right)^{(c-1) \log n}= \frac{1}{n^{c-1}}.
\end{align*}
Since the $\lambda_i$ are independent, the probability that all $v_i\neq v_i'$ is:
\begin{align*}
\Pr[\forall i, v_i \neq v_i'] &= \left(1-\Pr[v_i= v_i']\right)^\ell\\
&\le\left( 1- \frac{1}{n^{c-1}}\right)^\ell =\left(\left( 1- \frac{1}{n^{c-1}}\right)^{n^{c-1}}\right)^{\ell /n^{c-1}}\\
&\le \left(\frac{1}{e}\right)^{n^c/n^{c-1}} = \frac{1}{e^n}.
\end{align*}
This completes the proof of \lemref{lem:sampling errors}.
\end{proof}
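The two inequalities in the proof can be checked exactly for concrete parameters.  The sketch below takes $t$ at the bound from \thref{thm:sampling} and verifies that a single sample avoids all errors with probability at least $n^{-(c-1)}$, and that all $\ell = n^c$ samples fail with probability at most $e^{-n}$; the parameter choices are illustrative only, and $\log$ is taken base $2$.

```python
from math import comb, exp, log2

def p_clean(gamma: int, t: int, eta: int) -> float:
    # Pr[a uniform eta-subset avoids all t error positions]
    #   = C(gamma - t, eta) / C(gamma, eta)
    return comb(gamma - t, eta) / comb(gamma, eta)

n, c = 16, 2            # security parameter and exponent (illustrative)
gamma, eta = 1024, 64   # gamma = Omega(n); eta between omega(log n) and o(gamma)
ell = n ** c            # number of samples
# The theorem's error bound t <= ((c-1)/2) (gamma - eta) log(n) / eta.
t = int((c - 1) * (gamma - eta) * log2(n) / (2 * eta))
single = p_clean(gamma, t, eta)
all_fail = (1 - single) ** ell
```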



\subsection{Reusability of \consref{cons:sampling}}
The reusability of \consref{cons:sampling} follows from the security of the VGB obfuscator with auxiliary input.  We consider a bounded $q = \poly(n)$ number of reuses.  For some fixed $i\in \{1,..., q\}$, we treat the remaining keys as auxiliary input to the adversary; the simulator still performs comparably to a distinguisher with access to the obfuscations.  Thus, given a sufficiently composable obfuscator, we achieve the following result:

\begin{theorem}
\label{thm:reusability}
Let $q = \poly(n)$, and let all the variables be as in \thref{thm:sampling}, except that $\mathcal{O}$ is an $\ell\times q$-composable VGB obfuscator for digital lockers~(with $\kappa$-bit outputs) with auxiliary inputs.  For any admissible $f_2,..., f_q$ and for all $s_{sec} = \poly(n)$, there exists some $\epsilon_{sec} = \ngl(n)$ such that $(\gen, \rep)$ is a $(q, \epsilon_{sec}, s_{sec}, f_2,..., f_q)$-reusable fuzzy extractor.
\end{theorem}
\begin{proof}
The only modification to the proof is in \lemref{lem:samp unguess}, where the other keys $\Key_1,..., \Key_{i-1}, \Key_{i+1}, ..., \Key_q$ are treated as additional auxiliary input to the adversary/simulator.  The simulator in the definition of composable obfuscation is required to function for arbitrary circuits in the family even if the choice of these circuits depends on the previous obfuscations.  This allows the reading $w_i$ to be chosen depending on the public values $p_1,..., p_{i-1}$.
\end{proof}

\paragraph{More errors than entropy?}
We now show \consref{cons:sampling} supports partial block sources with more errors than entropy.  The structure of the partial block source implies that $\Hoo(W) \ge \alpha (\gamma-\beta ) = \Theta(\gamma)$.  We assume that $\Hoo(W) = \Theta(\gamma)$.  We are able to correct $o(\gamma)$ errors.
This yields:
\[
\text{\# Errors} - \text{Entropy} =  \log |B_t| -\Hoo(W) \ge t \log |\mathcal{Z}| - \Theta(\gamma)= o(\gamma) \log |\mathcal{Z}| - \Theta(\gamma)
\]
That is, there exists a super-constant alphabet size for which \consref{cons:sampling} is secure with more errors than entropy.  

\textbf{Notes:} \consref{cons:sampling} works for an alphabet of arbitrary size; however, for a constant-size alphabet, the required entropy is greater than the number of corrected error patterns.  \consref{cons:sampling} is reusable for an alphabet of arbitrary size.

In the analysis of \consref{cons:sampling}, we restricted our attention to partial block sources to allow for an easy comparison with \consref{cons:info theoretic}.  In fact, \consref{cons:sampling} is secure for any source where sampling produces a high-entropy string~(entropy $\omega(\log n)$) with overwhelming probability. For example, it is secure for sources whose symbols are $\omega(\log n)/\log |\mathcal{Z}|$-wise independent.


\section{Allowing Correlated Symbols}
\label{sec:cor construction}

In the previous section, we presented a reusable computational fuzzy extractor that supported sources with more errors than entropy.
Unfortunately, both Constructions~\ref{cons:info theoretic} and \ref{cons:sampling} required each symbol to contribute ``fresh'' entropy.  In this section, we present a computational construction that allows for correlation between symbols while still supporting more errors than entropy and correcting a constant fraction of errors.
This construction is inspired by the construction of digital lockers from point obfuscation by Canetti and Dakdouk~\cite{canetti2008obfuscating}.  Instead of having large parts of the string $w$ unlock $\key$, we have individual symbols unlock bits of the output.  The construction that follows is a computational fuzzy conductor~(\defref{def:comp fuzzy cond}), not a computational fuzzy extractor~(\defref{def:comp fuzzy extractor}), so we call its output $c$ to distinguish it from $\key$.

Before presenting the construction we provide some definitions from error correcting codes.
We use error-correcting codes over $\{0,1\}^\gamma$ that correct up to $t$ bit flips from $0$ to $1$ but no bit flips from $1$ to $0$ (this is the Hamming analog of the $Z$-channel~\cite{tallini2002capacity}).\footnote{Any code that corrects $t$ Hamming errors also corrects $t$ $0\rightarrow 1$ errors, but more efficient codes exist for this type of error~\cite{tallini2002capacity}.
Codes with $2^{\Theta(\gamma)}$ codewords and $t = \Theta(\gamma)$ over the binary alphabet exist for Hamming errors and suffice for our purposes~(first constructed by Justesen~\cite{justesen1972class}).  These codes also yield a constant error tolerance for $0\rightarrow 1$ bit flips.
The class of errors we support in our source~($t$ Hamming errors over a large alphabet) and the class of errors for which we need codes~($t$ $0\rightarrow 1$ errors) are different.  Use of a code that corrects $t$ Hamming errors gives the construction perfect correctness.
}
\begin{definition}
\label{def:hamming z channel}
Let $e, c\in \zo^\gamma$ be vectors.  Let $x = \error(c, e)$ be defined as follows
\[x_i = \begin{cases} 1 & c_i=1 \vee e_i=1\\
0& \text{ otherwise}.\end{cases}\]
\end{definition}

\begin{definition}
A set $C$~(over $\zo^\gamma$) is a $(t, \delta_{code})$-Z code if there exists an efficient procedure $\decode$ such that \[\forall e \in \zo^\gamma \text{ with } \weight(e)\le t, \quad \Pr_{c\in C}[\decode(\error(c,e)) \neq c] \leq \delta_{code}.\]
\end{definition}
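A toy example of a Z code is a $K$-fold repetition code: because the channel flips bits only from $0$ to $1$, any surviving $0$ in a block certifies that the encoded bit was $0$.  This corrects every error vector of weight at most $K-1$, so it is a $(K-1, 0)$-Z code; it is only illustrative, as constant-rate constructions~(e.g., Justesen-based codes) are far more efficient.

```python
K = 4  # repetition factor: corrects any pattern of up to K-1 zero-to-one flips

def encode(bits):
    # Repeat each message bit K times.
    return [b for bit in bits for b in [bit] * K]

def apply_errors(codeword, e):
    # The error(c, e) operation: coordinatewise OR of codeword and error vector.
    return [c | b for c, b in zip(codeword, e)]

def decode(received):
    # A block decodes to 0 iff it still contains a 0 (only 0 -> 1 flips occur).
    return [0 if 0 in received[i:i + K] else 1
            for i in range(0, len(received), K)]
```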

\begin{construction}
\label{cons:first construction}
Let $\mathcal{Z}$ be an alphabet and let $W = W_1,..., W_\gamma$ be a distribution over $\mathcal{Z}^\gamma$.  Let $\mathcal{O}$ be an obfuscator for point functions with points from $\mathcal{Z}$.  Let  $C\subset \zo^\gamma$ be an error-correcting code.
We describe $\gen, \rep$ as follows:

\begin{center}
\begin{tabular}{c|c}
\begin{minipage}{3in}
\textbf{\gen}
\begin{enumerate}
\item \underline{Input}: $w = w_1,..., w_\gamma$
\item Sample $c\leftarrow C$.
\item For $j=1,..., \gamma$:
\begin{enumerate}[(i)]
\item If $c_j = 0$: $p_j = \mathcal{O}(I_{w_j})$.
\item Else: $r_j \overset{\$}\leftarrow \mathcal{Z}$.
\subitem Let $p_j = \mathcal{O}(I_{r_j})$.
\end{enumerate}
\item Output $(c, p)$, where $p=p_1\dots p_\gamma$.
\end{enumerate}
 \end{minipage} &
\begin{minipage}{3in}
\textbf{\rep}
\begin{enumerate}
\item \underline{Input}: $(w', p)$
\item For $j=1,..., \gamma$:
\begin{enumerate}[(i)]
\item If $p_j(w_j') = 1$: set $c_j' = 0$.
\item Else: set $c_j' = 1$.
\end{enumerate}
\item Set $c = \decode(c')$.
\item Output $c$.
\end{enumerate}
\vspace{0.15in}
\end{minipage}
\end{tabular}
\end{center}
\end{construction}
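The following Python sketch instantiates \consref{cons:first construction} with a salted hash standing in for the point obfuscator and a toy $K$-fold repetition Z code standing in for $C$.  Both stand-ins, and all names, are illustrative rather than the primitives the analysis assumes.

```python
import hashlib, os, secrets

K = 4  # toy repetition Z code: corrects up to K-1 zero-to-one flips

def obfuscate(point: bytes):
    # Salted hash stands in for a point obfuscation of I_point.
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + point).digest()

def check(obf, candidate: bytes) -> bool:
    salt, digest = obf
    return hashlib.sha256(salt + candidate).digest() == digest

def gen(w):
    bits = [secrets.randbelow(2) for _ in range(len(w) // K)]
    c = [b for bit in bits for b in [bit] * K]  # sample a codeword c <- C
    p = []
    for j, cj in enumerate(c):
        # Obfuscate w_j where c_j = 0, and a fresh random r_j where c_j = 1.
        point = w[j] if cj == 0 else os.urandom(len(w[j]))
        p.append(obfuscate(point))
    return c, p

def rep(w_prime, p):
    c_prime = [0 if check(p[j], w_prime[j]) else 1 for j in range(len(p))]
    # Z-channel decoding: a surviving 0 in a block certifies the bit was 0.
    bits = [0 if 0 in c_prime[i:i + K] else 1
            for i in range(0, len(c_prime), K)]
    return [b for bit in bits for b in [bit] * K]  # re-encode to recover c
```

A mismatch $w_j' \neq w_j$ can flip $c_j'$ only from $0$ to $1$~(a false match is negligibly likely), which is exactly the error type the Z code corrects.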

%\bnote{shorten this discussion?}
%The input $w$ is hidden in two different ways.  In locations where $c_j=1$, the block $w_j$ is information-theoretically unknown.
%In locations where $c_j=0$, it is hard to find $w_j$ given access to the point obfuscation.
%There are two possible reasons for a bit $c_j'$ to be $1$: because the true value was $1$ and because $w_j \neq w_j'$.  However, if a bit $c_j'$ is $0$, this likely means that $w_j=w_j'$ because collisions when $c_j=0$ are unlikely~(occurring with probability $1/|\mathcal{Z}|$).  This is the reason for the use of a code that only corrects $0\rightarrow 1$ flips.

\consref{cons:first construction} is secure if no distinguisher can tell whether it is working with random obfuscations or obfuscations of $W_j$.  By the security of point obfuscation, anything learnable from the obfuscation is learnable from oracle access to the function. Therefore, our construction is secure as long as enough blocks are unpredictable even after adaptive queries to equality oracles for individual symbols. \defref{def:block guessable} formalizes this intuition.

We show some examples of unguessable block sources in \apref{sec:characterize}.  In particular, any source $W$ where for all $j$, $\Hoo(W_j) \geq \omega(\log n)$~(but blocks may be arbitrarily correlated) is an unguessable block source~(\clref{cl:all blocks entropy}).

\consref{cons:first construction} is not a computational fuzzy extractor.  The codewords $c$ are not uniformly distributed, and it is possible to learn some bits of $c$~(for the symbols of $W$ without much entropy).  However, \consref{cons:first construction} is a computational fuzzy conductor~(\defref{def:comp fuzzy cond}).
Computational fuzzy conductors can be converted to computational fuzzy extractors using standard techniques~(\lemref{lem:cond and cext}).

\begin{theorem}
\label{thm:main thm first cons}
Let $n$ be a security parameter. Let $\mathcal{Z}$ be an alphabet where $|\mathcal{Z}| \ge 2^{ \omega(\log(n))}$.
Let $\mathcal{W}$ be a family of $(q,\alpha= \omega(\log n),  \beta)$-unguessable block sources over $\mathcal{Z}^\gamma$, for any $q = \poly(n)$.  Furthermore, let $C$ be a $(\neigh_t, \delta_{code})$-code over $\zo^\gamma$.  Let $\mathcal{O}$ be a $\gamma$-composable VGB obfuscator for point functions with auxiliary inputs. Then for any $s_{sec} = \poly(n)$ there exists some $\epsilon_{sec}=\ngl(n)$ such that \consref{cons:first construction} is a $(\mathcal{Z}^\gamma, \mathcal{W}, \tilde{m}=H_0(C)-\beta, t)$-computational fuzzy conductor that is $(\epsilon_{sec}, s_{sec})$-hard with error $\delta_{code} + \gamma/|\mathcal{Z}|$.
\end{theorem}


\subsection{Security of \consref{cons:first construction}}
Security of \consref{cons:first construction} is similar to that of \consref{cons:sampling}.  However, the argument is more complicated: the main difficulty is that the definition of unguessable block sources~(\defref{def:block guessable}) allows for weak blocks that can easily be guessed.  This means we must limit our indistinguishable distribution to blocks that are difficult to guess.  Security is proved via the following lemma:

\begin{lemma}
\label{lem:security of cons}
Let all variables be as in \thref{thm:main thm first cons}.  For every $s_{sec} = \poly(n)$ there exists some $\epsilon_{sec} = \ngl(n)$ such that $H^{\hill}_{\epsilon_{sec}, s_{sec}}( C | P ) \geq H_0(C) - \beta$.
\end{lemma}

We give a brief outline of the proof, followed by the proof.
It is sufficient to show that there exists a distribution $C'$ with sufficient conditional min-entropy such that \[\delta^{\mathcal{D}_{s_{sec}}}((C, P), (C', P))\le \ngl(n).\]  Let $J$ be the set of indices that exists according to \defref{def:block guessable}. Define the distribution $C'$ as a uniform codeword conditioned on the values of $C$ and $C'$ being equal on all indices outside of $J$.  We first note that $C'$ has sufficient entropy, because $\Hav(C' |P) = \Hav(C' | C_{J^c}) \ge \Hoo(C', C_{J^c}) - H_0(C_{J^c})  = H_0(C) - |J^c|$ (the second step is by \cite[Lemma 2.2b]{DBLP:journals/siamcomp/DodisORS08}).  It remains to show $\delta^{\mathcal{D}_{s_{sec}}}((C, P), (C', P)) \le \ngl(n)$.
%Define the distribution $X$ as follows:
%\[X_i =
%\begin{cases}
%W_i & C_i = 0\\
%R_i & C_i = 1.
%\end{cases}\]
The outline for the rest of the proof is as follows:
\begin{itemize}
\item Let $D$ be a distinguisher between $(C, P)$ and $(C', P)$. Since $P$ is a collection of obfuscated programs, there exists a simulator $S$~(outputting a single bit), such that $\Pr[D(C, P)=1]$ is close to $\Pr[S^{\mathcal{O}}(C)=1]$.
\item Show that even an unbounded $S$ making a polynomial number of queries to the stored points cannot distinguish between $C$ and $C'$.  That is, $\sd(S^{\mathcal{O}}(C),S^{\mathcal{O}}(C'))$ is small.
\item By the security of obfuscation, $\Pr[S^{\mathcal{O}}(C')=1]$ is close to $\Pr[D(C', P)=1]$.
\end{itemize}
\begin{proof}[Proof of \lemref{lem:security of cons}]
\label{app:security of main cons}

Let $\mathcal{O}$ be a $\gamma$-composable VGB obfuscator with auxiliary input for point programs over $\mathcal{Z}$.  Let $W$ be a $(q, \alpha = \omega(\log n), \beta)$-unguessable block source.  Our goal is to show that for all $s_{sec} = \poly(n)$ there exists $\epsilon_{sec} =\ngl(n)$ such that $H^{\hill}_{\epsilon_{sec}, s_{sec}}(C|P)\geq H_0(C)- \beta$. % for $\epsilon' = 2\epsilon_{obf} + (\gamma - \beta)2^{-(\alpha-1)}$.
Suppose not; that is, suppose there is some $s_{sec} = \poly(n)$ such that there exists $\epsilon_{sec} = 1/\poly(n)$ with $H^{\hill}_{\epsilon_{sec}, s_{sec}}(C|P) < H_0(C)-\beta$.
By \defref{def:block guessable} there exists a set of indices $J$ such that all blocks within $J$ are unguessable.  Define $C'$ to be the distribution obtained by sampling a uniform codeword with all locations outside $J$ fixed~(to the corresponding values of $C$).  Then
$\Hav(C' | C_{J^c}) \ge \Hoo(C', C_{J^c}) - H_0(C_{J^c})  = H_0(C) - \beta$ (by \cite[Lemma 2.2b]{DBLP:journals/siamcomp/DodisORS08}).

Let $D$ be a distinguisher of size at most $s_{sec}$ such that
\[
| \expe[D(C, P)] - \expe[D(C', P)]| > \epsilon_{sec} = 1/\poly(n).
\]
Define the distribution $X$ as follows:
\[X_j =
\begin{cases}
W_j & C_j = 0\\
R_j & C_j = 1.
\end{cases}\]  By the security of obfuscation~(\defref{def:obf}), there exists an unbounded-time simulator $S$~(making at most $q$ queries) such that
\begin{align}
\label{eq:dist before first}
|\expe [D(P_1,..., P_\gamma, C)] - \expe [S^{I_X(\cdot, \cdot)}(C, 1^{\gamma \log |Z|})] |\leq \epsilon_{sec}/3.
\end{align}
We now prove $S$ cannot distinguish between $C$ and $C'$.
\begin{lemma}
\label{lem:sim cannot distinguish}
$\sd(S^{I_X(\cdot, \cdot)}(C, 1^{\gamma \log |Z|}), S^{I_X(\cdot, \cdot)}(C', 1^{\gamma \log |Z|})) \le (\gamma-\beta) 2^{-(\alpha-1)}$.
\end{lemma}

\begin{proof}
\noindent It suffices to show that for any two codewords that agree on $J^c$, the statistical distance is at most $(\gamma-\beta)2^{-(\alpha-1)}$.
\begin{lemma}
\label{lem:codewords in I close}
Let $c^*$ be the true value encoded in $X$ and let $c'$ be a codeword in the support of $C'$.  Then,
\[
\sd( S^{I_X(\cdot, \cdot)}(c^*, 1^{\gamma \log |Z|}), S^{I_X(\cdot, \cdot)}(c', 1^{\gamma \log |Z|})) \le ( \gamma -\beta) 2^{-(\alpha-1)}.
\]
\end{lemma}
\begin{proof}
Recall that for all $j\in J$, $\Hav(W_j | View(S))\geq \alpha$.  The only information about the correct value of $c_j^*$ is contained in the query responses.  When all responses are $0$, the view of $S$ is identical whether it is presented with $c^*$ or $c'$.  We now show that for any value of $c^*$, all queries on an index $j \in J$ return $0$ with probability at least $1-2^{-\alpha+1}$.  Suppose not; that is, suppose the probability of at least one nonzero response on index $j$ is $> 2^{-\alpha+1}$.  Since $w, w'$ are independent of $r_j$, the probability of a nonzero response when $c^*_j = 1$ is at most $q/|\mathcal{Z}|$, or equivalently $2^{-\log |\mathcal{Z}|+\log q}$.  Decomposing over the value of $c_j^*$:
\begin{align}
2^{-\alpha+1}&<\Pr[\text{nonzero response at location }j]\nonumber \\
 &= \Pr[c_j^* =1]\Pr[\text{nonzero response at location }j\mid c_j^*=1]\nonumber \\&\quad+ \Pr[c_j^*=0] \Pr[\text{nonzero response at location }j \mid c_j^*=0]\nonumber \\
&\le 1\times 2^{-\log|\mathcal{Z}|+\log q} + 1\times  \Pr[\text{nonzero response at location }j \mid c_j^*=0] \label{eq:ways to remove ent}
\end{align}
We now show that for a block unguessable source the entropy parameter satisfies $\alpha\leq \log |\mathcal{Z}|-\log q$:
\begin{claim}
\label{cl:ent bounded away from n}
If $W$ is a $(q, \alpha, \beta)$-block unguessable source over $\mathcal{Z}$ then $\alpha \le \log |\mathcal{Z}|-\log q$.
\end{claim}
\begin{proof}
Let $W$ be a $(q, \alpha, \beta)$-block unguessable source.  Let $J\subset\{1,..., \gamma\}$ be the set of good indices.
It suffices to show that there exists an $S$ making $q$ queries such that for some $j\in J$, $\Hav(W_j | S^{I_{W}(\cdot, \cdot)})\le \log |\mathcal{Z}| - \log q$.  Let $j$ be an arbitrary element of $J$ and denote by $w_{j,1}, ..., w_{j,q}$ the $q$ most likely outcomes of $W_j$~(breaking ties arbitrarily).  Then $\sum_{i=1}^q \Pr[W_j = w_{j,i}]\geq q/|\mathcal{Z}|$.  Suppose not; then some $w_{j,i}$ has probability $\Pr[W_j = w_{j,i}] < 1/|\mathcal{Z}|$.  Since the remaining $|\mathcal{Z}| - q$ possible values of $W_j$ must have total probability greater than $1-q/|\mathcal{Z}|$, at least one of these values has probability greater than $1/|\mathcal{Z}|$.  This contradicts $w_{j,1},..., w_{j,q}$ being the most likely values.  Consider the $S$ that queries its oracle on $(j, w_{j,1}),..., (j, w_{j,q})$.  Denote by $Bd$ the indicator of the event $W_j\in \{w_{j,1},..., w_{j,q}\}$.  After these queries the remaining min-entropy is at most:
\begin{align*}
\Hav(W_j | S^{I_W(\cdot, \cdot)}) &=  -\log \left(\Pr[Bd=1]\times 1+ \Pr[Bd=0]\times \max_{w}\Pr[W_j = w| Bd =0]\right)\\
&\leq  -\log \left(\Pr[Bd=1]\times 1\right)\\
&=-\log\left( \frac{q}{|\mathcal{Z}|} \right) = \log|\mathcal{Z}|-\log q
\end{align*}
This completes the proof of \clref{cl:ent bounded away from n}.
\end{proof}
\noindent
Rearranging terms in Equation~\ref{eq:ways to remove ent}, we have:
\begin{align*}
 \Pr[\text{nonzero response at location }j \mid c_j^*=0] &>2^{-\alpha+1} - 2^{-(\log |\mathcal{Z}|-\log q)}\ge 2^{-\alpha+1}-2^{-\alpha} = 2^{-\alpha}
 \end{align*}
 When there is a nonzero response on a query $(j, w)$ with $c_j^*=0$, the value of $W_j$ is revealed and no min-entropy remains.  If this occurs with probability greater than $2^{-\alpha}$, it violates the block unguessability of $W$~(\defref{def:block guessable}).  By the union bound over the indices $j\in J$, the total probability of a nonzero response in $J$ is at most $(\gamma-\beta)2^{-\alpha+1}$. Recall that $c^*, c'$ match on all indices outside of $J$. Thus, for all $c^*, c'$ the statistical distance is at most $(\gamma- \beta)2^{-\alpha+1}$.  This concludes the proof of \lemref{lem:codewords in I close}.
\end{proof}
\noindent
By averaging over all points in $C'$ we conclude that \[\sd(S^{I_X(\cdot, \cdot)}(C, 1^{\gamma \log |Z|}), S^{I_X(\cdot, \cdot)}(C', 1^{\gamma \log |Z|})) \le (\gamma -\beta)2^{-(\alpha-1)}.\]  This completes the proof of \lemref{lem:sim cannot distinguish}.
\end{proof}

\noindent Now by the security of obfuscation we have that
\begin{align}
\label{eq:dist after first}
|\expe [D(P_1,..., P_\gamma, C') ]- \expe [S^{I_X(\cdot, \cdot)}(C', 1^{\gamma \log |Z|})] |\leq \epsilon_{sec}/3.
\end{align}
Combining Equations~\ref{eq:dist before first} and~\ref{eq:dist after first} and \lemref{lem:sim cannot distinguish}, we have
\begin{align*}
\delta^{D}(( P, C), (P, C'))&\leq |\expe [D(P_1,..., P_\gamma, C)] - \expe [S^{I_X(\cdot, \cdot)}(C, 1^{\gamma \log |Z|})]| \\
&+|\expe[S^{I_X(\cdot, \cdot)}(C, 1^{\gamma \log |Z|})] - \expe[S^{I_X(\cdot, \cdot)}(C', 1^{\gamma \log |Z|})] |\\
&+|\expe [S^{I_X(\cdot, \cdot)}(C', 1^{\gamma \log |Z|})] - \expe [D(P_1,..., P_\gamma, C') ]|\\
&\leq \epsilon_{sec}/3+ (\gamma-\beta)2^{-(\alpha-1)}+\epsilon_{sec}/3 \\
&\leq 2\epsilon_{sec}/3 + \ngl(n) < \epsilon_{sec}.
\end{align*}
This is a contradiction and completes the proof of \lemref{lem:security of cons}.
\end{proof}
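As an illustrative sanity check of \clref{cl:ent bounded away from n}, the following Python sketch computes the average min-entropy of a single block after the $q$ most likely outcomes are queried. The geometric-style distribution is hypothetical, chosen only for illustration; the computation mirrors the $\Pr[Bd=1]$ decomposition in the proof.

```python
import math

# Sanity check: querying the q most likely outcomes of a block over alphabet
# of size Z drives its conditional min-entropy to at most log|Z| - log q.
# The distribution below is hypothetical, for illustration only.
Z, q = 16, 4
probs = [2.0 ** -(i + 1) for i in range(Z - 1)] + [2.0 ** -(Z - 1)]
assert abs(sum(probs) - 1.0) < 1e-12   # a valid distribution

ordered = sorted(probs, reverse=True)
p_bd = sum(ordered[:q])        # Pr[Bd = 1]: one of the q queries hits
best_rest = max(ordered[q:])   # heaviest remaining outcome

# E[best guess] = Pr[Bd=1]*1 + Pr[Bd=0]*max_w Pr[W=w | Bd=0] = p_bd + best_rest
guess_prob = p_bd + best_rest
residual_entropy = -math.log2(guess_prob)

assert p_bd >= q / Z                                    # top-q mass >= q/|Z|
assert residual_entropy <= math.log2(Z) - math.log2(q)  # the claim's bound
```

The assertions hold for any distribution over $\mathcal{Z}$, not just this one, which is exactly the content of the claim.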

\subsection{Correctness of \consref{cons:first construction}}
We now argue correctness of \consref{cons:first construction}.
We begin by showing that the probability of a single $1\rightarrow 0$ bit flip in $c$ is negligible.
%Most of the effort in showing the correctness of \consref{cons:first construction} is showing that $\dis(w, w')\leq t$ implies $\dis(c, c')\leq t$.  However, we start by justifying our use of unidirectional codes.
\begin{lemma}
\label{lem:no 1 to 0 flips}
Let all variables be as in \thref{thm:main thm first cons}.
The probability of at least one $1\rightarrow 0$ bit flip~(an obfuscation of a random point being unlocked by a block of $w'$) is $ \le \gamma/|\mathcal{Z}| = \ngl(n)$.
\end{lemma}
\begin{proof}
Consider a coordinate $j$ for which $c_j=1$. Since $w'$ is chosen independently of the points $r_j$, and $r_j$ is uniform, $\Pr[r_j =w_j']  = 1/|\mathcal{Z}|$. The lemma follows by the union bound, since there are at most $\gamma$ such coordinates.
\end{proof}

Since there are at most $t$ locations for which $w_j\neq w_j'$, there are at most $t$ $0\rightarrow 1$ bit flips in $c$, which the code corrects with probability $1-\delta_{code}$, because $c$ is chosen independently of $w'$.
Therefore, \consref{cons:first construction} is correct with error at most $\delta_{code} + \gamma/|\mathcal{Z}|$.
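A quick Monte Carlo sketch of \lemref{lem:no 1 to 0 flips}: a uniformly random stored point $r_j$ collides with the second reading $w_j'$ with probability $1/|\mathcal{Z}|$, so by the union bound a $1\rightarrow 0$ flip occurs somewhere with probability at most $\gamma/|\mathcal{Z}|$. The parameters below are hypothetical and kept small so the event is visible in simulation.

```python
import random

# Estimate the probability that any uniformly random point r_j collides with
# the reading w'_j; the union bound predicts at most gamma/|Z|.
random.seed(1)
Z, gamma, trials = 64, 16, 20000

flips = 0
for _ in range(trials):
    w_prime = [random.randrange(Z) for _ in range(gamma)]  # second reading
    r = [random.randrange(Z) for _ in range(gamma)]        # random stored points
    if any(a == b for a, b in zip(w_prime, r)):            # any 1 -> 0 flip?
        flips += 1

assert flips / trials <= gamma / Z   # empirical rate within the union bound
```

In the construction $|\mathcal{Z}|$ is super-polynomial, so this failure probability is negligible rather than the noticeable rate seen with these toy parameters.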

\paragraph{More errors than entropy?}
\label{sec:discussion}
In this section, we show that \consref{cons:first construction} can support distributions with more errors than entropy.
We first calculate the size of the Hamming ball.
\[
\log |B_t| = \log \sum_{i=0}^t {\gamma \choose i} (|\mathcal{Z}|-1)^i> \log {\gamma \choose t} (|\mathcal{Z}|-1)^t =\Theta(t\log |\mathcal{Z}|) + \log {\gamma\choose t}
\]
The simplest type of unguessable block source is one in which each block is independent and has super-logarithmic entropy~(\clref{cl:independent high ent}).  For this type of source the entropy is $\Hoo(W) = \gamma\cdot\omega(\log n)$.  This yields:
\[
\text{\# errors} - \text{entropy} = \log |B_t| -  \Hoo(W)  >\left( \Theta(t\log |\mathcal{Z}|) + \log {\gamma \choose t}\right) -  \gamma \omega(\log n) .
\]
When $t =\Theta(\gamma)$ and the entropy of each block is $o(\log |\mathcal{Z}|)$, the construction supports more errors than entropy. Furthermore, the output entropy is $H_0(C) -\beta$~(if $C$ is drawn from a constant-rate code, this is $\Theta(\gamma)$).
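The comparison above can be instantiated numerically. The parameters in this sketch are hypothetical, chosen only to illustrate the regime $t = \Theta(\gamma)$ with block entropy well below $\log|\mathcal{Z}|$:

```python
import math

# Numeric instance of the errors-vs-entropy comparison: gamma blocks over a
# 32-bit alphabet, t = gamma/4 errors, each independent block carrying 8 bits
# of entropy (well below log|Z| = 32).  All parameters are hypothetical.
gamma, t, logZ, bits_per_block = 1024, 256, 32, 8

# Lower bound on log |B_t|: log C(gamma, t) + t * log(|Z| - 1)
log_ball = math.log2(math.comb(gamma, t)) + t * math.log2(2 ** logZ - 1)
entropy = gamma * bits_per_block   # H_oo(W) for independent blocks

assert log_ball > entropy          # more errors than entropy
```

Here $\log|B_t| \approx 9000$ bits while $\Hoo(W) = 8192$ bits, so the number of errors exceeds the entropy of the source.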

\paragraph{Improvements}  If most codewords have Hamming weight close to $1/2$, we can decrease the error tolerance required of the code from $t$ to about $t/2$, because roughly half of the mismatches between $w$ and $w'$ occur at locations where $c_j =1$, and these cause no bit flips.
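A small simulation of this remark, with hypothetical parameters: for a roughly balanced codeword, only the mismatches landing on positions with $c_j = 0$ become $0\rightarrow 1$ flips, and on average that is about half of them.

```python
import random

# For a ~weight-1/2 codeword, count how many of the t mismatched positions
# have c_j = 0 (the only ones that cause 0 -> 1 flips).  Expect about t/2.
# Parameters are hypothetical.
random.seed(7)
gamma, t, trials = 1024, 200, 500

flip_counts = []
for _ in range(trials):
    c = [random.randrange(2) for _ in range(gamma)]   # ~balanced codeword
    mismatches = random.sample(range(gamma), t)       # t mismatched blocks
    flip_counts.append(sum(1 for j in mismatches if c[j] == 0))

avg_flips = sum(flip_counts) / trials
assert abs(avg_flips - t / 2) < 0.1 * t   # close to t/2 on average
```

This is only an average-case statement; a worst-case guarantee would still need concentration over the codeword weight distribution.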

If $\gamma$ is not long enough to get a sufficiently long output, the construction can be run multiple times with the same input and independent randomness.



