
%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is the LaTeX source for the instructions to authors using
% the LaTeX document class 'llncs.cls' for contributions to
% the Lecture Notes in Computer Sciences series.
% http://www.springer.com/lncs       Springer Heidelberg 2006/05/04
%
% It may be used as a template for your own input - copy it
% to a new file with a new name and use it as the basis
% for your article.
%
% NB: the document class 'llncs' has its own and detailed documentation, see
% ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\documentclass[runningheads,a4paper]{llncs}



\usepackage{amssymb}
\setcounter{tocdepth}{3}
\usepackage{graphicx}


\usepackage{bm}% bold math
\usepackage{amsmath}
\usepackage{latexsym}
\usepackage{epsfig}
\usepackage{amsbsy}
\usepackage{array}
\usepackage{setspace}
\usepackage{slashbox}
\usepackage{ctable, dashrule}
\usepackage{color}


\newcommand{\wuhao}{\fontsize{6pt}{\baselineskip}\selectfont}

\DeclareMathOperator*{\argmax}{argmax}
\usepackage{caption}
\captionsetup{font={scriptsize}}

%{\noindent \(\Box\)\par\addvspace{\medskipamount}}

\renewenvironment{proof}{\addvspace{\medskipamount}\noindent{\em Proof. }}%
{\rightline { $\Box$ }\par\addvspace{\medskipamount}}
\newenvironment{proof-sketch}{\addvspace{\medskipamount}\noindent{\em Proof Sketch. }}%



\usepackage{arydshln}

% \renewcommand{\arraystretch}{1.8}


\newcommand{\tablepage}[2]{\begin{minipage}{#1}\vspace{0.5ex} #2 \vspace{0.5ex}\end{minipage}}
\newcommand{\ignore}[1]{}
\newcommand{\eps}{\varepsilon}
\newcommand{\R}{\mathcal{R}}
\newcommand{\weak}{\mathcal{W}eak}
\newcommand{\block}{\mathcal{B}lock}
\newcommand{\sv}{\mathcal{SV}}
\newcommand{\bcl}{\mathcal{BCL}}
\newcommand{\Ext}{\textsf{Ext}}
\newcommand{\Enc}{\textsf{Enc}}
\newcommand{\sd}{\textsf{SD}}
\newcommand{\rd}{\textsf{RD}}
\newcommand{\bit}{\{0,1\}}
\newcommand{\br}{\mathbf{r}}

\newcommand{\supp}{\mathop{\mbox{supp}}}


\usepackage{cite}
\def\citeBreaks{\do\A\do\[\do\\\do\]\do\^}


\usepackage{url}



\usepackage[T1]{fontenc}
\usepackage{subfig}

\captionsetup{belowskip=-20pt,aboveskip=-20pt}

\urldef{\mailsa}\path|yaoyanqing1984@gmail.com, lizj@buaa.edu.cn|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}


\newcommand{\mypar}[1]{\vspace{4pt} \noindent {\sc #1}\ }


\begin{document}

\mainmatter  % start of an individual contribution

% first the title is needed
\title{Achieving Differential Privacy with Realistic Imperfect Randomness}
%\title{On the Impossibility of Privacy and Differential Privacy with Imperfect Randomness}

% a short form should be given in case it is too long for the running head
\titlerunning{ Achieving Differential Privacy with Realistic Imperfect Randomness}

% the name(s) of the author(s) follow(s) next
%
% NB: Chinese authors should write their first names(s) in front of
% their surnames. This ensures that the names appear correctly in
% the running heads and the author index.
%
\author{ Yanqing Yao$^{a,b}$,  Zhoujun Li$^{a,c}$ }
%
\authorrunning{Y.Q. Yao, Z.J. Li}
% (feature abused for this document to repeat the title also on left hand pages)

% the affiliations are given next; don't give your e-mail address
% unless you accept that it will be published
\institute{
$^a$State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China\\
$^b$Department of Computer Science, New York University, New York 10012, USA\\
$^c$Beijing Key Laboratory of Network Technology, Beihang University, Beijing, China\\
\mailsa\\
}

%
% NB: a more complex sample for affiliations and the mapping to the
% corresponding authors can be found in the file "llncs.dem"
% (search for the string "\mainmatter" where a contribution starts).
% "llncs.dem" accompanies the document class "llncs.cls".
%

\toctitle{Lecture Notes in Computer Science}
\tocauthor{Authors' Instructions}



% \begin{titlepage}
%\def\thepage{}


 \maketitle
\vspace{-4ex}

\begin{abstract}

We revisit the question of achieving differential
privacy with realistic imperfect randomness.
In the design of differentially private mechanisms, it is usually assumed that a uniformly random source is available.
However, in many situations this assumption is unrealistic, and one must deal with various imperfect random sources.
Dodis et al. (CRYPTO'12) showed that
differential privacy can be achieved with a Santha-Vazirani (SV) source by imposing a stronger property called
SV-consistent sampling, and left open the question of whether
differential privacy is possible with more realistic (i.e., less structured) sources than the SV source. The Bias-Control Limited (BCL) source, introduced by Dodis (ICALP'01), generalizes both the SV source and the sequential bit-fixing source, and is more realistic.
Unfortunately, if we naively extend SV-consistent sampling to the BCL source, the extension cannot achieve differential privacy. One main reason is that SV-consistent sampling requires ``consecutive'' strings, while some strings cannot be generated from a ``non-trivial'' BCL source.

\vspace{1ex}
Motivated by this question, we introduce a new appealing property, called compact BCL-consistent sampling, whose degenerate case differs from the SV-consistent sampling proposed by Dodis et al. (CRYPTO'12). We prove that if the BCL source satisfies this property, then the corresponding
mechanism is differentially private.  Even when the BCL source degenerates into the SV source, our proof is more intuitive and simpler than that of Dodis et al. (CRYPTO'12).  Further, we construct explicit mechanisms using
a new truncation technique as well as arithmetic coding, and derive concrete bounds on their differential privacy and accuracy.
While the results of \cite{DY14} imply that if there {\slshape exist} differentially private mechanisms for imperfect randomness, then the parameters must satisfy
certain constraints, we show an {\slshape explicit} construction of such mechanisms whose parameters match those constraints.





 \ignore{
 An extra fruit is that the precision of the specific mechanism of Dodis et al. [CRYPTO'12] can be modified via this technique such that it becomes a   SV-consistent sampling mechanism.
}
% \keywords{ Differential Privacy; realistic imperfect randomness; Santha-Vazirani source; Bias-Control Limited  Source; Consistent Sampling;  Finite-Precision Mechanism }
\end{abstract}

%\end{titlepage}

\vspace{-4ex}
% \setcounter{page}{1}
\section{Introduction}





Traditional cryptographic
models take for granted  the availability of perfect randomness, i.e., sources that output unbiased and independent
random bits. However, in many settings  this assumption  seems unrealistic, and one must deal with various imperfect sources of randomness. Some well known
examples of such imperfect random sources are
physical sources, biometric data,
secrets with partial leakage, and group elements from Diffie-Hellman key exchange.
To abstract this concept, several formal models of realistic imperfect sources have been described; see \cite{DY14} for a summary. Roughly speaking, they can be divided into extractable and non-extractable sources. Extractable sources allow for deterministic extraction of nearly perfect randomness. While the question of optimizing the extraction rate and efficiency is interesting in its own right, from a qualitative perspective such sources are good for any application where perfect randomness suffices. Unfortunately, it was quickly realized that many realistic sources are non-extractable~\cite{SV86,CG88,D01}. The simplest example is the Santha-Vazirani (SV) source~\cite{SV86}, which produces an infinite sequence of bits $r_1, r_2, \hdots$, with the property
that $\Pr[r_i=0\mid r_1\ldots r_{i-1}] \in [\frac{1}{2} ( 1 - \gamma), \frac{1}{2} ( 1 + \gamma)]$, for any setting of the prior bits $r_1,  \ldots, r_{i-1}$. Santha and Vazirani~\cite{SV86} showed that there exists no deterministic
extractor $\Ext: \bit^n\rightarrow \bit$ capable of extracting even a {\em single} bit of bias {\em strictly} less than $\gamma$ from the $\gamma$-SV source, irrespective of how many SV bits $r_1,  \ldots, r_n$ it is willing to wait for.
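As a toy illustration (not part of the original constructions), a $\gamma$-SV source can be viewed as a sampler whose per-bit bias is chosen, possibly adversarially, within the allowed band. The \texttt{adversary} callback below is a hypothetical name for the strategy choosing the bias from the prefix:

```python
import random

def sv_sample(gamma, n, adversary=None, rng=random):
    """Sample n bits from a gamma-SV source (sketch).

    Each bit is 0 with probability in [(1-gamma)/2, (1+gamma)/2],
    where the exact bias may depend on all previous bits. The
    hypothetical `adversary` callback returns Pr[bit = 0] given the
    prefix; None models an honest, unbiased source.
    """
    bits = []
    for _ in range(n):
        if adversary is None:
            p0 = 0.5  # unbiased: inside the allowed band for any gamma
        else:
            p0 = adversary(bits)
            assert (1 - gamma) / 2 <= p0 <= (1 + gamma) / 2
        bits.append(0 if rng.random() < p0 else 1)
    return bits

# An adversary that pushes toward 0 as hard as the model allows (gamma = 0.25).
push_to_zero = lambda prefix: (1 + 0.25) / 2

sample = sv_sample(0.25, 16, adversary=push_to_zero)
```

Even with this maximal per-bit push, no deterministic function of the output bits can extract a bit of bias strictly below $\gamma$, which is the content of the SV impossibility result.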

\ignore{

Despite this pessimistic result, ruling out the ``black-box compiler'' from perfect to imperfect (e.g., SV) randomness for {\em all} applications, one may still hope that specific ``non-extractable'' sources, such as SV-sources, might  be sufficient for {\em concrete} applications, such as simulating probabilistic algorithms or cryptography. Indeed, a series of results~\cite{VV85,SV86,CG88,Zuc96,ACRT99} showed that very ``weak'' sources
(including SV-sources and even much more realistic ``weak'' sources) are sufficient for simulating probabilistic polynomial-time
algorithms; namely, for problems which do not inherently need randomness, but which could potentially
be sped up using randomization. Moreover, even in the area of cryptography --- where randomness is {\em essential} (e.g., for key generation) --- it turns out that many ``non-extractable''  sources (again, including SV sources and more) are sufficient for {\em authentication} applications, such as the designs of MACs~\cite{MW97,DKRS06} and even signature schemes~\cite{DOPS04,ACMPS14} (under appropriate hardness assumptions). Intuitively, the reason for the latter ``success story'' is that authentication applications only require that it is hard for the attacker to completely guess (i.e., ``forge'') a certain long string, so having (min-)entropy in our source should be sufficient to achieve this goal.


}

Despite this pessimistic result, which rules out a ``black-box compiler'' from perfect to imperfect (e.g., SV) randomness for {\em all} applications, one may still hope that specific ``non-extractable'' sources (e.g., SV sources) are sufficient for {\em concrete} applications. Indeed, there is already a series of positive results on simulating probabilistic polynomial-time algorithms \cite{VV85,SV86,CG88,Zuc96,ACRT99} and on {\em authentication} applications \cite{MW97,DOPS04,DKRS06,ACMPS14}.
Unfortunately, the situation appears to be much less bright for {\em privacy} applications, such as encryption, commitment, and zero-knowledge; see \cite{DLMV12,DY14} for a survey.
 \ignore{  First, McInnes and Pinkas~\cite{MP90} showed that unconditionally secure symmetric encryption cannot be based on SV sources, even if one is restricted to encrypting a single bit. This result was subsequently strengthened by Dodis et al.~\cite{DOPS04}, who showed that SV sources are not sufficient for building even computationally secure encryption (again, even of a single bit), and, in fact, essentially any other cryptographic task involving ``privacy'' (e.g., commitment, zero-knowledge, secret sharing and others). This was again strengthened by Austrin et al.~\cite{ACMPS14}, who showed that the negative results still hold even if SV source is efficiently samplable.  Bosley and Dodis~\cite{BD07} showed an even more negative result: if a source of randomness $\R$ is ``good enough'' to generate a secret key capable of encrypting $k$ bits, then one can deterministically extract nearly $k$ almost uniform bits from $\R$, suggesting that traditional privacy {\em requires} an ``extractable'' source of randomness.  }
While a series of negative results seem to strongly point in the direction that privacy inherently requires extractable randomness, a recent work of Dodis et al.~\cite{DLMV12} put a slight dent into this consensus, by showing that SV sources are provably sufficient for achieving a more recent notion of privacy, called {\em differential privacy} (DP)~\cite{DMNS06}.



The  motivating scenario of differential privacy is a statistical database. The purpose of
a privacy-preserving statistical database is to enable the user to learn released  statistical facts without
compromising the privacy of the individual users whose data is in the database.
Differential privacy ensures that the removal or addition of a single database item does not
(substantially) affect the outcome of any analysis \cite{Dwo08}.   More formally,
a differentially private mechanism $M(D, \br)$ uses its randomness $\br$ to ``add enough noise'' to the true answer $f(D)$, where $D$ is some sensitive database of users, and $f$ is some useful aggregate information (query) about the users of $D$.
  On one hand, to preserve  individual users' privacy, we want  $M$ to satisfy $\xi$-differential privacy, that is, for any neighboring databases $D_1$ and $D_2$ (i.e., $D_1$ and $D_2$  differ on a single record), and for any possible output $z$,
$ e^{-\xi}  \leq \Pr \limits_{ \mathbf{r}} [M(D_1, f; \mathbf{r})=z] / \Pr \limits_{ \mathbf{r}} [M(D_2, f; \mathbf{r})=z]  \leq e^{\xi}$ for
small $ \xi > 0$.  On the other hand, to preserve the utility (or accuracy) of $M$, we want the expected value
of $|f(D)-M(D, f; \mathbf{r})|$ over random $\mathbf{r}$ to be as small as possible. Usually, one must make a tradeoff between differential privacy and utility.






Additive-noise mechanisms \cite{DMNS06,GRS09,HT10}  have the form $M(D, f; \mathbf{r}) \\= f(D) +  X(\mathbf{r})$,   where
 $X$ is an appropriately chosen ``noise'' distribution added to guarantee $\xi$-DP.  For instance, for counting queries,   the right distribution is the Laplace distribution \cite{DMNS06}.
\ignore{
 Dwork et al. \cite{DMNS06} designed a private and accurate  mechanism
$M(D, f; \mathbf{r}) = f(D) + \lfloor X(\mathbf{r}) \rceil$,   where $X(\mathbf{r})$ is the Laplace distribution.   This  mechanism can be approximated by a finite precision  one.    As it turned out, a good
enough approximation can be accomplished by sampling each value $ z \in \mathbb{Z}$   with precision roughly  proportional to
proportional to Pr[z] }
However, {\slshape we cannot generate a ``good enough'' sample of the
Laplace distribution with SV sources.}  In
fact, any accurate and private additive-noise mechanism for a source $R$ implies the existence of a
randomness extractor for $R$, essentially collapsing the notion of differential privacy to that of traditional
privacy, and showing the impossibility of accurate and private additive-noise mechanisms for
SV sources \cite{DLMV12}.  From another perspective,  an additive-noise mechanism must satisfy $T_1 \cap T_2 = \emptyset$,
based on which an SV adversary can always succeed in amplifying the ratio $\Pr[\mathbf{r} \in  T_1] / \Pr[\mathbf{r} \in  T_2]$ (see \cite{DLMV12}), or $ | \Pr[\mathbf{r} \in  T_1] -\Pr[\mathbf{r} \in  T_2]  |$ (see \cite{DY14}),
where $T_i$ is the set of coins $ \mathbf{r}$ with $M(D_i, f;  \mathbf{r})=z$.
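For concreteness, the standard additive-noise Laplace mechanism can be sketched as follows, assuming {\em perfect} randomness and inverse-CDF sampling; this is exactly the kind of sampling that, by the discussion above, cannot be carried out with SV sources:

```python
import math
import random

def laplace_mechanism(true_answer, xi, rng=random):
    """Additive-noise mechanism M(D, f; r) = f(D) + round(X), where X is
    Laplace noise with scale 1/xi (query sensitivity 1), sampled by
    inverting the Laplace CDF. Assumes a perfect uniform source.
    """
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sample of Laplace(0, 1/xi); symmetric in the sign of u.
    noise = -(1.0 / xi) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_answer + round(noise)

random.seed(0)
answers = [laplace_mechanism(100, 0.5) for _ in range(2000)]
# Empirical expected error; roughly the scale 1/xi = 2.
avg_err = sum(abs(a - 100) for a in answers) / len(answers)
```

The sets $T_1, T_2$ from the text are then the preimages of a fixed output $z$ under the two neighboring true answers; for an additive-noise mechanism they are disjoint, which is what the SV adversary exploits.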

Dodis et al. \cite{DLMV12} observed a necessary condition, called  consistent sampling (i.e., informally, $|T_1 \cap T_2| \approx |T_1 |  \approx |T_2|$), to  build SV-robust mechanisms.
\ignore{ This property is similar to  ``consistent sampling'' \cite{Man94,Hol07}
 with applications in differential privacy \cite{MMP$^+$10}, web search  \cite{BGMZ97} and parallel repetition theorems \cite{Hol07}, among others. }
 They also introduced another condition to match the bit-by-bit property of SV sources.  The combination of the two conditions
is called SV-consistent sampling. They built a concrete accurate and private Laplace mechanism using truncation and arithmetic coding
techniques.  Such a mechanism is capable of working with all such distributions, provided that the utility $\rho$ is relaxed to be a polynomial in $1/\eps$, whose degree and coefficients depend on $\delta$, but {\em not} on the size of the database $D$.
Coupled with the impossibility of traditional privacy with SV sources, this result suggested a qualitative gap between traditional and differential privacy, but left the following open problem.




\mypar{ \underline{Open Question.}}  {\slshape  Is  differential privacy  possible with more realistic (i.e., less structured) sources than SV sources? }


Dodis \cite{D01} introduced a more realistic source, called the Bias-Control Limited (BCL) source, which generates a sequence of bits
 $x_1,  \ldots, x_n$, where for $i =1,  \ldots, n$, the value of $x_i$
can depend on $x_1,  \ldots,  x_{i-1}$ in one of the following two ways: (A) $x_i$ is determined by $x_1, \ldots, x_{i-1}$, but this happens for at most $b$ bits, or (B)  $  \frac{1- \delta}{2}  \leq   \Pr[x_i=1 \mid x_1,  \ldots, x_{i-1}]  \leq \frac{1+\delta}{2}$, where $ 0 \leq \delta < 1$. (See Definition \ref{DFBCL}.)  In particular, when
$b=0$, it degenerates into the SV source of \cite{SV86}; when $\delta =0$, it yields the bit-fixing source of \cite{LLS89}; when $b=0$ and $\delta =0$, it corresponds to
perfect randomness. If $b \neq 0$ and $\delta \neq 0$, we say the BCL source is non-trivial.   The BCL source models the fact that each bit produced by a
streaming source is unlikely to be perfectly random: slight errors (due to noise, measurement errors, and imperfections)
are inevitable, and some of the bits may have non-trivial dependencies on the previous bits (due to internal
correlations, poor measurement, or improper setup), to the point of being completely determined by them.


Hence, compared with the SV source, the BCL source appears much more realistic, especially if the number of interventions $b$ is moderate.
Indeed, since it naturally (and realistically!) relaxes the SV source, for which non-trivial differential privacy is possible, it is interesting to ask whether
the existing results can be extended to BCL sources (especially for reasonably high $b$, a question raised by Dodis \cite{D14}).
\ignore{
Unfortunately, SV-consistent sampling is useless to BCL source as some strings can't be generated from BCL source, which 
violates the ``interval'' property in  SV-consistent sampling. }
Recently, Dodis and Yao \cite{DY14} showed an impossibility result for the BCL source: when $b \geq  \Omega( \log( \xi \rho )/ \delta)$, it is impossible to achieve a $( \mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{U},  \rho)$-accurate mechanism for Hamming weight queries. In other words, if there exists a $( \mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{U},  \rho)$-accurate mechanism for Hamming weight queries, then $b \leq O(\log( \xi \rho )/ \delta)$.  This result leaves some hope of
designing differentially private and accurate mechanisms for some $b$.


% BCL source relates to the study of ``discrete control processes'' and the problem of coin-flipping.

\mypar{Our Results and Techniques.}



We first try to naturally extend SV-consistent sampling to BCL-consistent sampling, but obtain only negative results.
This is not surprising, as the ``interval'' property (see Definition \ref{SVCS}) is crucial for achieving SV-differential privacy, while a mechanism based
on $\mathcal{BCL}(\delta, b)$ with $b \neq 0$ cannot be an interval one.

Essentially, to achieve differential privacy,
we need to bound $ \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1   \backslash T_2 ] / \Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} \in T_2] $.  As in \cite{DLMV12}, consistent sampling is still a necessary condition for building BCL-robust, differentially private mechanisms.  From the generation procedure of $BCL(\delta, b, n)$,
we can upper bound the numerator and lower bound the denominator by introducing the common prefix $ \mathbf{u} $ of $ T_1$ and $T_2$.
Instead of limiting $|\textmd{SUFFIX} ( \mathbf{u}, n)  | / |T_1 \cup T_2| = 2^{n-|\mathbf{u}| } / |T_1 \cup T_2|$ as in \cite{DLMV12}, we limit $n-|\mathbf{u}|$. The concept of compact BCL-consistent sampling (Definition \ref{BCLCSCON}) emerges from this motivation.
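The quantity $n-|\mathbf{u}|$ is easy to compute on toy data. The sketch below (with hypothetical coin sets, purely for illustration) finds the longest common prefix $\mathbf{u}$ of the strings in $T_1 \cup T_2$ and the resulting slack $n-|\mathbf{u}|$:

```python
def common_prefix(strings):
    """Longest common prefix u of a non-empty collection of bit strings.

    The lexicographic minimum and maximum suffice: any character where
    they agree is shared by every string in between.
    """
    first, last = min(strings), max(strings)
    i = 0
    while i < len(first) and i < len(last) and first[i] == last[i]:
        i += 1
    return first[:i]

# Hypothetical coin sets T1, T2 over n = 6 coins.
T1 = {"010110", "010111", "011000"}
T2 = {"010111", "011000", "011001"}
u = common_prefix(T1 | T2)  # "01"
n = 6
slack = n - len(u)          # the quantity n - |u| bounded by
                            # compact BCL-consistent sampling
```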

However, constructing explicit differentially private mechanisms presents some difficulties.
With the method for obtaining finite-precision mechanisms in \cite{DLMV12}, we cannot upper bound $n-|\mathbf{u}|$ by a constant! To solve this problem, we develop a new truncation trick and hence design a new mechanism (see Section 4.1).  Our contributions are as follows.







\ignore{

Aiming at negative results about differential privacy, Dodis and Yao \cite{DY14}  considered the simplest concrete example of differential privacy, where a ``record'' $D(i)$ is a single bit, and $q$ is the Hamming weight $wt(D)$ of the corresponding bit-vector $D$ (i.e., $wt(D) = \sum D(i)$). In this case, a very simple $\eps$-DP mechanism~\cite{DMNS06}  $M(D,\br)$ would simply return $wt(D)+e(\br)$ (possibly truncated to always be between $0$ and $|D|$), where $e(\br)$ is an appropriate noise\footnote{So called Laplacian distribution, but the details do not matter here.} with $\rho= \mathbb{E}[|q(\br)|] \approx 1/\eps$. Intuitively, this setting ensures that when the value $D(i)$ changes from $0$ to $1$, the answer distribution $M(D, \br)$ does not ``change'' by more than $\eps$.

}

\vspace{-1ex}

{\slshape
% In this paper, we propose a differentially private  and accurate mechanism based on BCL sources.
\begin{itemize}

\item   We introduce a new concept, called compact BCL-consistent sampling (see Definition \ref{BCLCSCON}), to study differentially private mechanisms.  It should be noted that for $b=0$, the degenerate case of compact BCL-consistent sampling is not the same as the SV-consistent sampling (see Definition \ref{SVCS}) proposed by \cite{DLMV12}.

\item   We prove that if the BCL source satisfies this property, then the corresponding mechanism is differentially private (see Theorem \ref{CSDP}).
       Even when the BCL source degenerates into an SV source, our proof is more intuitive and simpler than that of \cite{DLMV12} (see Theorem \ref{CSDP} with $b=0$ and Theorem 4.4 of \cite{DLMV12}).

\item   We use a new truncation technique and arithmetic coding in the design of a finite-precision mechanism to satisfy compact BCL-consistent sampling (see Section 4.1).

\item  We also give rigorous proofs of the differential privacy and accuracy of this kind of mechanism (Theorems \ref{bcl-dp} and \ref{bcl-u}).

\item  While the result of \cite{DY14} implies that if there exists a $( \mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{U}, \rho)$-accurate mechanism for Hamming weight queries, then the parameters must satisfy $ \rho > \frac{2^{b \cdot \log (1+ \delta) -9}}{\xi}$, we build such {\slshape explicit} mechanisms whose parameters match this condition (Theorem \ref{spe}).

\end{itemize}
}

\ignore{Thus we have made some progress.
$  \bullet $   An extra fruit is that the precision of the specific mechanism based on Laplace distribution introduced by Dodis et al. \cite{1} can be modified via this technique such that it becomes a compact SV-consistent sampling mechanism.

\mypar{Organization.} The remainder of the paper is organized as follows. In Section 2,  we review some
notations and concepts.  In Section 3,  we introduce the concept of   compact  BCL-consistent sampling, and prove that it's sufficient to achieve differential privacy.
In Section 4, we show the concrete construction of finite-precision mechanisms, and give rigorous proofs about differential privacy and accuracy of this kind of mechanism.

}











\section{Preliminaries}


\ignore{

In this section, we present some notations and definitions that will be used later.

}

Let  $\{0, 1\}^* \overset{def}{=} \bigcup \limits_{ m \in \mathbb{Z}^+ } \{0, 1\}^m$.
We consider a distribution over $\{0, 1\}^*$ as continuously outputting (possibly correlated) bits.
We call a family $ \mathcal{R}$ of distributions over $\{0, 1\}^*$ a source.  Let $ \mathcal{U} $ denote the uniform source, i.e.,
the set containing only the distribution $U$ on $\{0, 1\}^*$ that samples each bit independently and uniformly at random.
 For a set $S$, we write $U_S$ to denote the uniform distribution over $S$.
For a distribution or a random variable $R$, let $\mathbf{r} \leftarrow R$
denote the operation of sampling a random   $\mathbf{r}$  according to $R$.
For a positive integer $n$, let $[n] \overset{def}{=}  \{1, 2, \ldots, n\}$.   Let $ \lfloor \cdot \rceil$ denote the nearest-integer function.



\ignore{

\begin{definition} (\cite{D01})
\label{DFBCL}
 {\slshape   Assume that   $ 0 \leq \gamma < 1$.  The $( \gamma, b, n)$-Bias-Control Limited (BCL) source   $\mathcal{BCL}(\gamma, b, n)$  generates
$n$ bits $r_1,  \ldots,  r_n$,  where  for all $i  \in \{1, \ldots, n\}$, the value  of $r_i$
can depend on  $r_1,  \ldots,  r_{i-1}$ in one of the following two ways:
\vspace{-2mm}
\begin{itemize}
\item[(a)]  $r_i$ is determined by $r_1,  \ldots, r_{i-1}$,  but this can  happen for at most $b$ bits.  This  rule of determining a bit is called an
  intervention.
\item[(b)]  $\frac{1-  \gamma}{2}  \leq   \Pr[r_i=1 \mid r_1  r_2 \ldots r_{i-1}]  \leq \frac{1+ \gamma}{2}$.

\end{itemize}

\noindent Every distribution over $\{0, 1\}^n$ generated from    $\mathcal{BCL}(\gamma, b, n)$  is called a  $( \gamma, b, n)$-BCL distribution  BCL$(\gamma, b, n)$.


 }


\end{definition}

}




\begin{definition}(\cite{D01})
{\slshape
\label{DFBCL}
Let   $x_1, x_2, \ldots$  be a sequence of Boolean random
variables and $ 0 \leq \delta < 1$.   A probability distribution $ X  = x_1 x_2   \ldots $  over $  \{0, 1\}^* $
is a  $ (\delta, b)$-Bias-Control Limited (BCL) distribution, denoted by $BCL(\delta, b)$,  if for all $i \in \mathbb{Z}^+$ and for every string $s$ of length $ i-1$,  the value  of $x_i$
can depend on  $x_1, x_2, \ldots,  x_{i-1}$ in one of the following two ways:

(A) $x_i$ is determined by $x_1, \ldots, x_{i-1}$, but this happens for at most $b$ bits. This process of determining
a bit is called intervention.

(B)  $  \frac{1- \delta}{2}  \leq   \Pr[x_i=1   \mid  x_1  x_2 \ldots x_{i-1} =s]  \leq \frac{ 1 + \delta}{2}$.


We define the $(\delta, b)$-Bias-Control Limited source $\mathcal{BCL}(\delta, b)$ to be the set of all  $ (\delta, b)$-BCL distributions.
For a distribution $BCL(\delta, b) \in \mathcal{BCL}(\delta, b)$,  we define $BCL(\delta, b, n)$  as the distribution $BCL(\delta, b)$
restricted to the first $n$ coins $ x_1 x_2  \ldots x_n$. We let $\mathcal{BCL}(\delta, b, n)$  be the set of all distributions $BCL(\delta, b, n)$.

}


\end{definition}
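The two rules of the definition can be illustrated with a small sampler sketch. The \texttt{interventions} and \texttt{bias} parameters below are hypothetical names for the adversary's rule-(A) and rule-(B) strategies:

```python
import random

def bcl_sample(delta, b, n, bias=None, interventions=None, rng=random):
    """Sample the first n coins of a (delta, b)-BCL distribution (sketch).

    `interventions` maps a position i to the bit value fixed there
    (rule A); at most b positions may appear. All other bits follow
    rule B: Pr[x_i = 1 | prefix] lies in [(1-delta)/2, (1+delta)/2],
    given here by the `bias` callback (None = unbiased).
    """
    interventions = interventions or {}
    assert len(interventions) <= b, "rule A allows at most b interventions"
    bits = []
    for i in range(n):
        if i in interventions:            # rule A: bit is determined
            bits.append(interventions[i])
        else:                             # rule B: bounded bias
            p1 = 0.5 if bias is None else bias(bits)
            assert (1 - delta) / 2 <= p1 <= (1 + delta) / 2
            bits.append(1 if rng.random() < p1 else 0)
    return bits

# delta = 0.2, b = 2: positions 0 and 5 are adversarially fixed.
coins = bcl_sample(0.2, 2, 10, interventions={0: 1, 5: 0})
```

Setting $b=0$ recovers an SV-style sampler, and setting $\delta=0$ (with \texttt{bias = None}) recovers a sequential bit-fixing sampler, mirroring the degenerations noted below.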











\ignore{


\begin{definition}
(\cite{D01}) {\slshape
\label{DFBCL}
  Assume that   $ 0 \leq \delta < 1$.  The $(\delta, b, n)$-Bias-Control Limited (BCL) source   $\mathcal{BCL}(\delta, b, n)$  generates
$n$ bits $x_1, x_2, \ldots,  x_n$, where for $i =1, 2, \ldots, n$, the value  of $x_i$
can depend on  $x_1, x_2, \ldots,  x_{i-1}$ in one of the following two ways:

(A) $x_i$ is determined by $x_1, \ldots, x_{i-1}$, but this happens for at most $b$ bits. This process of determining
a bit is called intervention.

(B)  $  \frac{1- \delta}{2}  \leq   \Pr[x_i=1   \mid  x_1  x_2 \ldots x_{i-1}]  \leq \frac{ 1 + \delta}{2}$.

Every distribution over $\{0, 1\}^n$ generated from $\mathcal{BCL}(\delta, b, n)$ is called  a
 $(\delta, b, n)$-BCL distribution  $BCL(\delta, b, n)$.  The $(\delta, b)$-BCL source is defined as  $ \mathcal{BCL}(\delta, b) \overset{def}{=} \bigcup \limits_{n \in \mathbb{Z}^+}   \mathcal{BCL}(\delta, b,  n)$.

}


\end{definition}


}




This source models the fact that physical sources can never produce completely perfect bits and that some of the bits generated by a physical source may be determined by the previous bits.


\begin{remark}


In particular, if $b = 0$,   $ \mathcal{BCL}(\delta, b, n)$   degenerates into $\mathcal{SV}(\delta, n)$ \cite{SV86} (see Appendix \ref{SVdefinition} for the definition); if $ \delta=0$, it yields the sequential-bit-fixing source of Lichtenstein, Linial, and Saks \cite{LLS89}.   The definitions and results in the remainder of this paper specialize to the counterparts for SV and sequential bit-fixing sources.

\end{remark}



Consider a statistical database as an array of rows from some countable set. Two databases are neighboring if they differ in exactly one row.
Let $ \mathcal{D}$ be the space of all databases. For simplicity, we only consider the query function $f:  \mathcal{D} \rightarrow \mathbb{Z}$.
Recall some concepts mentioned in \cite{DLMV12} as follows.

\begin{definition}

{\slshape  Let $\xi \geq 0$, $ \mathcal{R}$ be a source, and  $\mathcal{F} = \{f:  \mathcal{D} \rightarrow \mathbb{Z} \}$ be a
family of functions. A mechanism $M$ is $(\mathcal{R}, \xi)$-differentially private for  $\mathcal{F}$ if for all neighboring databases  $D_1, D_2 \in \mathcal{D}$, all $f \in \mathcal{F}$,  all possible outputs $z \in \mathbb{Z}$, and  all  distributions $R \in   \mathcal{R}$:
  $ \Pr \limits_{ \mathbf{r} \leftarrow R }[M(D_1, f; \mathbf{r})=z]/ \Pr  \limits_{ \mathbf{r} \leftarrow R } [M(D_2, f; \mathbf{r})=z ]  \leq 1+ \xi.$

}


\end{definition}
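The $1+\xi$ bound in this definition can be checked exactly for a standard additive mechanism under the uniform source. The sketch below uses the two-sided geometric (``discrete Laplace'') noise, a textbook instantiation rather than one of this paper's constructions; function names are ours:

```python
import math

def geometric_pmf(k, alpha):
    """Two-sided geometric ('discrete Laplace') noise:
    Pr[k] = (1 - alpha) / (1 + alpha) * alpha^|k|, for integer k."""
    return (1 - alpha) / (1 + alpha) * alpha ** abs(k)

def output_prob(true_answer, z, alpha):
    """Pr[M(D, f) = z] for the additive mechanism M = f(D) + noise."""
    return geometric_pmf(z - true_answer, alpha)

xi = 0.5
alpha = 1 / (1 + xi)   # chosen so the ratio for sensitivity-1 queries is 1 + xi
f_D1, f_D2 = 41, 42    # query values on two neighboring databases
worst = max(output_prob(f_D1, z, alpha) / output_prob(f_D2, z, alpha)
            for z in range(0, 100))
# The worst-case ratio here is exactly 1/alpha = 1 + xi.
```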




In what follows we employ the upper bound on the ratio of probabilities introduced in \cite{DLMV12}, rather than the traditional upper bound ``$e^{\xi}$'',
to make later calculations a little simpler.  This is reasonable since for $ \xi \in [0, 1]$, which is the main range of interest,
we have $ e^{\xi} \approx 1+ \xi$, and for all $ \xi \geq 0$ we have $1+ \xi \leq e^{\xi}$.


\begin{remark}

As observed by Dodis et al. \cite{DLMV12}, we assume that the randomness $  \mathbf{r}  $ given as input to the mechanism $M$ is in $ \{0, 1\}^*$, i.e., $M$ has at its disposal a
possibly infinite number of random bits, but for two neighboring databases $D_1, D_2 \in \mathcal{D}$, query $f \in \mathcal{F}$, and fixed outcome $z$, $M$ needs only a finite number of coins $n \overset{def}{=} \tilde{\tau}(D_1, D_2, f, z)$, where $\tilde{\tau}$ is a function, to determine whether $M(D_1, f) = z$ and $M(D_2, f) = z$.  Furthermore, we assume that if $ M(D_1,f; \mathbf{r}) = z$ and $ M(D_2,f; \tilde{\mathbf{r}}) = z$ where $ \mathbf{r},   \tilde{\mathbf{r}} \in \{0, 1\}^n$, then providing $M$ with extra coins does not change the output.  Namely,
for any $\mathbf{r}'$ with $\mathbf{r}$
as its prefix and $\tilde{\mathbf{r}}'$ with $\tilde{\mathbf{r}}$ as its prefix, we still have $ M(D_1, f;  \mathbf{r}')=z$ and $ M(D_2, f; \tilde{ \mathbf{r}}')=z$.
\end{remark}




\ignore{

\begin{remark}
\label{infinite-finite}
As has been observed by Dodis et al. \cite{DLMV12}, in this paper we assume that the  random coins  employed by $M$  is in $\{0, 1\}^*$, that is,  $M$ has at its disposal a
possibly infinite number of random bits,  but  for a fixed outcome $z  \in \mathbb{Z}$, $M$ needs only a finite number
of coins  to determine whether $M(D, f) = z$. More concretely, if $M(D, f; \mathbf{r}) =z$ is already determined from $\mathbf{r} \in \{0, 1\}^n$, then for any $ \mathbf{r}' \in \{0, 1\}^{n'}$ such that $\mathbf{r}$ is a prefix of $ \mathbf{r}'$,  $ M(D, f; \mathbf{r}') =z $ also holds.
Namely, providing $M$ with extra coins does not change its output.
\end{remark}

}



\begin{definition}
{\slshape  Let $ \rho >  0$,  $ \mathcal{R}$ be a source, and  $\mathcal{F} = \{f:  \mathcal{D} \rightarrow \mathbb{Z} \}$ be a
family of functions. A mechanism $M$ has $(\mathcal{R}, \rho)$-utility if for all databases $D \in   \mathcal{D}$, all queries $f \in \mathcal{F}$, and all distributions  $ R \in \mathcal{R}$: $ \mathbb{E}_{ \mathbf{r} \leftarrow  R} [ |M(D, f; \mathbf{r}) -  f(D)| ] \leq \rho.$

}

\end{definition}
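To make the definition concrete, the following Python sketch estimates the expected error of a toy rounded-Laplace mechanism under perfect (uniform) randomness; the query answer $f(D) = 7$ and the parameter $\varepsilon = 0.5$ are illustrative assumptions, not taken from the paper:

```python
import math
import random

# Monte Carlo sketch of (R, rho)-utility for the toy mechanism
# M(D, f) = (1/eps) * round(eps * (f(D) + Lap(0, 1/eps))), with the
# randomness taken to be the perfect uniform source U.
random.seed(0)
eps = 0.5

def lap_sample(scale):
    """Inverse-CDF sampling of a centered Laplace variate."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def mechanism(true_answer):
    noisy = true_answer + lap_sample(1 / eps)
    return round(eps * noisy) / eps    # snap to the grid {k/eps}

f_D = 7                                # illustrative query answer f(D)
n = 20000
est = sum(abs(mechanism(f_D) - f_D) for _ in range(n)) / n
# E|Lap(0, 1/eps)| = 1/eps, so the empirical utility should be O(1/eps).
assert 0 < est < 2 / eps
```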

One core problem in the area of differential privacy is to design accurate and private mechanisms.

\begin{definition}  {\slshape   We say a function family $\mathcal{F}$ admits accurate
and private mechanisms w.r.t. $ \mathcal{R}$ if there exists a function $g(\cdot)$ such that for every $ \xi > 0$ there exists a
mechanism $M_{(\xi)}$ that is $(\mathcal{R}, \xi)$-differentially private and has $(\mathcal{R},  g( \xi))$-utility. $ \mathcal{M} =  \{ M_{(\xi)}\}$ is called a class of accurate and private mechanisms for $\mathcal{F}$ w.r.t. $\mathcal{R}$.
}
\end{definition}

Though there already exist some infinite-precision additive mechanisms based on the Gaussian, binomial, and Laplace distributions, we must
 specify how to approximate them under finite precision in practice.  When perfect randomness
 is available, one can simply approximate a continuous sample to some ``good enough'' finite precision, a step that is omitted in most differential privacy papers. Dodis et al. built finite-precision mechanisms under the imperfect randomness $\mathcal{SV}(\delta)$ \cite{DLMV12}.

\begin{definition}

 {\slshape  For query $f: \mathcal{D} \rightarrow \mathbb{Z}$,  the sensitivity of $f$ is defined as $ \Delta f  \overset{def}{=} \max \limits_{D_1, D_2} \| f(D_1) -f(D_2)\|$ for all neighboring databases $D_1, D_2 \in \mathcal{D}$. For $d \in  \mathbb{Z}^+$,  denote $\mathcal{F}_d = \{f:  \mathcal{D} \rightarrow \mathbb{Z} \mid \Delta f \leq d\}$.
  }

\end{definition}

For clarity, in this paper we only consider the case $d = 1$. It is straightforward to extend all our results to any sensitivity
bound $d$.
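For instance, the Hamming weight query considered later in this paper has sensitivity $1$; the following Python sketch checks this exhaustively for small illustrative databases modelled as bit vectors:

```python
from itertools import product

# Illustrative check that the Hamming-weight query has sensitivity 1:
# neighboring databases (bit vectors differing in one entry) change the
# answer by at most 1, so the query lies in F_1.
def hamming_weight(db):
    return sum(db)

m = 6
sensitivity = 0
for db in product([0, 1], repeat=m):
    for i in range(m):                 # flip one entry -> a neighbor
        neighbor = list(db)
        neighbor[i] ^= 1
        diff = abs(hamming_weight(db) - hamming_weight(neighbor))
        sensitivity = max(sensitivity, diff)

assert sensitivity == 1                # Delta f = 1 for Hamming weight
```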




\begin{definition}

 {\slshape  The  Laplace (or double exponential) distribution with mean $ \mu$ and standard
deviation $\frac{\sqrt{2}}{\varepsilon}$, denoted as $ \textsf{Lap}_{\mu, \frac{1}{\varepsilon}}$,  has  probability density function
 $\textsf{PDF}^{\textsf{Lap}}_{\mu,  \frac{1}{\varepsilon}} (x)=
 \frac{\varepsilon }{2}  \cdot e^{ - \varepsilon |x- \mu|}.$  The cumulative distribution function is given by   $ \textsf{CDF}^{\textsf{Lap} }_{\mu, \frac{1}{\varepsilon} } (x)=
 \frac{1}{2} + \frac{1}{2} \cdot  \mathrm{sgn} (x- \mu)  \cdot (1-   e^{ - \varepsilon \cdot |x- \mu| })$.


\ignore{
\begin{equation}
\textsf{CDF}^{\textsf{Lap}}_{\mu,  \frac{1}{\varepsilon}} (x)   =
\left\{
\begin{aligned}
 \frac{1}{2} \cdot e^{ \varepsilon \cdot ( x-\mu) },~~~~ ~~if~  x <  \mu; \nonumber \\
  1 -  \frac{1}{2} \cdot e^{ - \varepsilon \cdot (x-\mu)  }, ~~if~  x \geq \mu. \nonumber \\
\end{aligned}
\right.
\end{equation}
}


%(Equivalently,  $ \textsf{CDF}^{\textsf{Lap} }_{\mu, \frac{1}{\varepsilon} } (x)=
% \frac{1}{2} + \frac{1}{2} \cdot  sgn (x- \mu)  \cdot (1-   e^{ - \varepsilon \cdot |x- \mu| })$.)

If a random variable $X$ has this distribution, we write $ X \sim \textsf{Lap}_{\mu, \frac{1}{\varepsilon} }$.\footnote{In this paper, we only consider the case that  $ \frac{1}{\varepsilon} \in \mathbb{Z}$.}
}

 \end{definition}
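The following Python sketch is a direct transcription of the PDF and CDF above, together with two consistency checks (for experimentation only, not part of any mechanism):

```python
import math

# Direct transcription of the PDF and CDF of Lap_{mu, 1/eps} as defined above.
def lap_pdf(x, mu, eps):
    return (eps / 2) * math.exp(-eps * abs(x - mu))

def lap_cdf(x, mu, eps):
    sgn = 1.0 if x >= mu else -1.0     # at x == mu the factor vanishes anyway
    return 0.5 + 0.5 * sgn * (1 - math.exp(-eps * abs(x - mu)))

mu, eps = 0.0, 0.5
assert abs(lap_cdf(mu, mu, eps) - 0.5) < 1e-12       # median at the mean
# The CDF is the integral of the PDF: compare against a crude Riemann sum.
dx = 1e-3
riemann = sum(lap_pdf(-40 + i * dx, mu, eps) * dx for i in range(41000))
assert abs(riemann - lap_cdf(1.0, mu, eps)) < 1e-2
```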









\section{ Compact BCL-Consistent Sampling and SV-Consistent Sampling   }

Dodis et al. \cite{DLMV12} introduced SV-consistent sampling. However, the
proof that ``SV-consistent sampling implies differential privacy'' (see Theorem 4.4 in \cite{DLMV12} for details) is complex. Moreover, its natural extension to BCL sources is not known to achieve differential privacy, since the proof of Theorem 4.4 in \cite{DLMV12} relies on the fact that the values in $T_2$ (resp. $T_1$) constitute an interval (see Definition \ref{SVCS}), which may not be the case for BCL sources.


In this section, we introduce the concept of compact $(\zeta, c)$-BCL-consistent sampling; setting $b = 0$ yields the concept of compact $(\zeta', c)$-SV-consistent sampling.
We then observe
that these concepts are sufficient for designing finite-precision differentially private and accurate mechanisms based on BCL and SV sources.


Consider a mechanism $M$ with randomness space $\{0, 1\}^*$.  Denote by $ \widetilde{T}(D_i, f, z)   \overset{def}{=} \{  \mathbf{r} \in \{0, 1\}^* \mid z = M(D_i, f; \mathbf{r}) \}$, where $i \in \{1, 2\}$, the set of all coins on which $M$ outputs $z$ when run on the neighboring database $D_i$ and query $f$. Recall that in our model only $n \overset{def}{=} \tilde{\tau} (D_1, D_2, f, z)$ coins need to be sampled to determine whether $M(D_1, f)=z$ and $M(D_2, f)=z$. Therefore, letting $T(D_i, f, z) \overset{def}{=} \{\mathbf{r} \in \{0, 1\}^n \mid z = M(D_i, f; \mathbf{r})\}$ for $i \in \{1, 2\}$, we may assume without loss of generality that $\widetilde{T}(D_i, f, z) =T(D_i, f, z)$ for $i \in \{1, 2\}$.



\ignore{
For all neighboring databases  $D_1, D_2 \in \mathcal{D}$, denote $ \widetilde{T}_i(D_i, f, z)  \overset{def}{=} \{  \mathbf{r} \in \{0, 1\}^* \mid z = M(D_i, f; \mathbf{r}) \}$, where $i \in \{1, 2\}$,   as  the set of all coins  $\mathbf{r} \in \{0, 1\}^*$ such that $M$ outputs $z$ when running on  database $D_i$, query $f$, and random coins $\mathbf{r}$.
}










For $m \in \mathbb{Z}^+$ and $\mathbf{x} =x_1, \ldots,  x_m \in \{0, 1\}^m$, let $ \textmd{SUFFIX}(\mathbf{x})  \overset{def}{=} \{  \mathbf{y} = y_1, y_2, \ldots \in \{0, 1\}^* \mid x_i = y_i~for~all~i \in [m]\}$ be the set of all bit strings having $\mathbf{x}$ as a prefix. For $n \in \mathbb{Z}^+$ with $n \geq m$, let $ \textmd{SUFFIX}(\mathbf{x}, n)
 \overset{def}{=} \textmd{SUFFIX}(\mathbf{x}) \cap \{0, 1\}^n$.
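A minimal Python model of $\textmd{SUFFIX}(\mathbf{x}, n)$ (bit strings modelled as tuples; illustrative only):

```python
from itertools import product

# All n-bit strings that have x as a prefix, mirroring SUFFIX(x, n).
def suffix(x, n):
    assert n >= len(x)
    return {x + tail for tail in product((0, 1), repeat=n - len(x))}

u = (1, 0)
s = suffix(u, 4)
assert len(s) == 2 ** (4 - len(u))     # |SUFFIX(x, n)| = 2^(n - m)
assert all(y[:len(u)] == u for y in s) # every element extends u
```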





 Now consider two neighboring databases $D_1$ and $D_2$, $f \in \mathcal{F}$, and a possible outcome $z$.
Denote  $n \overset{def}{=} \tilde{\tau} (D_1, D_2, f, z)$.
 Let $T_1 \overset{def}{=} T(D_1, f, z)$,  $T_2 \overset{def}{=} T(D_2, f, z)$, and  $\mathbf{u} \overset{def}{=}
\argmax \{|\mathbf{u}'|  \mid
 \mathbf{u}' \in \{0, 1\}^{ \leq n  } ~and ~ T_1 \cup T_2 \subseteq \textmd{SUFFIX} ( \mathbf{u}', n)\}.$  Then the ratio is
$$  \frac{  \Pr \limits_{\mathbf{r} \leftarrow BCL (\delta, b,  n)} [ \mathbf{r} \in T_1    \backslash T_2  ]}{\Pr \limits_{ \mathbf{r} \leftarrow BCL (\delta, b,  n)} [\mathbf{r} \in T_2] } = \frac{\Pr \limits_{\mathbf{r} \leftarrow BCL (\delta, b,  n)} [ \mathbf{r} \in T_1  \backslash T_2 \mid  \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u} ) ]}{\Pr \limits_{ \mathbf{r} \leftarrow BCL (\delta, b,  n)} [ \mathbf{r} \in T_2  \mid  \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u})]}.$$ Since both SV and BCL sources generate strings bit by bit,
the calculation of this ratio can be simplified.




Recall that the concepts of consistent sampling, interval mechanism, and SV-consistent sampling \cite{DLMV12} are as follows.

\begin{definition}  \label{cv}
 {\slshape  A  mechanism $M$ has $ \zeta$-consistent sampling
if for all potential  outputs $z \in \mathbb{Z}$, all queries $f \in \mathcal{F}$,  all neighboring databases  $D_1, D_2 \in \mathcal{D}$:
$ \frac{|T_1 \setminus T_2|}{| T_2|} \leq \zeta,$  where $ T_1 \overset{def}{=} T(D_1, f, z)$, $ T_2 \overset{def}{=} T(D_2, f, z) \neq \emptyset.$
}
\end{definition}
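The ratio in Definition \ref{cv} can be computed directly once the coin sets are known; the sets below are illustrative toy examples, not derived from an actual mechanism:

```python
# Checking the zeta-consistent-sampling ratio |T1 \ T2| / |T2| on toy coin sets.
def consistency_ratio(T1, T2):
    assert T2, "the definition requires T2 to be non-empty"
    return len(T1 - T2) / len(T2)

T1 = {"000", "001", "010", "011"}      # coins mapping D1 to outcome z
T2 = {"001", "010", "011", "100"}      # coins mapping D2 to outcome z
assert consistency_ratio(T1, T2) == 1 / 4  # one extra coin out of four
```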


\begin{definition}
\label{SVCS}
 {\slshape  Let $ \tilde{c} > 1$  and $\zeta > 0$.  We say a mechanism $M$ is an interval mechanism if for all $f \in \mathcal{F}$, all
 $D \in \mathcal{D}$, and all possible outcomes $z \in \mathbb{Z}$, with $T \overset{def}{=} T(D,f, z)$, the set $\{ \sum \limits_{i=1}^n r_i \cdot 2^{n-i} \mid r_1\ldots r_n \in T \}$ consists of consecutive integers.
 An {\slshape \underline{interval}} mechanism has $(\zeta, \tilde{c})$-SV-consistent sampling if it has $\zeta$-consistent sampling and for all  $f \in \mathcal{F}$, all neighboring
databases $D_1, D_2 \in \mathcal{D}$, and all possible outcomes $z \in \mathbb{Z}$, which define $T_1$, $T_2$, and $\mathbf{u}$ as above,  $
\frac{|\textmd{SUFFIX} ( \mathbf{u}, n)  | }{|T_1 \cup T_2|} \leq \tilde{c}$ holds.
 }

\end{definition}


Note that when $b \neq 0$, $\mathcal{BCL}(\delta, b, n)$ cannot generate all $n$-bit strings, so the corresponding mechanism cannot be an interval mechanism. Dodis et al. \cite{DLMV12} proved that if $M$ has $(\zeta, \tilde{c})$-SV-consistent sampling, then $M$ is $(\mathcal{SV}(\delta), \xi)$-differentially private; however, the ``interval'' property is a basic condition of that proof, so we cannot follow the same approach.
We resort to a new property instead.









\begin{definition}
 \label{BCLCSCON}
 {\slshape  Let $c$ be a constant and $\zeta > 0$. A mechanism is a compact  $(\zeta, c)$-BCL-consistent sampling mechanism  if it has $\zeta$-consistent sampling and for all queries $f \in \mathcal{F}$, all neighboring
databases $D_1$, $D_2 \in \mathcal{D}$, and all possible outcomes $z \in \mathbb{Z}$, which define $T_1, T_2$ and $\mathbf{u}$ as above,  we have $ n- |\mathbf{u}| \leq c.$   }

\end{definition}
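A small Python sketch of the compactness condition, with $\mathbf{u}$ computed as the longest common prefix of $T_1 \cup T_2$ (the coin sets below are illustrative):

```python
import os

# The mechanism is "compact" when n - |u| <= c, where u is the longest
# common prefix of all coins in T1 | T2.
def longest_common_prefix(strings):
    return os.path.commonprefix(list(strings))

n = 6
T1 = {"101100", "101101"}
T2 = {"101101", "101110"}
u = longest_common_prefix(T1 | T2)
assert u == "1011"
assert n - len(u) <= 2                 # compact with c = 2
```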



\ignore{

\begin{definition}

  {\slshape  Let $c$ be a constant and $\zeta' > 0$. A mechanism is  a compact $(\zeta', c)$-SV-consistent sampling mechanism  if it has $\zeta'$-consistent sampling and for all queries $f \in \mathcal{F}$, all neighboring
databases $D_1$, $D_2 \in \mathcal{D}$, and all possible outcomes $z \in \mathbb{Z}$, which define $T_1, T_2$ and $\mathbf{u}$ as above,  we have $ n- |\mathbf{u}| \leq c.$
 }
\end{definition}



}





\begin{theorem}
\label{CSDP}
  {\slshape  If a mechanism $M$ is a compact $(\zeta, c)$-BCL-consistent sampling mechanism for $(\delta, b)$-BCL sources, then $M$ is $(\mathcal{BCL}(\delta, b), \xi)$-differentially private,
where $ \xi \leq (\frac{1+ \delta }{1- \delta})^{c } \cdot  [\frac{1}{2}(1+ \delta)]^{-b}    \cdot  \zeta$.
In particular, for $\delta \in [0, 1)$, and $c=O(1)$, we have $\lim \limits_{\zeta \rightarrow 0}   (\frac{1+ \delta }{1- \delta})^{c} \cdot  [\frac{1}{2}(1+ \delta)]^{-b}    \cdot  \zeta   =0$. }


\end{theorem}

\begin{proof-sketch}
 Our goal is to upper bound
\begin{align*}
& \frac{  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1    \backslash T_2  ] }{\Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} \in T_2] }
  = \frac{ \sum \limits_{ \mathbf{r}' \in  T_1  \backslash T_2 }  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} = \mathbf{r}' \mid \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u})  \cap \{0, 1\}^n ] }{    \sum \limits_{ \mathbf{r}' \in  T_2 }  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r}  = \mathbf{r}' \mid \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u})  \cap \{0, 1\}^n ]}
\end{align*}


Let $\mathbf{r}=r_1 \ldots r_n$ and $\mathbf{r}'=r'_1  \ldots r'_n$ where $r_i,  r'_i  \in \{0, 1\}$ for $ i \in [n]$. Then  for any fixed $\mathbf{r}'$, $ \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r}  = \mathbf{r}' \mid \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u}) ] = \Pr \limits_{ \mathbf{r} \leftarrow  BCL(\delta, b, n)} [r_{ |\mathbf{u}|+1} = r'_{ |\mathbf{u}|+1} \mid  r_1 \ldots r_{|\mathbf{u}|    } =\mathbf{u}   ]  \times   \ldots  \times  \Pr \limits_{ \mathbf{r} \leftarrow  BCL(\delta, b, n)}  [r_n =r'_n  \mid r_1 \ldots r_{n-1} = \mathbf{u} r'_{ |\mathbf{u}|+1} \ldots r'_{n-1} ]$. Hence,    $ \Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} \in T_2]  \geq [ \frac{1}{2}(1- \delta)]^{n- | \mathbf{u}  |} \cdot |T_2|$  and $ \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1    \backslash T_2  ] \leq  [ \frac{1}{2}(1+ \delta)]^{n- | \mathbf{u}  |-b} \cdot |T_1 \backslash T_2|$, which implies the theorem. (The detailed proof is in Appendix \ref{CSDP2}.)

\end{proof-sketch}
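The two per-string bounds used above can be checked numerically for a fixed, non-adaptive $(\delta, b)$-bounded product source (an illustrative special case of a BCL source, not the full adaptive adversary):

```python
from itertools import product

# Conditioned on the common prefix u, every support string should have
# probability at least ((1-delta)/2)^(n-|u|) and at most
# ((1+delta)/2)^(n-|u|-b).  The source below is illustrative: one bit is
# adversarially fixed (the b = 1 budget) and the rest are maximally biased.
delta, b, n = 0.4, 1, 5
u = "10"                               # assumed common prefix of T1, T2
hi, lo = (1 + delta) / 2, (1 - delta) / 2

def suffix_prob(s):
    """Probability of suffix s: first bit fixed to 1, the rest biased."""
    p = 1.0
    for i, bit in enumerate(s):
        if i == 0:                     # the single fixed bit
            p *= 1.0 if bit == "1" else 0.0
        else:
            p *= hi if bit == "1" else lo
    return p

L = n - len(u)                         # coins drawn after the prefix u
suffixes = ["".join(t) for t in product("01", repeat=L)]
total = sum(suffix_prob(s) for s in suffixes)
assert abs(total - 1.0) < 1e-12        # conditional probabilities sum to one
for s in suffixes:
    cond = suffix_prob(s) / total
    if cond > 0:                       # bounds apply on the support
        assert lo ** L <= cond <= hi ** (L - b) + 1e-12
```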



\begin{remark}  When $b=0$, Theorem \ref{CSDP} holds for SV sources, while Theorem 4.4 of \cite{DLMV12} cannot be naturally extended to BCL sources, mainly because of the ``consecutive strings'' requirement in the latter. Furthermore, the proof here is simpler and more intuitive than that of \cite{DLMV12}, which spans more than two pages.

\end{remark}









\ignore{
The proof can be seen in Appendix \ref{CSDP2}.

\begin{theorem}
{\slshape  If Mechanism $M$ is a  compact $(\zeta, c)$-SV-consistent sampling mechanism, then $M$ is $(\mathcal{SV}(\delta), \xi)$-differentially private,
where $ \xi \leq (\frac{1+ \delta }{1- \delta})^c \cdot \zeta$.
In particular, for $\delta \in [0, 1)$, and $c=O(1)$, we have $\lim \limits_{\zeta \rightarrow 0}  (\frac{1+ \delta }{1- \delta})^c \cdot \zeta  =0$. }

\end{theorem}



}





\section{ Accurate and Private BCLCS Mechanisms   }


In this section, we construct finite-precision mechanisms that achieve compact $(\zeta, O(1))$-BCL-consistent sampling for sensitivity 1.  We also show that the precision of the specific Laplace-based mechanism introduced by Dodis et al. \cite{DLMV12} can be modified via this technique so that it becomes a compact SV-consistent sampling mechanism.
Then, by Theorem \ref{CSDP} (and its $b = 0$ specialization), the mechanism here and the modified mechanism of \cite{DLMV12} are $(\mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{SV}(\delta), \xi')$-differentially private, respectively, where $\xi'$ is the value of $\xi$ obtained by setting $b = 0$.
We also show that these mechanisms have good utility bounds when the random samples are generated from a BCL source.

\subsection{ Explicit Construction }
 We first construct an infinite-precision mechanism, called $M^{   \textsf{CBCLCS} }_{\varepsilon}$, and then modify it into a finite-precision one, denoted $ \overline{M}^{   \textsf{CBCLCS} }_{\varepsilon}$.
Recall that a truncation method was proposed in \cite{DLMV12} in order to obtain a finite-precision mechanism,
 which led to the non-intuitive notion of SV-consistent sampling.  However, that method cannot be transplanted to BCL sources.
 In this section, we develop another truncation technique.  The finite-precision mechanism is designed as follows.
\vspace{0.1cm}



 ${\textsf{Explicit~Construction~of~the~Mechanism}}$:
\vspace{0.1cm}

$ {\textsf{  Step~1}}$  {\slshape   On input neighboring databases  $D_1, D_2 \in \mathcal{D}$ and a query $f \in \mathcal{F}$, the infinite-precision mechanism $M^{   \textsf{CBCLCS} }_{\varepsilon}$ computes $f(D_1)$ and $f(D_2)$. Without loss of generality, assume that
$f(D_1)=y$ and $f(D_2)=y-1$. $M^{   \textsf{CBCLCS} }_{\varepsilon}(D_1, f)$ (resp.   $M^{   \textsf{CBCLCS} }_{\varepsilon}(D_2, f)$)
 outputs $ z_1 \leftarrow \frac{1}{ \varepsilon } \cdot \lfloor  \varepsilon \cdot (y +  \textsf{Lap}_{0,   \frac{1}{\varepsilon} }  ) \rceil$ (resp. $ z_2 \leftarrow \frac{1}{ \varepsilon } \cdot \lfloor  \varepsilon \cdot (y -1 +  \textsf{Lap}_{0,   \frac{1}{\varepsilon}}) \rceil$).    Denote  $Z_y$ (resp.  $Z_{y-1}$) as  the output distribution
of $ M^{   \textsf{CBCLCS} }_{\varepsilon}(D_1, f)$ (resp. $ M^{   \textsf{CBCLCS} }_{\varepsilon}(D_2, f)$) using arithmetic coding (see \cite{DLMV12}).
 }

\vspace{0.1cm}


 ${\textsf{Step~2}}$  {\slshape Suppose that $y$ is $\underline{fixed}$. Let $ s_y(k) \overset{def}{=}  \textsf{CDF}^{\textsf{Lap} }_{y, \frac{1}{\varepsilon} } ( \frac{k+ \frac{1}{2}}{  \varepsilon})$ and $s_{y-1}(k) \overset{def}{=}  \textsf{CDF}^{\textsf{Lap} }_{y-1, \frac{1}{\varepsilon} } ( \frac{k+ \frac{1}{2}}{  \varepsilon})$ for all $k \in \mathbb{Z}$.  Denote $I_y(k) = [s_{y}(k-1), s_{y}(k))$ and $I_{y-1}(k) = [s_{y-1}(k-1), s_{y-1}(k))$.
Let $ \bar{s}_{y-1}(k-1)$ (resp. $ \bar{s}_{y-1}(k)$) be  $s_{y-1}(k-1)$ (resp. $s_{y-1}(k)$), rounded  to the first $n \overset{def}{=} \tau (\min(f(D_1), f(D_2)), k/\varepsilon) = \tau (y-1, k/\varepsilon)$ bits after the binary point.
 We round $s_{y}(k-1)$ (resp.  $s_{y}(k)$) to   the first  $n \overset{def}{=} \tau (y-1, k/\varepsilon)$  bits after the binary point. Assume the binary decimal representation of the rounded $s_{y}(k-1)$ (resp.  $s_{y}(k)$) is $0.r_1 r_2 \ldots r_n$ (resp.  $0.q_1 q_2 \ldots q_n$), then let
 $ \bar{s}_{y}(k-1) = 0.r_1 r_2 \ldots r_n + 0.r'_1 r'_2 \ldots r'_n$ (resp. $\bar{s}_{y}(k) = 0.q_1 q_2 \ldots q_n + 0.q'_1 q'_2 \ldots q'_n$), where  $ r'_i = 0$ for $i \in  [n-1]$, and  $r'_n=1$ (resp. $ q'_i = 0$ for $i \in  [n-1]$ and  $q'_n=1$).
Denote $ \bar{I}_{y-1}(k) = [ \bar{s}_{y-1}(k-1), \bar{s}_{y-1}(k))$ and $\bar{I}_y(k) = [\bar{s}_{y}(k-1),\bar{s}_{y}(k))$.

 }

\vspace{0.1cm}

 ${\textsf{Step~3}}$ {\slshape  Denote $\overline{Z}_y$ (resp. $\overline{Z}_{y-1}$) as the  output distribution of  $ \overline{M}^{   \textsf{CBCLCS} }_{\varepsilon}(D_1, f)$ (resp. $ \overline{M}^{   \textsf{CBCLCS} }_{\varepsilon}(D_2, f)$), which approximates  $Z_y$ (resp. $Z_{y-1}$).
 For any sequence $\mathbf{r} = r_1,  \ldots, r_n \in \{0, 1\}^n$,  the real representation of $\mathbf{r}$ is $REAL( \mathbf{r} ) \overset{def}{=} 0.r_1 \ldots r_n \in [0, 1]$.
 We obtain distribution $\overline{Z}_y$ (resp. $\overline{Z}_{y-1}$) by
 sampling a sequence of bits $\mathbf{r} \in \{0, 1\}^n$ (resp. $\mathbf{r}' \in \{0, 1\}^n$) from a  distribution BCL$(\delta, b, n)$
and outputting  $ \frac{k_1}{\varepsilon} $ (resp. $ \frac{k_2}{\varepsilon}$) where $k_1 \in \mathbb{Z}$ (resp. $k_2 \in \mathbb{Z}$) is the unique integer such that $REAL( \mathbf{r}) \in \bar{I}_y(k_1)$ (resp. $REAL( \mathbf{r}') \in \bar{I}_{y-1}(k_2)$).

}
\rightline{$ \Box $}
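The construction above can be sketched in Python as follows. This is a simplified model under several stated assumptions: uniform bits stand in for the BCL source, each breakpoint $s_y(k)$ is rounded once at a precision tied to the local interval width (the paper is more careful about which endpoint receives which precision), and outcomes are truncated to a finite window $k \in [-K, K]$:

```python
import math
import random

random.seed(1)
eps, b, K = 0.5, 1, 60                 # illustrative parameters

def lap_cdf(x, mu):
    sgn = 1.0 if x >= mu else -1.0
    return 0.5 + 0.5 * sgn * (1 - math.exp(-eps * abs(x - mu)))

def breakpoints(y):
    """Step 2 (simplified): round s_y(k) down to n bits, then add 2^-n."""
    pts = []
    for k in range(-K, K + 1):
        s = lap_cdf((k + 0.5) / eps, y)
        width = max(s - lap_cdf((k - 0.5) / eps, y), 2.0 ** -50)
        nb = math.ceil(math.log2(1 / width)) + math.ceil(math.log2(2 ** b + 1))
        pts.append(math.floor(s * 2 ** nb) / 2 ** nb + 2.0 ** -nb)
    return pts

def mechanism(y, nbits=60):
    """Step 3: draw bits, read them as REAL(r), output the cell k/eps."""
    r = sum(random.getrandbits(1) * 2.0 ** -(i + 1) for i in range(nbits))
    for k, right in zip(range(-K, K + 1), breakpoints(y)):
        if r < right:
            return k / eps
    return K / eps

samples = [mechanism(3) for _ in range(2000)]
assert all(x * eps == int(x * eps) for x in samples)  # outputs on grid {k/eps}
assert abs(sum(samples) / len(samples) - 3) < 1       # centered near f(D) = 3
```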


From the above construction, for all  $ k \in \mathbb{Z}$, we  have
 \ignore{
 $$\frac{\Pr[M^{    \textsf{CBCLCS} }_{\varepsilon}(D_1, f) = \frac{k}{ \varepsilon }  ]}{ \Pr[M^{ \textsf{CBCLCS}}_{\varepsilon}(D_2, f) = \frac{k}{ \varepsilon }  ]  }
%=\frac{\Pr[ \frac{k- \frac{1}{2}}{\varepsilon } \leq y +   \textsf{Lap}_{0, \frac{1}{ \varepsilon}}  <   \frac{k+ \frac{1}{2}}{\varepsilon }  ]}{\Pr[\frac{k- %\frac{1}{2}}{\varepsilon } \leq y -1 +   \textsf{Lap}_{0,  \frac{1}{ \varepsilon}}  <   \frac{k+ \frac{1}{2}}{\varepsilon }   ]} \\
= \frac{\Pr[ \frac{k- \frac{1}{2}}{\varepsilon } \leq    \textsf{Lap}_{y,  \frac{1}{ \varepsilon}}  <   \frac{k+ \frac{1}{2}}{\varepsilon }  ]}{\Pr[\frac{k- \frac{1}{2}}{\varepsilon } \leq   \textsf{Lap}_{y -1,  \frac{1}{ \varepsilon}}  <   \frac{k+ \frac{1}{2}}{\varepsilon }  ]}.
$$
}
$$ \frac{\Pr[ \overline{M}^{    \textsf{CBCLCS} }_{\varepsilon}(D_1, f) = \frac{k}{ \varepsilon }  ]}{ \Pr[ \overline{M} ^{ \textsf{ CBCLCS} } _{\varepsilon}(D_2, f) = \frac{k}{ \varepsilon }  ]  }=
\frac{\Pr[  \overline{Z}_y = \frac{k}{ \varepsilon}] }{\Pr[\overline{Z}_{y-1}= \frac{k}{ \varepsilon}    ]} =  \frac{ | \bar{I}_y(k) | }{| \bar{I}_{y-1}(k) |}.  $$



\begin{remark}
 It is easy to prove that $I_{y-1}(k) \cap  I_{y}(k) \neq \emptyset$.   The set of points  $\{ s_y(k)\}_{k \in \mathbb{Z}}$  partitions the
interval $[0, 1]$ into infinitely many intervals $\{  I_y(k) \overset{def}{=} [s_y(k-1), s_y(k)) \}_{k \in   \mathbb{Z}}$.
Similarly,  the set of points $\{ s_{y-1}(k)\}_{k \in  \mathbb{Z}}$  partitions the
interval $[0, 1]$ into infinitely many intervals $\{  I_{y-1}(k) \overset{def}{=} [s_{y-1}(k-1), s_{y-1}(k)) \}_{k \in   \mathbb{Z}}$.

\end{remark}


\begin{remark}

Note that we can view $I_{y-1}(k)$ as $I_{y}(k)$ shifted slightly to the right.  Hence the truncation methods for the endpoints of $I_y(k)$ and $I_{y-1}(k)$ differ, in order to guarantee compact BCL-consistent sampling.

\ignore{
 In Step 1,  on input  $D_1, D_2 \in \mathcal{D}$,  $f \in \mathcal{F}$, $M^{\textsf{CBCLCS} }_{\varepsilon}$
should find the greater value in $\{f(D_1), f(D_2)\}$. }


\end{remark}


\ignore{
\begin{remark}
   When $b=0$, the above construction degenerates into the mechanism based on SV source. Correspondingly,
   $ \overline{M}^{   \textsf{CBCLCS} }_{\varepsilon}$ will be  replaced with
$ \overline{M}^{   \textsf{CSVCS} }_{\varepsilon}$.
\end{remark}





\begin{remark}
The rounding method in \cite{DLMV12} can be replaced with the one in Step 3 here.
\end{remark}

}




\subsection{ Concrete Results for Differential Privacy and Accuracy  }

In this section, we show that our construction satisfies compact $(\zeta, O(1))$-BCL-consistent sampling and hence is differentially private.  Then, we relate our result to that of \cite{DY14}.

The lemma below is a core step towards consistent sampling. Though it was essentially proved by Dodis et al. \cite{DLMV12}, the statement there contains some typos and the upper bound was not tight prior to our work. Hence, we revise Lemma A.1 of \cite{DLMV12} and obtain the following lemma. The revisions of that lemma and the proof of Lemma \ref{lemmaI'_y(k)} can be found in Appendices \ref{modification} and \ref{bcl-dp2}, respectively.

\begin{lemma} \label{lemmaI'_y(k)}
  {\slshape  Denote $I'_y(k)  \overset{def}{=} I_y(k) \setminus I_{y-1}(k) = [s_y(k-1), s_{y-1}(k-1))$.  For all $y, k  \in \mathbb{Z}$ and $  \varepsilon \in (0, 1)$, we have $|I'_y(k)| /|I_{y-1}(k)|  <   e  \cdot  \varepsilon.$   }
\end{lemma}
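Lemma \ref{lemmaI'_y(k)} can be spot-checked numerically; the following sketch is an illustration over a grid of parameters, not a proof:

```python
import math

# Spot-check |I'_y(k)| / |I_{y-1}(k)| < e * eps over a grid of y, k, eps.
def lap_cdf(x, mu, eps):
    sgn = 1.0 if x >= mu else -1.0
    return 0.5 + 0.5 * sgn * (1 - math.exp(-eps * abs(x - mu)))

def s(y, k, eps):
    """Breakpoint s_y(k) = CDF^Lap_{y, 1/eps}((k + 1/2) / eps)."""
    return lap_cdf((k + 0.5) / eps, y, eps)

for eps in (0.1, 0.25, 0.5):
    for y in (-2, 0, 5):
        for k in range(-30, 31):
            i_prime = s(y - 1, k - 1, eps) - s(y, k - 1, eps)  # |I'_y(k)|
            i_lower = s(y - 1, k, eps) - s(y - 1, k - 1, eps)  # |I_{y-1}(k)|
            assert i_prime / i_lower < math.e * eps
```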





\begin{theorem}
 \label{bcl-dp}

  {\slshape    Mechanism $\overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }$ is a
 compact $(   (  2^b+1 ) \cdot e \cdot \varepsilon,  \log (\frac{e \cdot (2^b+1) }{1-e^{-1}}) )$-BCL-consistent sampling mechanism for $(\delta, b)$-BCL sources.
Therefore,  $\overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }$ is
$ (\mathcal{U}, 2 e \cdot \varepsilon)$-differentially private and $(\mathcal{BCL}( \delta, b), \xi)$-differentially private  for  $ \xi =     (\frac{1+ \delta }{1- \delta})^{ \log ( \frac{e \cdot  ( 2^b+1  ) }{1-e^{-1}})  } \cdot (\frac{1+ \delta}{2})^{-b} \cdot  (  2^b+1 ) \cdot e \cdot \varepsilon$.


}

\end{theorem}

The detailed proof is in Appendix \ref{bcl-dp2}; the idea is sketched below.

Denote $I'_y(k)  \overset{def}{=} I_y(k) \setminus I_{y-1}(k) = [s_y(k-1), s_{y-1}(k-1))$.
Assume that $Y$ is a distribution BCL$(\delta, b, n)$  and $ S_0 \overset{def}{=}  \{ \mathbf{r} \in \{0,  1\}^n \mid \Pr[ Y= \mathbf{r}] \neq 0 \}$.
Let $\textmd{STR }(I, n) \overset{def}{=} \{ \textbf{r} \in \{0, 1\}^n \mid \textmd{REAL} (  \textbf{r} ) \in I   \}$ be
the set of all $n$-bit strings whose real representation lies in $I$.
Let $T_1=  \textmd{STR }(\bar{I}_y(k), n)  \cap  S_0 $ and $ T_2=  \textmd{STR }(\bar{I}_{y-1}(k), n) \cap  S_0$. Then $ T_1 \setminus T_2 = \textmd{STR }(\bar{I}'_y(k), n)  \cap  S_0$.
Let $ \tau (y-1, k/\varepsilon) \overset{def}{=} \log  \frac{1}{| I_{y-1}(k)|} + \log(2^b+1)$.
 Then we prove that for all $y, k \in   \mathbb{Z}$,   $|\textmd{STR }( \bar{I'}_y (k), \tau (y-1, k/\varepsilon)) \cap S_0 |/|\textmd{STR }( \bar{I}_{y-1}(k), \tau (y-1, k/\varepsilon)) \cap S_0 | \leq  (2^b+1)  \cdot e \cdot \varepsilon$  and $|\textmd{SUFFIX}( \mathbf{u}, \tau (y-1, k/\varepsilon) ) \cap S_0 |  \leq  e \cdot  (2^b+1) /(1-e^{-1})$, where $\mathbf{u}$ is the longest common prefix of all strings in $\textmd{STR }(\bar{I}, \tau (y-1, k/\varepsilon))$ with $ \bar{I} \overset{def}{=} \bar{I}_y(k) \cup  \bar{I}_{y-1}(k)$.  Therefore, by Definition \ref{BCLCSCON} and  Theorem \ref{CSDP}, we obtain Theorem \ref{bcl-dp}.







\begin{theorem}  \label{bcl-u}
  {\slshape  $\overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }$
has    $( \mathcal{BCL} (\delta, b), O(\frac{1}{\varepsilon} \cdot   \frac{1}{1- \delta}))$-utility and $(\mathcal{U}, O(\frac{1}{\varepsilon}))$-utility.
     }

\end{theorem}


\ignore{
Due to space limitations, the proof is postponed to Appendix \ref{utility}.
}


\vspace{1ex}

From Theorems \ref{bcl-dp} and \ref{bcl-u}, we obtain the following.

\begin{theorem}
\label{spe}
  {\slshape  There exists an explicit $(\mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{U}, \rho)$-accurate mechanism $M$ for the
 Hamming weight queries, where  $$\rho = \frac{ 2^{b \cdot \log (1+ \delta) -9}  }{ \xi } \cdot   (\frac{2}{1+ \delta})^{b+1} \cdot \frac{2^b+1}{(1+ \delta)^b} \cdot (\frac{1+ \delta}{1-\delta})^{\log  \frac{(2^b+1) e}{1-e^{-1}}   }  \cdot \frac{2^{11}}{1- (\frac{1+ \delta}{2})^2} \cdot e  >  \frac{ 2^{b \cdot \log (1+ \delta) -9}  }{ \xi }.$$   }


\end{theorem}




On the other hand, recall that Dodis and Yao \cite{DY14} proved the following.

\begin{theorem} \label{bcl-imp}
{\slshape   If $b \geq \frac{ \log ( \xi \rho ) + 9 }{ \log(1+ \delta)} = \Omega ( \frac{\log( \xi \rho)+1}{ \delta})$, then no $( \mathcal{BCL}(\delta, b), \xi )$-differentially private and $(\mathcal{U}, \rho)$-accurate mechanism for the Hamming weight queries exists.          }
\end{theorem}

Therefore, we conclude that

\begin{corollary}
\label{gen}
  {\slshape  If a mechanism $M$ is $( \mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{U}, \rho)$-accurate for the
 Hamming weight queries, then
  $ \rho > \frac{2^{b \cdot \log (1+ \delta) -9}}{\xi}$.
 }

\end{corollary}

Corollary \ref{gen} shows that any $( \mathcal{BCL}(\delta, b), \xi)$-differentially private and $(\mathcal{U}, \rho)$-accurate mechanism for Hamming weight queries must satisfy $ \rho > \frac{2^{b \cdot \log (1+ \delta) -9}}{\xi}$.  In this paper, we show an explicit construction of such mechanisms.


%  Therefore, Corollary \ref{spe} is a special case of Corollary \ref{gen}.





\begin{remark}

 If we replace the truncation method in \cite{DLMV12} with the one in Step 2 of this paper, we can prove
that the modified mechanism of \cite{DLMV12} satisfies compact $(\zeta', O(1))$-SV-consistent sampling.
Therefore, the resulting mechanism is differentially private. We can also prove that it is accurate. The proofs are similar to those of Theorems \ref{bcl-dp} and \ref{bcl-u}.

\end{remark}


\vspace{-3ex}


\subsubsection*{Acknowledgments.}

We would like to thank Yevgeniy Dodis, Adriana L\'{o}pez-Alt, and Frank McSherry for helpful discussions.   In particular, we are very grateful to Yevgeniy Dodis for proposing the project ``Do differential privacy with BCL sources for reasonably high $b$''. This work is supported by the Natural Science Foundation of China (61370126), SKLSDE-2013ZX-19, and the Fund for the CSC
Scholarship Programme (201206020063).





\ignore{

\subsubsection*{Acknowledgments.}

We would like to thank  Yevgeniy Dodis,  Adriana  L\'{o}pez-Alt,  and  Frank Mcsherry for helpful discussions.
This work is supported by the Natural Science Foundation of China
(60973105, 61370126, and 61170189), the Fund for the Doctoral
Program of Higher Education of China (20111102130003), the Fund of the State Key Laboratory of Software Development Environment (SKLSDE-2013ZX-19, SKLSDE-2012ZX-11), the Innovation Foundation of Beihang University for Ph.D. Graduates under Grant No. 2011106014,
the Fund of the Scholarship Award for Excellent Doctoral Student granted
by Ministry of Education (400618), and the Fund for CSC
Scholarship Programme (201206020063).



}





\begin{thebibliography}{999999999} % 100 is a random guess of the total number of
%references
\setlength{\labelwidth}{0.6in}
%\addtolength{\leftmargin}{10em} % sets up alignment with the following line.
%\setlength{\marginparwidth}{3in}
\setlength{\itemindent}{0in}
%\setlength{\leftmargin}{-7in}




\bibitem[ACM$^+$14]{ACMPS14} P. Austrin, K.M. Chung, M. Mahmoody, R. Pass, and K. Seth. On the Impossibility of Cryptography with Tamperable Randomness. {\slshape CRYPTO 2014},  pages 462-479.
\bibitem[ACRT99]{ACRT99} A.E. Andreev, A.E.F. Clementi, J.D.P. Rolim, and L. Trevisan. Weak
random sources, hitting sets, and BPP simulations. {\slshape  SIAM J. Comput.}, 28(6): 2103-2116,
1999.
\bibitem[Blu86]{B86} M. Blum. Independent unbiased coin flips from a correlated biased source: a finite state Markov
chain.  {\slshape Combinatorica}, 6(2): 97-108, 1986.
\bibitem[BD07]{BD07} C. Bosley and Y. Dodis. Does privacy require true randomness?  {\slshape TCC 2007},  pages 1-20.
  \bibitem[BDMN05]{BDMN05}   A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the
SuLQ framework. {\slshape  PODS 2005}, pages 128-138.
\bibitem[CFG$^+$85]{CFG$^+$85}  B. Chor, O. Goldreich, J. H{\aa}stad, J. Friedman, S. Rudich, and R. Smolensky.
 The Bit Extraction Problem
or $t$-resilient Functions.  {\slshape FOCS 1985}, pages 396-407.
\bibitem[CG88]{CG88} B. Chor and O. Goldreich. Unbiased bits from sources of weak randomness and
probabilistic communication complexity. {\slshape SIAM J. Comput.}, 17(2): 230-261, 1988.
\bibitem[DKRS06]{DKRS06} Y. Dodis, J. Katz, L. Reyzin, and A. Smith. Robust fuzzy extractors
and authenticated key agreement from close secrets.  {\slshape CRYPTO 2006},
 pages 232-250.
 \bibitem[DLMV12]{DLMV12} Y. Dodis, A.  L\'{o}pez-Alt, I. Mironov, and S.P. Vadhan.  Differential Privacy with Imperfect Randomness. {\slshape CRYPTO 2012},  pages   497-516.
 \bibitem[DMNS06]{DMNS06} C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to
sensitivity in private data analysis.  {\slshape TCC 2006},  pages 265-284.
\bibitem[Dod14]{D14}  Y. Dodis. SV-robust Mechanisms and Bias-Control Limited Source. http://www.cs.nyu.edu/courses/spring14/CSCI-GA.3220-001/lecture5.pdf
\bibitem[Dod01]{D01} Y. Dodis. New Imperfect Random Source with Applications to Coin-Flipping. {\slshape ICALP 2001}, pages 297-309.
\bibitem[DOPS04]{DOPS04} Y. Dodis, S.J. Ong, M. Prabhakaran, and A.  Sahai. On the
(im)possibility of cryptography with imperfect randomness.   {\slshape FOCS 2004}, pages 196-205.
\bibitem[DS02]{DS02} Y. Dodis and J. Spencer.  On the (non)Universality of the One-Time Pad. {\slshape  FOCS 2002}, pages 376-385.
   \bibitem[Dwo08]{Dwo08}  C. Dwork. Differential Privacy: A Survey of Results.  {\slshape TAMC 2008}, pages 1-19.
\bibitem[DY14]{DY14} Y. Dodis and Y.Q. Yao. Privacy and Imperfect Randomness. IACR Cryptology ePrint Archive 2014: 623 (2014).
\bibitem[GRS09]{GRS09}  A. Ghosh, T. Roughgarden, and M. Sundararajan. Universally utility-maximizing
privacy mechanisms.  {\slshape STOC 2009}, pages 351-360.
\bibitem[HT10]{HT10}   M. Hardt and K. Talwar. On the geometry of differential privacy. {\slshape STOC 2010}, pages 705-714.
\bibitem[LLS89]{LLS89} D. Lichtenstein, N. Linial, and M.E. Saks. Some extremal problems arising from discrete control processes.
{\slshape  Combinatorica},  9(3): 269-287, 1989.
\bibitem[MW97]{MW97} U.M. Maurer and S. Wolf. Privacy amplification secure against active adversaries. {\slshape CRYPTO 1997},  pages 307-321.
\bibitem[RVW04]{RVW04} O. Reingold, S. Vadhan, and A. Wigderson. No Deterministic Extraction from
Santha-Vazirani Sources: a Simple Proof. http://windowsontheory.org/2012/02/21/nodeterministic-extraction-from-santha-vazirani-sources-a-simple-proof/, 2004.
\bibitem[SV86]{SV86} M. Santha and U.V. Vazirani. Generating quasi-random sequences from semirandom
sources.  {\slshape  J. Comput. Syst. Sci.}, 33(1): 75-87, 1986.
\bibitem[VV85]{VV85} U.V. Vazirani and V.V. Vazirani. Random polynomial time is equal to slightly random
polynomial time. {\slshape FOCS 1985}, pages 417-428.
\bibitem[Zuc96]{Zuc96} D. Zuckerman. Simulating BPP using a general weak random source. {\slshape Algorithmica},
16(4/5): 367-391, 1996.

\end{thebibliography}

\newpage
\appendix



\section{Proof of Theorem \ref{CSDP}}    \label{CSDP2}
\begin{proof} Assume that $\frac{| T_1 \backslash T_2|}{| T_2|} \leq \zeta$ and  $n- |\mathbf{u}| \leq c$.
For any $ \mathbf{r},  \mathbf{r}'  \in \{0, 1\}^n$, denote $\mathbf{r}=r_1 \ldots r_n$ and  $\mathbf{r}'=r'_1  \ldots r'_n$ where $r_i,  r'_i  \in \{0, 1\}$ for $ i \in [n]$.    Then




\vspace{-1.5ex}



\begin{align*}
& \frac{  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1    \backslash T_2  ] }{\Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} \in T_2] } = \frac{\Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1  \backslash T_2 \mid  \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u} ) ]}{\Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_2  \mid  \mathbf{r} \in \textmd{SUFFIX} ( \mathbf{u} )] }\\&
  = \frac{ \sum \limits_{ \mathbf{r}' \in  T_1  \backslash T_2 }  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} = \mathbf{r}' \mid \mathbf{r}' \in \textmd{SUFFIX} ( \mathbf{u}) ] }{    \sum \limits_{ \mathbf{r}' \in  T_2 }  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r}  = \mathbf{r}' \mid \mathbf{r}' \in \textmd{SUFFIX} ( \mathbf{u}) ]}
  \end{align*}


For any fixed $\mathbf{r}' \in \{0, 1\}^n$, we have $ \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r}  = \mathbf{r}' \mid \mathbf{r}' \in \textmd{SUFFIX} ( \mathbf{u}) ] = \Pr \limits_{ \mathbf{r} \leftarrow  BCL(\delta, b, n)} [r_{ |\mathbf{u}|+1} = r'_{ |\mathbf{u}|+1} \mid r_1 \ldots r_{|\mathbf{u}|    } =\mathbf{u}   ]  \times   \ldots  \times  \Pr \limits_{ \mathbf{r} \leftarrow  BCL(\delta, b, n)}  [r_n =r'_n  \mid r_1 \ldots  r_{|\mathbf{u}|}r_{|\mathbf{u}|+1} \ldots  r_{n-1} = \mathbf{u} r'_{ |\mathbf{u}|+1} \ldots r'_{n-1} ]$.  Therefore,   $ \Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} \in T_2]  \geq [ \frac{1}{2}(1- \delta)]^{n- | \mathbf{u}  |} \cdot |T_2|$  and  $ \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1    \backslash T_2  ] \leq  [ \frac{1}{2}(1+ \delta)]^{n- | \mathbf{u}  |-b} \cdot |T_1 \backslash T_2|$.   Correspondingly,


 \begin{align*}
 & \frac{  \Pr \limits_{\mathbf{r} \leftarrow BCL(\delta, b, n)} [ \mathbf{r} \in T_1    \backslash T_2  ] }{\Pr \limits_{ \mathbf{r} \leftarrow BCL(\delta, b, n)} [\mathbf{r} \in T_2] }
  \leq \frac{  [ \frac{1}{2}(1+ \delta)]^{n- | \mathbf{u}  |-b}  }{ [ \frac{1}{2}(1- \delta)]^{n- | \mathbf{u}  |} }  \cdot \frac{|T_1 \backslash T_2|}{|T_2|}  \\&
  \leq (\frac{1+ \delta }{1- \delta})^{n- | \mathbf{u}  | } \cdot  [\frac{1}{2}(1+ \delta)]^{-b}    \cdot  \zeta \leq  (\frac{1+ \delta }{1- \delta})^{c} \cdot  [\frac{1}{2}(1+ \delta)]^{-b}    \cdot  \zeta
\end{align*}







\end{proof}
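As an illustrative numerical sanity check (not part of the proof), the bound can be evaluated on one concrete small instance of a BCL-style source. The particular source below is an assumption of the sketch: $n=6$ bits with the prefix $\mathbf{u}$ empty (so $c = n$), bit $0$ adversarially fixed to $1$ (the $b=1$ intervention), and every other bit independently equal to $1$ with probability $\frac{1}{2}(1+\delta)$.

```python
import itertools

# Illustrative check of the Theorem's bound on one hypothetical source
# (an assumption of this sketch, not the general BCL adversary): bit 0
# is fixed to 1 (the b = 1 intervention), the remaining bits are
# independently 1 with probability (1 + delta)/2, and u is empty.
def check_csdp_bound(n=6, b=1, delta=0.3):
    def prob(r):
        p = 1.0
        for i, bit in enumerate(r):
            if i == 0:
                p *= 1.0 if bit == 1 else 0.0          # adversarially fixed bit
            else:
                p *= (1 + delta) / 2 if bit else (1 - delta) / 2
        return p

    strings = list(itertools.product((0, 1), repeat=n))
    support = [r for r in strings if prob(r) > 0]       # S_0
    T2 = [r for r in support if sum(r) % 2 == 0]        # arbitrary subset of the support
    extra = [r for r in support if r not in T2][:10]    # T_1 \ T_2, an arbitrary choice
    lhs = sum(prob(r) for r in extra) / sum(prob(r) for r in T2)
    zeta = len(extra) / len(T2)
    rhs = ((1 + delta) / (1 - delta)) ** n * ((1 + delta) / 2) ** (-b) * zeta
    assert lhs <= rhs
    return True

assert check_csdp_bound()
```

The assertion exercises exactly the chain of inequalities in the proof: each string in the support has probability at least $[\frac{1}{2}(1-\delta)]^{n}$ and at most $[\frac{1}{2}(1+\delta)]^{n-b}$.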


\section{Proof of Theorem \ref{bcl-dp}}   \label{bcl-dp2}



\begin{lemma} \label{lemmaI'_y(k)}
  {\slshape  Denote $I'_y(k)  \overset{def}{=} I_y(k) \setminus I_{y-1}(k) = [s_y(k-1), s_{y-1}(k-1))$.  For all $y, k  \in \mathbb{Z}$ and $  \varepsilon \in (0, 1)$, we have
$$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} <   e  \cdot  \varepsilon.$$    }
\end{lemma}




\begin{proof}
Note that if $ x < y$, $\textsf{CDF}^{\textsf{Lap}}_{y,  \frac{1}{\varepsilon} } (x) < \frac{1}{2}$; otherwise,  $\textsf{CDF}^{\textsf{Lap}}_{y,  \frac{1}{\varepsilon} } (x) \geq \frac{1}{2}$.

\begin{align*}
& \frac{|I'_y(k)|}{|I_{y-1}(k)|} = \frac{s_{y-1}(k-1) - s_y(k-1)   }{ s_{y-1}(k) - s_{y-1}(k-1)  } = \frac{\textsf{CDF}^{\textsf{Lap}}_{y-1,  \frac{1}{\varepsilon} } ( \frac{k- \frac{1}{2}}{ \varepsilon} )-\textsf{CDF}^{\textsf{Lap}}_{y,  \frac{1}{\varepsilon} } ( \frac{k- \frac{1}{2}}{ \varepsilon})}{  \textsf{CDF}^{\textsf{Lap}}_{y-1,  \frac{1}{\varepsilon} } ( \frac{k + \frac{1}{2}}{ \varepsilon} )-\textsf{CDF}^{\textsf{Lap}}_{y-1,  \frac{1}{\varepsilon} } ( \frac{k- \frac{1}{2}}{ \varepsilon})}. \\
\end{align*}




We consider four cases:

$\textsf{Case 1}$: If $ \frac{1}{2} \leq s_y(k-1) < s_{y-1}(k-1) < s_{y-1}(k)$, then
$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} = \frac{e^{ \varepsilon +1} - e}{e-1}$.


$\textsf{Case 2}$:   If $ s_y(k-1) < \frac{1}{2} \leq  s_{y-1}(k-1) < s_{y-1}(k)$, then

$$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} = \frac{  1 -  \frac{1}{2} \cdot e^{ -  \varepsilon [  \frac{k -   \frac{1}{2}}{ \varepsilon }  - (y-1) ] } -  \frac{1}{2} \cdot e^{  \varepsilon (  \frac{k -   \frac{1}{2}}{ \varepsilon }   - y )} }{   1 -  \frac{1}{2} \cdot e^{ -  \varepsilon [ \frac{k +   \frac{1}{2}}{ \varepsilon }- (y-1)]} -  \{ 1 -  \frac{1}{2} \cdot e^{ -  \varepsilon [\frac{k -   \frac{1}{2}}{ \varepsilon }- (y-1)] } \} } .$$
For simplicity, denote $ v \overset{def}{=}  \frac{k -   \frac{1}{2}}{ \varepsilon }  - y$.   By the assumption, we have that $ -1 \leq    v < 0$.  Correspondingly,

\begin{align*}
& \frac{|I'_y(k)|}{|I_{y-1}(k)|} = \frac{1-  \frac{1}{2} e^{- \varepsilon (v+1)}  -   \frac{1}{2} e^{ \varepsilon v}}{ - \frac{1}{2} e^{- \varepsilon (v+1 + \frac{1}{ \varepsilon})} +  \frac{1}{2} e^{- \varepsilon (v+1)}}\\
& = \frac{ -(e^{ \varepsilon v}-1)^2 - e^{- \varepsilon}  +1    }{ -e^{ -1 - \varepsilon} + e^{- \varepsilon}} \leq  \frac{ - e^{- \varepsilon}  +1    }{ -e^{ -1 - \varepsilon} + e^{- \varepsilon}} = \frac{e^{ \varepsilon +1}-e}{e-1}. \\
\end{align*}


$\textsf{Case 3}$:    If $ s_y(k-1) <   s_{y-1}(k-1) <   \frac{1}{2} \leq  s_{y-1}(k)$, then
$$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} =  \frac{  \frac{1}{2} \cdot e^{  \varepsilon [   \frac{k -   \frac{1}{2}}{ \varepsilon }  - (y-1)]} - \frac{1}{2} \cdot e^{  \varepsilon (   \frac{k -   \frac{1}{2}}{ \varepsilon }   - y  )}   }{
  1 -  \frac{1}{2} \cdot e^{ -  \varepsilon [ \frac{k +   \frac{1}{2}}{ \varepsilon }- (y-1)] } -\frac{1}{2} \cdot e^{  \varepsilon [\frac{k -   \frac{1}{2}}{ \varepsilon }- (y-1)] }    } .$$
For simplicity, denote $ v \overset{def}{=}  \frac{k -   \frac{1}{2}}{ \varepsilon }  - y$.   By the assumption, we have that $ -1 - \frac{1}{ \varepsilon} \leq  v < -1$.  Correspondingly,


\begin{align*}
&  \frac{|I'_y(k)|}{|I_{y-1}(k)|} =  \frac{  \frac{1}{2} \cdot e^{  \varepsilon ( v +1)} - \frac{1}{2} \cdot e^{ \varepsilon  v} }{ 1 -  \frac{1}{2} \cdot e^{ -  \varepsilon ( v + \frac{1}{ \varepsilon}  +1  ) } -\frac{1}{2} \cdot e^{  \varepsilon ( v +1) }    } \\ &
~~~~~~~~~~~~= \frac{e^\varepsilon -1}{ 2 \cdot e^{- \varepsilon v} - e^{-2 \varepsilon v - \varepsilon -1} -e^\varepsilon  }  \\&
~~~~~~~~~~~~= \frac{e^\varepsilon -1 }{- ( e^{- \varepsilon v - \frac{1+ \varepsilon}{2}}  - e^{   \frac{1+ \varepsilon}{2} })^2  + e^{1+ \varepsilon} -e^\varepsilon} \\&
~~~~~~~~~~~~ < \frac{e^\varepsilon -1 }{ - ( e^{ \frac{ \varepsilon -1}{2}}  - e^{   \frac{1+ \varepsilon}{2} })^2   + e^{1+ \varepsilon} -e^\varepsilon  }\\&
~~~~~~~~~~~~= \frac{1- e^{- \varepsilon}}{  1- e^{-1}}.
\end{align*}



$\textsf{Case 4}$:   If $ s_y(k-1) <   s_{y-1}(k-1) <  s_{y-1}(k) < \frac{1}{2}$, then
$$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} = \frac{  \frac{1}{2} \cdot e^{  \varepsilon [   \frac{k -   \frac{1}{2}}{ \varepsilon }  - (y-1)]} - \frac{1}{2} \cdot e^{  \varepsilon (   \frac{k -   \frac{1}{2}}{ \varepsilon }   - y  )}   }{
    \frac{1}{2} \cdot e^{   \varepsilon [ \frac{k +   \frac{1}{2}}{ \varepsilon }- (y-1)] } -\frac{1}{2} \cdot e^{  \varepsilon [\frac{k -   \frac{1}{2}}{ \varepsilon }- (y-1)] }    }    =\frac{1-e^{- \varepsilon}}{e-1} .$$


For  $  \varepsilon  \in (0, 1)$, we have
$$ \frac{1-e^{- \varepsilon}}{e-1}  < \frac{1- e^{- \varepsilon}}{  1- e^{-1}}  = \frac{e- e^{1- \varepsilon}}{e-1} <  \frac{ e^\varepsilon \cdot (e- e^{1- \varepsilon})}{e-1} =   \frac{e^{ \varepsilon +1 } - e}{e-1} < e \cdot \varepsilon.$$

 The last inequality follows from three facts: (1)    $g_1(x)   \overset{def}{=}   \frac{e^{ x +1 } - e}{e-1} $ is a convex function; (2)   $g_2(x)   \overset{def}{=}  e \cdot x$ is a linear function; (3) $g_1(0)= g_2(0)$  and $g_1(1)= g_2(1)$. Hence $g_1(x) \leq g_2(x)$ for all $x \in [0, 1]$.



\end{proof}
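As a numerical sanity check (illustrative only, not part of the proof), the ratio can be evaluated directly from the Laplace CDF over a grid of $y$, $k$, and $\varepsilon$:

```python
import math

# Spot-check of the lemma: |I'_y(k)| / |I_{y-1}(k)| < e * eps, using
# s_y(k) = CDF^Lap_{y, 1/eps}((k + 1/2)/eps) as defined in the text.
def cdf_lap(x, y, eps):
    return 0.5 * math.exp(eps * (x - y)) if x < y else 1 - 0.5 * math.exp(-eps * (x - y))

def s(y, k, eps):
    return cdf_lap((k + 0.5) / eps, y, eps)

def check_ratio(eps):
    for y in range(-4, 5):
        for k in range(-12, 13):
            I_prime = s(y - 1, k - 1, eps) - s(y, k - 1, eps)   # |I'_y(k)|
            I_prev = s(y - 1, k, eps) - s(y - 1, k - 1, eps)    # |I_{y-1}(k)|
            assert I_prime / I_prev < math.e * eps
    return True

for eps in (0.05, 0.3, 0.7, 0.99):
    assert check_ratio(eps)
```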









 Let $I''_y(k)  \overset{def}{=} I_{y-1}(k) \setminus I_y(k) = [s_y(k), s_{y-1}(k))$.  Similarly, one can show that there exists a constant $C$ such that $ \frac{|I''_y(k)|}{|I_y(k)|}  < C \cdot  \varepsilon$  for all $y, k  \in \mathbb{Z}$ and $  \varepsilon \in (0, 1)$.
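Numerically, the same grid computation suggests that the constant $C = e$ already suffices for $I''_y(k)$; this is an observation from the sketch below, not a proved claim:

```python
import math

def cdf_lap(x, y, eps):
    return 0.5 * math.exp(eps * (x - y)) if x < y else 1 - 0.5 * math.exp(-eps * (x - y))

def s(y, k, eps):
    return cdf_lap((k + 0.5) / eps, y, eps)

# Estimate sup |I''_y(k)| / (|I_y(k)| * eps) over a grid; the sketch
# supports C = e, mirroring the bound proved for I'_y(k).
def max_second_ratio():
    worst = 0.0
    for eps in (0.05, 0.2, 0.5, 0.9):
        for y in range(-4, 5):
            for k in range(-12, 13):
                I_second = s(y - 1, k, eps) - s(y, k, eps)   # |I''_y(k)|
                I_y = s(y, k, eps) - s(y, k - 1, eps)        # |I_y(k)|
                worst = max(worst, I_second / (I_y * eps))
    return worst

assert max_second_ratio() < math.e
```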


\begin{lemma}
\label{f-inf}   {\slshape
Let $ \tau (y-1, k/\varepsilon) \overset{def}{=} \log  \frac{1}{| I_{y-1}(k)|} + \log(2^b+1)$.
  For all $y, k  \in \mathbb{Z}$,
we have
\vspace{-0.5ex}
\begin{itemize}
\item [(1)] $ | \bar{I}'_y(k) | \leq | I'_y(k)|, $
\item [(2)]  $|I_{y-1}(k)|+ 2^{-\tau (y-1, k/\varepsilon)}     \geq  |  \bar{I}_{y-1}(k)| \geq |I_{y-1}(k)|- 2^{-\tau (y-1, k/\varepsilon)},  $
\item [(3)] $ |I_{y}(k)| + 2^{-\tau (y-1, k/\varepsilon)}  \geq |  \bar{I}_{y}(k)| \geq |I_{y}(k)|- 2^{-\tau (y-1, k/\varepsilon)}.  $
\end{itemize}
}
\end{lemma}

\begin{proof}
 ~\\
 (1) Since $s_{y-1}(k-1) \geq  \bar{s}_{y-1}(k-1)$ and $\bar{s} _{y}(k-1) \geq s _{y}(k-1)$, we get $    | \bar{I}'_y(k) | \leq | I'_y(k)|$. \\
(2) On the one hand, since $  \bar{s}_{y-1}(k) \geq s _{y-1}(k) - 2^{-\tau (y-1, k/\varepsilon)}$ and $  \bar{s}_{y-1}(k-1) \leq s _{y-1}(k-1 )$,  we have  $ |  \bar{I}_{y-1}(k)| \geq |I_{y-1}(k)|- 2^{-\tau (y-1, k/\varepsilon)}. $   On the other hand,
since $   s _{y-1}(k) \geq  \bar{s}_{y-1}(k)$ and $ s _{y-1}(k-1)  \leq \bar{s}_{y-1}(k-1) + 2^{-\tau (y-1, k/\varepsilon)}$, we have $ |I_{y-1}(k)|+ 2^{-\tau (y-1, k/\varepsilon)} \geq   |  \bar{I}_{y-1}(k)|$. Hence, Lemma  \ref{f-inf} (2) holds.  \\
(3) On the one hand,
 since  $ \bar{s}_{y}(k) \geq s _{y}(k) $ and $  \bar{s}_{y}(k-1) \leq s _{y}(k-1) + 2^{-\tau (y-1, k/\varepsilon)}$,  we have  $ |  \bar{I}_{y}(k)| \geq |I_{y}(k)|- 2^{-\tau (y-1, k/\varepsilon)}. $ On the other hand,
since $ \bar{s}_{y}(k) \leq   s _{y}(k) + 2^{-\tau (y-1, k/\varepsilon)}$ and $  \bar{s}_{y}(k-1) \geq s_y(k-1)$, we have $|  \bar{I}_{y}(k)| \leq   |I_{y}(k)| + 2^{-\tau (y-1, k/\varepsilon)}$. Hence, Lemma  \ref{f-inf} (3) holds.


\end{proof}
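The inequalities of this lemma are consistent with rounding each $s_y(\cdot)$ endpoint up and each $s_{y-1}(\cdot)$ endpoint down to $\tau$ binary digits. The sketch below demonstrates this on a grid; the integer rounding of $\tau$ and the up/down rounding rule are assumptions of the sketch, not taken from the text:

```python
import math

def cdf_lap(x, y, eps):
    return 0.5 * math.exp(eps * (x - y)) if x < y else 1 - 0.5 * math.exp(-eps * (x - y))

def s(y, k, eps):
    return cdf_lap((k + 0.5) / eps, y, eps)

def check_truncation(b=2):
    for eps in (0.1, 0.5, 0.9):
        for y in range(-3, 4):
            for k in range(-8, 9):
                I_prev = s(y - 1, k, eps) - s(y - 1, k - 1, eps)   # |I_{y-1}(k)|
                I_y = s(y, k, eps) - s(y, k - 1, eps)              # |I_y(k)|
                tau = math.floor(math.log2((2 ** b + 1) / I_prev)) # integer tau (assumption)
                u = 2.0 ** (-tau)
                # round the s_y endpoints up, the s_{y-1} endpoints down, to tau bits
                by_km1 = math.ceil(s(y, k - 1, eps) / u) * u
                by_k = math.ceil(s(y, k, eps) / u) * u
                by1_km1 = math.floor(s(y - 1, k - 1, eps) / u) * u
                by1_k = math.floor(s(y - 1, k, eps) / u) * u
                # (1): |bar I'_y(k)| <= |I'_y(k)|
                assert by1_km1 - by_km1 <= s(y - 1, k - 1, eps) - s(y, k - 1, eps) + 1e-12
                # (2), (3): rounded lengths are within 2^{-tau} of the true lengths
                assert abs((by1_k - by1_km1) - I_prev) <= u + 1e-12
                assert abs((by_k - by_km1) - I_y) <= u + 1e-12
    return True

assert check_truncation()
```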







Assume that $Y$ is a BCL$(\delta, b, n)$ distribution and $ S_0 \overset{def}{=} \{ \mathbf{r} \in \{0,  1\}^n \mid \Pr[ Y= \mathbf{r}] \neq 0 \}$.
Denote by $\textmd{STR }(I, n) \overset{def}{=} \{ \textbf{r} \in \{0, 1\}^n \mid \textmd{REAL} (  \textbf{r} ) \in I   \}$
the set of all $n$-bit strings whose real representation lies in $I$.
Let $T_1=  \textmd{STR }(\bar{I}_y(k), n)  \cap  S_0 $ and $ T_2=  \textmd{STR }(\bar{I}_{y-1}(k), n) \cap  S_0$. Then $ T_1 \setminus T_2 = \textmd{STR }(\bar{I}'_y(k), n)  \cap  S_0$. By induction, it can be easily seen that
$  2^{n-b} \leq  | S_0|   \leq 2^n.  $











\begin{lemma}
\label{ori}
{\slshape  Let $ \tau (y-1, k/\varepsilon) \overset{def}{=} \log  \frac{1}{| I_{y-1}(k)|} + \log(2^b+1)$.  Then for all $y, k  \in \mathbb{Z}$,

\vspace{-0.5ex}
\begin{itemize}
\item [(1)]  $ |\textmd{STR }( \bar{I'}_y (k), \tau (y-1, k/\varepsilon) ) \cap S_0|  \leq   (2^{b} +1) \cdot e \cdot \varepsilon,$
\item [(2)]  $|\textmd{STR }( \bar{I}_{y-1}(k), \tau (y-1, k/\varepsilon)) \cap S_0 | \geq 1.$
\end{itemize}
}

\end{lemma}



\begin{proof}
~\\
(1) Let  $n \overset{def}{=} \tau (y-1, k/\varepsilon)$ for shorthand.
Consider $ |\bar{I'}_y (k)| $ as the probability of sampling a sequence $\mathbf{r}$ from $U_{S_0}$  such that $\mathbf{r} \in
\textmd{STR }( \bar{I'}_y (k), n) \cap S_0,$  where $  2^{n-b} \leq  |S_0|   \leq 2^n$.  Hence,
$|\bar{I'}_y (k) | = \sum \limits_{ \mathbf{r} \in STR(\bar{I'}_y (k),   n) \cap S_0}  \frac{1}{ |S_0| }    \geq   \sum \limits_{ \mathbf{r} \in STR(\bar{I'}_y (k),   n) \cap S_0} \frac{1}{2^{n}}= | STR(\bar{I'}_y (k),   n) \cap S_0| \cdot  \frac{1}{2^{n}}.$
Therefore,
$|\textmd{STR }(\bar{I'}_y (k), n) \cap S_0 | \leq  2^{n} \cdot |\bar{I'}_y (k) | \leq 2^{n}  \cdot  |I'_y(k)|  =(2^b +1 ) \cdot \frac{  |I'_y(k)| }{ |I_{y-1}(k)| } \leq  ( 2^b +1  ) \cdot e \cdot \varepsilon.$  \\
(2) Since $ |\bar{I}_{y-1} (k) | =  \sum \limits_{ \mathbf{r} \in \textmd{STR }(\bar{I}_{y-1} (k),   n) \cap S_0 }  \frac{1}{|S_0| }
\leq  \sum \limits_{ \mathbf{r} \in \textmd{STR }(\bar{I}_{y-1} (k),   n) \cap S_0} ( \frac{1}{2})^{n-b} = | \textmd{STR }(\bar{I}_{y-1} (k),   n) \cap S_0| \cdot  ( \frac{1}{2})^{n-b}, $
we get
$$|\textmd{STR }(\bar{I}_{y-1} (k), n) \cap S_0 | \geq   2^{n-b}   \cdot |\bar{I}_{y-1} (k) | \geq  2^{n-b} \cdot ( |I_{y-1} (k)| -2^{-n} ) =1.$$




\end{proof}
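For the degenerate case $b=0$, where $S_0$ can be taken to be all $\tau$-bit strings, both counting bounds of the lemma can be checked by counting grid points of spacing $2^{-\tau}$. The integer choice of $\tau$ and the rounding directions (each $s_y(\cdot)$ up, each $s_{y-1}(\cdot)$ down) are assumptions of this sketch:

```python
import math

def cdf_lap(x, y, eps):
    return 0.5 * math.exp(eps * (x - y)) if x < y else 1 - 0.5 * math.exp(-eps * (x - y))

def s(y, k, eps):
    return cdf_lap((k + 0.5) / eps, y, eps)

def check_counts(b=0):
    for eps in (0.05, 0.3, 0.8):
        for y in range(-3, 4):
            for k in range(-8, 9):
                I_prev = s(y - 1, k, eps) - s(y - 1, k - 1, eps)
                tau = math.floor(math.log2((2 ** b + 1) / I_prev))
                # endpoint indices on the 2^{-tau} grid (s_y rounded up, s_{y-1} down)
                a = math.ceil(s(y, k - 1, eps) * 2 ** tau)       # bar s_y(k-1)
                c = math.floor(s(y - 1, k - 1, eps) * 2 ** tau)  # bar s_{y-1}(k-1)
                d = math.floor(s(y - 1, k, eps) * 2 ** tau)      # bar s_{y-1}(k)
                # (1): few grid strings fall in bar I'_y(k) ...
                assert max(0, c - a) <= (2 ** b + 1) * math.e * eps
                # (2): ... but bar I_{y-1}(k) always contains at least one
                assert d - c >= 1
    return True

assert check_counts()
```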





\begin{remark}
  We can guarantee  that  $n$ is  legal, in the
sense that the modification of the endpoints of  $ I_{y-1} (k)$  and $I_{y} (k) $ with respect to $n$ does not cause intervals to ``disappear'' or consecutive intervals to ``overlap''.
\end{remark}


From Lemma  \ref{ori}, we obtain the following theorem.


\begin{theorem}
\label{CS}
  {\slshape    Denote $ \tau (y-1, k/\varepsilon) \overset{def}{=} \log  \frac{1}{| I_{y-1}(k)|} + \log(2^b+1)$.     For all $y, k  \in \mathbb{Z}$,
we have $$ \frac{|\textmd{STR }( \bar{I'}_y (k), \tau (y-1, k/\varepsilon)) \cap S_0 |}{|\textmd{STR }( \bar{I}_{y-1}(k), \tau (y-1, k/\varepsilon)) \cap S_0 |} \leq  (2^b+1)  \cdot e \cdot \varepsilon. $$   }

\end{theorem}


\begin{theorem}
\label{BCLCS}
{\slshape  Denote $ \tau (y-1, k/\varepsilon) \overset{def}{=} \log  \frac{1}{| I_{y-1}(k)|} + \log(2^b+1)$.  Let $\textbf{u}$ be the longest common prefix of all strings in  $ \bar{I} \overset{def}{=} \bar{I}_y(k) \cup  \bar{I}_{y-1}(k)$.  Then
$$|\textmd{SUFFIX}( \mathbf{u}, \tau (y-1, k/\varepsilon) ) \cap S_0 |  \leq   \frac{e \cdot  (2^b+1) }{1-e^{-1}}.$$    }

\end{theorem}




\begin{proof}  For simplicity, let  $n \overset{def}{=} \tau (y-1, k/\varepsilon)$.
Let  $\textbf{u}'$ be the longest common prefix of all strings in $I \overset{def}{=} I_y(k) \cup  I_{y-1}(k)$.
Then we have $ |\textmd{SUFFIX}(\textbf{u}, n)| \leq |\textmd{SUFFIX}(\textbf{u}', n)|$.
We bound  $|\textmd{SUFFIX}(\textbf{u}, n)| $ by bounding the number of $n$-bit strings to the left or right of $\bar{I}$ (depending on which endpoint of the interval [0, 1] is closer to $I$).

Now we calculate the size of the interval $[s_y(k-1), 1]$ (resp. $[0, s_{y-1}(k)]$), which is an approximation of the size of $[\bar{s}_y(k-1), 1]$ (resp.
$[0, \bar{s}_{y-1}(k)]$). Then we can upper bound how many $n$-bit strings there are in the interval  $[\bar{s}_y(k-1), 1]$ (resp. $[0, \bar{s}_{y-1}(k)]$).  Let $S \overset{def}{=} [s_y(k-1), 1]$.

Recall that $ s_y(k) \overset{def}{=}  \textsf{CDF}^{\textsf{Lap} }_{y, \frac{1}{\varepsilon} } ( \frac{k+ \frac{1}{2}}{  \varepsilon})$  for all $k \in \mathbb{Z}$   and
  \begin{equation}
\textsf{CDF}^{\textsf{Lap}}_{y,  \frac{1}{\varepsilon} } (x) =
\left\{
\begin{aligned}
 \frac{1}{2} \cdot e^{  \varepsilon ( x- y )}, ~~~~ ~~if~  x < y; \nonumber \\
  1 -  \frac{1}{2} \cdot e^{ -  \varepsilon (x- y) }, ~~if~  x \geq y. \nonumber \\
\end{aligned}
\right.
\end{equation}

Note that if $ x < y$, $\textsf{CDF}^{\textsf{Lap}}_{y,  \frac{1}{\varepsilon} } (x) < \frac{1}{2}$; otherwise,  $\textsf{CDF}^{\textsf{Lap}}_{y,  \frac{1}{\varepsilon} } (x) \geq \frac{1}{2}$.

 Recall that $ I'_y(k)=[s_y(k-1), s_{y-1}(k-1))$ and $ I'_{y+1}(k)=[s_{y+1}(k-1), s_{y}(k-1))$.

  For simplicity, denote  $v \overset{def}{=} \frac{k   - \frac{1}{2}}{ \varepsilon} -y$.  We consider four cases.

$\textsf{Case~1}$:  Assume that $ \frac{1}{2}  \leq s_{y+1}(k-1) < s_y(k-1) <    s_{y-1}(k-1)$.  Then $v \geq 1$.
$$ \frac{|I'_y(k)|}{|I'_{y+1}(k)|}  = \frac{ 1 -  \frac{1}{2} \cdot e^{ -  \varepsilon [\frac{k   - \frac{1}{2}}{ \varepsilon}   - (y-1)] } -  1 +  \frac{1}{2} \cdot e^{ -  \varepsilon (\frac{k   - \frac{1}{2}}{ \varepsilon}   - y) } }{ 1 -  \frac{1}{2} \cdot e^{ -  \varepsilon ( \frac{k   - \frac{1}{2}}{ \varepsilon}   - y) } - 1 +  \frac{1}{2} \cdot e^{ -  \varepsilon  [\frac{k   - \frac{1}{2}}{ \varepsilon}   - (y+1)]   } } = \frac{1}{e^\varepsilon}.$$

$\textsf{Case~2}$:   Assume that $   s_{y+1}(k-1) <  \frac{1}{2} \leq s_y(k-1) <    s_{y-1}(k-1)$.  Then $0 \leq  v < 1$.
$$ \frac{|I'_y(k)|}{|I'_{y+1}(k)|} =  \frac{e^{- \varepsilon v } - e^{- \varepsilon(v +1 )}  }{ 2 - e^{- \varepsilon v}  - e^{\varepsilon (v -1)}  }= \frac{1 - e^{- \varepsilon}}{ - e^{- \varepsilon} (e^{ \varepsilon v} - e^\varepsilon)^2 + e^\varepsilon -1}.$$
Hence,   $$ \frac{1}{e^\varepsilon} <   \frac{|I'_y(k)|}{|I'_{y+1}(k)|}  \leq 1.$$

$\textsf{Case~3}$:   Assume that $   s_{y+1}(k-1) <  s_y(k-1) <  \frac{1}{2} \leq  s_{y-1}(k-1)$.   Then    $ -1 \leq v < 0$.

\begin{align*}
& \frac{|I'_y(k)|}{|I'_{y+1}(k)|} =  \frac{  1 -  \frac{1}{2} \cdot e^{ -  \varepsilon [   \frac{k   - \frac{1}{2}}{ \varepsilon}   - (y-1)] }  -  \frac{1}{2} \cdot e^{  \varepsilon ( \frac{k   - \frac{1}{2}}{ \varepsilon}  - y )}   }{\frac{1}{2} \cdot e^{  \varepsilon ( \frac{k   - \frac{1}{2}}{ \varepsilon}  - y )}-\frac{1}{2} \cdot e^{  \varepsilon [\frac{k   - \frac{1}{2}}{ \varepsilon}    -  (y+1) ]}}\\
& =  \frac{  1 -  \frac{1}{2} \cdot e^{ -  \varepsilon (  v+1) }  -  \frac{1}{2} \cdot e^{  \varepsilon v}   }{\frac{1}{2} \cdot e^{  \varepsilon v}-\frac{1}{2} \cdot e^{  \varepsilon (v-1 )}}\\
&= \frac{  - ( e^{- \varepsilon v - \frac{\varepsilon}{2}}  - e^{ \frac{\varepsilon}{2}  } )^2 + e^\varepsilon -1}{1-e^{- \varepsilon}}. \\
\end{align*}
Therefore,  $$   1 < \frac{|I'_y(k)|}{|I'_{y+1}(k)|} \leq  e^\varepsilon.$$

$\textsf{Case~4}$:   Assume that $   s_{y+1}(k-1) <  s_y(k-1) < s_{y-1}(k-1) <  \frac{1}{2}$.   Then  $ v < -1$.

\begin{align*}
&  \frac{|I'_y(k)|}{|I'_{y+1}(k)|} =   \frac{  \frac{1}{2} \cdot e^{  \varepsilon [ \frac{k   - \frac{1}{2}}{ \varepsilon}  - (y-1) ]}    -  \frac{1}{2} \cdot e^{  \varepsilon ( \frac{k   - \frac{1}{2}}{ \varepsilon}  - y )}   }{\frac{1}{2} \cdot e^{  \varepsilon ( \frac{k   - \frac{1}{2}}{ \varepsilon}  - y )}-\frac{1}{2} \cdot e^{  \varepsilon [\frac{k   - \frac{1}{2}}{ \varepsilon}    -  (y+1) ]}}  =  \frac{    \frac{1}{2} \cdot e^{  \varepsilon ( v+1 )}   -  \frac{1}{2} \cdot e^{  \varepsilon v}   }{\frac{1}{2} \cdot e^{  \varepsilon v}-\frac{1}{2} \cdot e^{  \varepsilon (v-1 )}} = e^\varepsilon.\\
\end{align*}

We continue the analysis only for $\textsf{Case~1}$; the other cases are analogous.




Since  $ I'_y(k)$ and   $ I'_{y+1}(k)$  are  consecutive  intervals for all $y \in \mathbb{Z}$,
we have
$$|S|= \sum \limits_{j=- \infty}^y | I'_j(k) | \leq \sum \limits_{j=- \infty}^y | I'_y(k) | (e^{-\varepsilon })^{y-j} = \frac{| I'_y(k) | }{1- e^{-\varepsilon}} \leq \frac{| I'_y(k) |  }{ (1- \frac{1}{e} ) \cdot \varepsilon}.$$


The last inequality follows from three facts: (1)  $g_1(x) \overset{def}{=} 1 -  e^{-x} $ is a concave function; (2) $g_2(x) \overset{def}{=} (1- \frac{1}{e} ) \cdot x$ is a linear function; (3) $g_1(0) =g_2(0)$ and $ g_1(1) =g_2(1)$. Hence $g_1(x) \geq g_2(x)$ for all $x \in [0, 1]$.






Let $ \bar{S} \overset{def}{=} [\bar{s}_y(k-1), 1]$. Then  $| \bar{S}| \leq |S| \leq \frac{| I'_y(k) |  }{ (1- \frac{1}{e} ) \cdot \varepsilon}$.

On the other hand,
 $| \bar{S}|$ can be viewed as the probability of sampling a sequence $ \mathbf{r}$ from the uniform distribution $U_{S_0 }$  such that
$ \mathbf{r}  \in \textmd{STR }( \bar{S}, n ) \cap S_0$, where $  2^{n-b}  \leq  |S_0| \leq 2^{n}$.
Therefore,
$$ | \bar{S}| = \sum \limits_{\mathbf{r} \in \textmd{STR }(\bar{S}, n) \cap S_0 }  \frac{1}{ |S_0|  }  \geq  \sum \limits_{\mathbf{r} \in \textmd{STR }(\bar{S}, n) \cap S_0 } ( \frac{1}{2})^n = | \textmd{STR } (\bar{S}, n) \cap S_0| \cdot ( \frac{1}{2})^n.$$
Correspondingly, $$ | \textmd{STR }(\bar{S}, n) \cap S_0| \leq   2^n \cdot | \bar{S}| \leq 2^n \cdot  \frac{| I'_y(k) |  }{  (1- \frac{1}{e} ) \cdot \varepsilon} = \frac{| I'_y(k) |}{ |I_{y-1}(k) |} \cdot \frac{(2^b+1)}{ (1- \frac{1}{e} ) \cdot \varepsilon} \leq  \frac{e \cdot  (2^b+1) }{1-e^{-1}}.$$
Hence, $$|\textmd{SUFFIX}( \mathbf{u}, n ) \cap S_0| \leq  | \textmd{STR }(\bar{S}, n) \cap S_0|  \leq    \frac{e \cdot  (2^b+1) }{1-e^{-1}}.$$
\end{proof}
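Again for $b=0$, Case 1 of this proof ($s_y(k-1) \geq \frac{1}{2}$, so the tail toward $1$ is the relevant one) can be checked by counting $\tau$-bit grid points in $[\bar{s}_y(k-1), 1]$; the rounding conventions below are assumptions of the sketch:

```python
import math

def cdf_lap(x, y, eps):
    return 0.5 * math.exp(eps * (x - y)) if x < y else 1 - 0.5 * math.exp(-eps * (x - y))

def s(y, k, eps):
    return cdf_lap((k + 0.5) / eps, y, eps)

def check_tail(b=0):
    bound = math.e * (2 ** b + 1) / (1 - math.exp(-1))
    for eps in (0.1, 0.4, 0.8):
        for y in range(-3, 4):
            for k in range(-8, 9):
                if s(y, k - 1, eps) < 0.5:
                    continue                      # keep to Case 1 of the proof
                I_prev = s(y - 1, k, eps) - s(y - 1, k - 1, eps)
                tau = math.floor(math.log2((2 ** b + 1) / I_prev))
                # tau-bit strings whose real value lies in [bar s_y(k-1), 1]
                count = 2 ** tau - math.ceil(s(y, k - 1, eps) * 2 ** tau)
                assert count <= bound
    return True

assert check_tail()
```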



Combining  Theorems \ref{CSDP}, \ref{CS}, and \ref{BCLCS}, we get Theorem \ref{bcl-dp}.



\section{Proof of Theorem \ref{bcl-u}} \label{utility}




\begin{proof} We only need to  prove that for all neighboring databases $D_1, D_2 \in   \mathcal{D}$, all  $f \in \mathcal{F}$, and all  $ BCL(\delta, b)  \in \mathcal{BCL} (\delta, b)$,
$ \mathbb{E}_{ \mathbf{r} \leftarrow  BCL(\delta, b)} [ | \overline{M} ^{  \textsf{CBCLCS} }_{\varepsilon } (D_1, f; \textbf{r}) -  f(D_1)| ]$ and  $ \mathbb{E}_{ \mathbf{r} \leftarrow  BCL(\delta, b)} [ | \overline{M} ^{  \textsf{CBCLCS} }_{\varepsilon } (D_2, f; \textbf{r}) -  f(D_2)| ]$ are both upper bounded by $O(\frac{1}{\varepsilon} \cdot   \frac{1}{1- \delta})$. Without loss of generality, assume that $f(D_1)=y$ and $f(D_2)=y-1$. Then
$ \mathbb{E}_{ \textbf{r} \leftarrow BCL(\delta, b) } [|  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D_1, f; \textbf{r})-y|]
 = \sum \limits_{k = - \infty}^{\infty} \Pr \limits_{ \textbf{r} \leftarrow  BCL (\delta, b) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D_1, f; \textbf{r}) = \frac{k}{ \varepsilon}] \cdot | \frac{k}{ \varepsilon} - y|.$



Let $\textbf{a}$ be the longest common  prefix of all strings in $\textmd{STR } (\bar{I}_y(k), \tau(y-1, k/\varepsilon))$. Denote $I_0  \overset{def}{=} \textmd{SUFFIX} (\textbf{a}0, \tau(y-1, k/\varepsilon)) \cap \textmd{STR} (\bar{I}_y(k), \tau(y-1, k/\varepsilon))$ and $I_1 \overset{def}{=} \textmd{SUFFIX} (\textbf{a}1, \tau(y-1, k/\varepsilon)) \cap \textmd{STR} (\bar{I}_y(k),\tau(y-1, k/\varepsilon))$. Thus, $I_0 \cup I_1 =  \textmd{STR} (\bar{I}_y(k),\tau(y-1, k/\varepsilon))$.
Correspondingly,  we have
$$ \Pr \limits_{ \textbf{r} \leftarrow BCL(\delta, b) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D_1, f; \textbf{r}) = \frac{k}{\varepsilon}] \leq ( \frac{1+ \delta}{2})^{|\textbf{a}0 |}  + ( \frac{1+ \delta}{2})^{|\textbf{a}1 |} \leq 2 \cdot ( \frac{1+ \delta}{2})^{  \log ( \frac{1}{ | \bar{I}_y(k)|}  )}.$$

Similarly, we can conclude that
$$ \Pr \limits_{ \textbf{r} \leftarrow BCL(\delta, b) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D_2, f; \textbf{r}) = \frac{k}{\varepsilon}]  \leq 2 \cdot ( \frac{1+ \delta}{2})^{  \log ( \frac{1}{ | \bar{I}_{y-1}(k)|}  )}.$$



\begin{claim}

For all $y, k \in \mathbb{Z}$, we have $|I_y(k)|  \leq \frac{1}{2} \cdot  e^{- \frac{1}{2}} \cdot  (e-1) \cdot e^{-|k- \varepsilon y|}$.

\end{claim}

\begin{proof}  We consider three cases.

\textsf{Case~1}: Assume that $ \frac{k - \frac{1}{2}}{\varepsilon} - y \geq 0$  and  $ \frac{k + \frac{1}{2}}{\varepsilon} - y \geq 0$.
Then $$ | I_y(k) | = 1 - \frac{1}{2} \cdot  e^{ - \varepsilon ( \frac{k + \frac{1}{2}}{\varepsilon} - y  )   }  - [ 1 - \frac{1}{2} \cdot  e^{ - \varepsilon ( \frac{k - \frac{1}{2}}{\varepsilon} - y  ) } ]  = \frac{1}{2} \cdot  e^{- \frac{1}{2}} \cdot  (e-1) \cdot e^{-|k- \varepsilon y|}.$$

\textsf{Case~2}: Assume that $ \frac{k - \frac{1}{2}}{\varepsilon} - y  < 0$  and  $ \frac{k + \frac{1}{2}}{\varepsilon} - y \geq 0$.
From the fact that $ 1 - \frac{1}{2}  x  \leq  \frac{1}{2}   \cdot \frac{1}{x}$ for all $x > 0$, we obtain
 $$|I_y(k)| =  1 - \frac{1}{2} \cdot  e^{ - \varepsilon ( \frac{k + \frac{1}{2}}{\varepsilon} - y  )   } -  \frac{1}{2} \cdot  e^{  \varepsilon ( \frac{k - \frac{1}{2}}{\varepsilon} - y  )   }   \leq \frac{1}{2} \cdot  e^{- \frac{1}{2}} \cdot  (e-1) \cdot e^{-|k- \varepsilon y|}.$$

\textsf{Case~3}: Assume that $ \frac{k - \frac{1}{2}}{\varepsilon} - y  <  0$  and  $ \frac{k + \frac{1}{2}}{\varepsilon} - y  < 0$.
Then $ | I_y(k)|  = \frac{1}{2} \cdot  e^{- \frac{1}{2}} \cdot  (e-1) \cdot e^{-|k- \varepsilon y|}$.

\end{proof}
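The claim admits a quick numerical confirmation over a grid (illustrative only, not part of the proof):

```python
import math

def cdf_lap(x, y, eps):
    return 0.5 * math.exp(eps * (x - y)) if x < y else 1 - 0.5 * math.exp(-eps * (x - y))

def s(y, k, eps):
    return cdf_lap((k + 0.5) / eps, y, eps)

# Check |I_y(k)| <= (1/2) e^{-1/2} (e - 1) e^{-|k - eps*y|} on a grid.
def check_claim():
    C = 0.5 * math.exp(-0.5) * (math.e - 1)
    for eps in (0.1, 0.5, 0.9):
        for y in range(-4, 5):
            for k in range(-15, 16):
                I_y = s(y, k, eps) - s(y, k - 1, eps)
                assert I_y <= C * math.exp(-abs(k - eps * y)) + 1e-12
    return True

assert check_claim()
```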

 By Lemma \ref{f-inf},  $ |\bar{I}_y(k)| \leq |I_y(k)| + 2^{-\tau(y-1, k/\varepsilon)} = |I_y(k)| +  \frac{1}{2^b+1}   |I_{y-1}(k)|$.  Hence,


\begin{align*}
&  \log ( \frac{1}{ | \bar{I}_y(k)|}  ) \geq  \log \frac{1}{ \frac{1}{2} e^{- \frac{1}{2}} (e-1) (1+ \frac{1}{2^b+1}  ) }  + \log ( e^ {\min \{ | k- \varepsilon y |,   | k- \varepsilon y + \varepsilon | \} } ) \\
& ~~~~~~~~~~~~~~~~\geq  \min \{ | k- \varepsilon y |,   | k- \varepsilon y + \varepsilon | \}  \geq  | k- \varepsilon y | -1. \\
\end{align*}




Similarly, $ \log ( \frac{1}{ | \bar{I}_{y-1}(k)|}  ) \geq  | k- \varepsilon y | -1.$ Therefore,

\ignore{
\begin{align*}
&  \sum \limits_{k = - \infty}^{\infty} \Pr \limits_{ \textbf{r} \leftarrow \textsf{BCL} (\delta, b, n) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D, f; \textbf{r}) = \frac{k}{ \varepsilon}] \cdot | \frac{k}{ \varepsilon} - y | \\
& \leq \sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} 2 \cdot ( \frac{1+ \delta}{2})^{ k -\varepsilon y -1} \cdot (\frac{k}{ \varepsilon} - y)  \\
& = 2 \cdot  \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{-\varepsilon y -1} \cdot \sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot (k- \varepsilon y) \\
&= 2 \cdot  \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{-\varepsilon y -1} \cdot  [\sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot k -  \sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot  \varepsilon y ]\\
& =  2 \cdot \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{-\varepsilon y -1} \cdot  [ \frac{ ( \frac{1+ \delta}{2})^{  \lceil \varepsilon y \rceil -1  } \cdot   \lceil \varepsilon y \rceil  + ( \frac{1+ \delta}{2})^{  \lceil \varepsilon y \rceil  }  \cdot \frac{1}{1- \frac{1+\delta}{2}}}{( \frac{1+ \delta}{2})^{-1}-1   } - \varepsilon y \cdot  ( \frac{1+ \delta}{2})^{ \lceil \varepsilon y \rceil } \cdot \frac{1}{1- \frac{1+\delta}{2}}]\\
& = 2 \cdot \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{  \lceil \varepsilon y \rceil  -\varepsilon y -1} \cdot [ \frac{2 \cdot \lceil \varepsilon y \rceil }{1- \delta} - \frac{2 \varepsilon y}{ 1- \delta} + \frac{2(1+ \delta)}{(1- \delta)^2}]\\
& = O(\frac{1}{\varepsilon} \cdot   \frac{1}{(1- \delta)^2}) \\
\end{align*}


\begin{align*}
&  \sum \limits_{k = - \infty}^{\infty} \Pr \limits_{ \textbf{r} \leftarrow \textsf{BCL} (\delta, b, n) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D, f; \textbf{r}) = \frac{k}{ \varepsilon}] \cdot | \frac{k}{ \varepsilon} - y | \\
& \leq  \sum \limits_{k = -\infty}^{\lfloor \varepsilon y - \varepsilon \rfloor} 2 \cdot (\frac{1+ \delta}{2}  )^{  \varepsilon y  -k -\varepsilon}  \cdot (y- \frac{k}{\varepsilon}) +  \sum \limits_{k = \lceil \varepsilon y \rceil    }^{ \infty}  2 \cdot (\frac{1+ \delta}{2}  )^{k- \varepsilon y} \cdot (  \frac{k}{ \varepsilon} - y  ) + 2 \cdot (\frac{1+ \delta}{2}  )^{ \varepsilon y - \lceil \frac{2 \varepsilon y - \varepsilon }{2}\rceil  } \\
& ~~~\cdot (y - \lceil \frac{2 \varepsilon y - \varepsilon }{2}\rceil \cdot \frac{1}{\varepsilon}) +  2 \cdot (\frac{1+ \delta}{2}  )^{- \varepsilon y + \lfloor \frac{2 \varepsilon y - \varepsilon }{2} \rfloor + \varepsilon } \cdot (y - \lfloor  \frac{2 \varepsilon y - \varepsilon }{2} \rfloor \cdot \frac{1}{\varepsilon})\\
& = 2 \cdot  \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{-\varepsilon y -1} \cdot \sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot (k- \varepsilon y) \\
&= 2 \cdot  \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{-\varepsilon y -1} \cdot  [\sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot k -  \sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot  \varepsilon y ]\\
& =  2 \cdot \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{-\varepsilon y -1} \cdot  [ \frac{ ( \frac{1+ \delta}{2})^{  \lceil \varepsilon y \rceil -1  } \cdot   \lceil \varepsilon y \rceil  + ( \frac{1+ \delta}{2})^{  \lceil \varepsilon y \rceil  }  \cdot \frac{1}{1- \frac{1+\delta}{2}}}{( \frac{1+ \delta}{2})^{-1}-1   } - \varepsilon y \cdot  ( \frac{1+ \delta}{2})^{ \lceil \varepsilon y \rceil } \cdot \frac{1}{1- \frac{1+\delta}{2}}]\\
& = 2 \cdot \frac{1}{\varepsilon} \cdot  ( \frac{1+ \delta}{2})^{  \lceil \varepsilon y \rceil  -\varepsilon y -1} \cdot [ \frac{2 \cdot \lceil \varepsilon y \rceil }{1- \delta} - \frac{2 \varepsilon y}{ 1- \delta} + \frac{2(1+ \delta)}{(1- \delta)^2}]\\
& = O(\frac{1}{\varepsilon} \cdot   \frac{1}{(1- \delta)^2}) \\
\end{align*}

The evaluation of the series $ \sum \limits_{k =\lceil \varepsilon y \rceil}^{\infty} ( \frac{1+ \delta}{2})^k \cdot k$ in the derivation above follows from the following claim.

\begin{claim}

Assume that $ 0 < a < 1$ and $l_0 \in \mathbb{Z}$. Then $ \sum \limits_{l=l_0}^\infty l \cdot a^l =  \frac{l_0 \cdot a^{l_0}}{1-a} + \frac{a^{l_0 +1}}{(1-a)^2}$ and, by the substitution $l \mapsto -l$,
$ \sum \limits_{l= - \infty}^{l_0} l \cdot a^{-l} =  \frac{l_0 \cdot a^{-l_0}}{1-a} - \frac{a^{-l_0 +1}}{(1-a)^2}$.
\end{claim}
}
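As a quick numerical sanity check of the claim (a sketch for illustration, not part of the proof), the closed form can be compared against a truncated partial sum:

```python
# Numerical sanity check of the identity
#   sum_{l = l0}^{infty} l * a^l = l0 * a^{l0} / (1 - a) + a^{l0 + 1} / (1 - a)^2
# for 0 < a < 1 and integer l0 (illustrative values only).

def closed_form(a, l0):
    return l0 * a**l0 / (1 - a) + a**(l0 + 1) / (1 - a) ** 2

def truncated_sum(a, l0, terms=10_000):
    # For 0 < a < 1 the tail decays geometrically, so 10_000 terms suffice here.
    return sum(l * a**l for l in range(l0, l0 + terms))

for a in (0.3, 0.5, 0.9):
    for l0 in (-2, 0, 3):
        assert abs(closed_form(a, l0) - truncated_sum(a, l0)) < 1e-9
print("identity verified")
```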

\begin{align*}
&  \sum \limits_{k = - \infty}^{\infty} \Pr \limits_{ \textbf{r} \leftarrow \textsf{BCL} (\delta, b) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D_1, f; \textbf{r}) = \frac{k}{ \varepsilon}] \cdot | \frac{k}{ \varepsilon} - y | \\
& \leq  \sum \limits_{k = -\infty}^{0}  2 \cdot (\frac{1+ \delta}{2}  )^{ |  \varepsilon y  -k | -1}  \cdot |y- \frac{k}{\varepsilon}| +  \sum \limits_{k = 1}^{ \infty}  2 \cdot (\frac{1+ \delta}{2}  )^{|k- \varepsilon y| -1} \cdot | \frac{k}{ \varepsilon} - y |   \\
& \leq  \frac{2}{ \varepsilon} \cdot (\frac{1+ \delta}{2}  )^{-1} \cdot [ \sum \limits_{k = 1}^{ \infty}  (\frac{1+ \delta}{2}  )^{k-1} \cdot k +  \sum \limits_{k = -\infty}^{0}  (\frac{1+ \delta}{2}  )^{-k } \cdot (-k+1)    ]\\
& =   (\frac{1+ \delta}{2}  )^{-1} \cdot \frac{4}{\varepsilon} \cdot \frac{1}{1- ( \frac{1+ \delta}{2}  )^2} = O(\frac{1}{\varepsilon} \cdot \frac{1}{1- \delta}).
\end{align*}

Similarly, we get that

$$ \sum \limits_{k = - \infty}^{\infty} \Pr \limits_{ \textbf{r} \leftarrow \textsf{BCL} (\delta, b) } [  \overline{M} ^{    \textsf{CBCLCS} }_{\varepsilon }(D_2, f; \textbf{r}) = \frac{k}{ \varepsilon}] \cdot | \frac{k}{ \varepsilon} - (y-1) |  \leq   O(\frac{1}{\varepsilon} \cdot \frac{1}{1- \delta}).   $$

When $\delta =0$ and $b=0$, the BCL source degenerates into the uniform source.

Therefore, the mechanism $\overline{M} ^{  \textsf{CBCLCS} }_{\varepsilon }$ has
$( \mathcal{BCL} ( \delta, b),   O(\frac{1}{\varepsilon} \cdot   \frac{1}{1- \delta}))$-utility and $(\mathcal{U}, O(\frac{1}{\varepsilon}))$-utility.

\end{proof}



\section{Revisions of Lemma A.1  in \cite{DLMV12}} \label{modification}





Recall Lemma A.1 of \cite{DLMV12} and the relevant part of its proof, which read as follows.

\begin{lemma}
{\slshape For all $y, k \in \mathbb{Z}$,   $ \frac{|I_y'(k)|}{|I_{y-1}(k)|}  \leq 6 \varepsilon$.

}

\end{lemma}


\begin{proof}

$  \ldots  $


$\textsf{Case 3}$:  If $ s_y(k-1) <   s_{y-1}(k-1) <   \frac{1}{2} \leq  s_{y-1}(k-1)$, then
$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} \leq \frac{1- e^{-\varepsilon}}{ 2 (e-1)}$.


$  \ldots  $

\end{proof}


Several points are worth noting.

1.  Compared with the lemma above, ours gives a much better bound; in fact, our upper bound is tight.

2.  Obviously, `` $ s_{y-1}(k-1) <   \frac{1}{2} \leq  s_{y-1}(k-1)$'' can never hold; it must be a typo.

3.    The inequality ``$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} \leq \frac{1- e^{-\varepsilon}}{ 2 (e-1)}$'' is incorrect.
Since $ -1 - \frac{1}{ \varepsilon} \leq  v < -1$, we may take $  \frac{1}{  \varepsilon} $ to be an even integer and set $ v =  -1 - \frac{1}{ 2 \varepsilon}$.
Then  $$ \frac{|I'_y(k)|}{|I_{y-1}(k)|}  = \frac{e^\varepsilon -1}{ 2 \cdot e^{- \varepsilon v} - e^{-2 \varepsilon v - \varepsilon -1} -e^\varepsilon  } =  \frac{ 1- e^{-\varepsilon}}{ 2(e^{\frac{1}{2}} -1) } >  \frac{1- e^{-\varepsilon}}{ 2 (e-1)},$$  which contradicts the claimed inequality
$ \frac{|I'_y(k)|}{|I_{y-1}(k)|} \leq \frac{1- e^{-\varepsilon}}{ 2 (e-1)}$.
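The computation behind point 3 can also be verified numerically (a small sketch; the values of $\varepsilon$ below are chosen so that $\frac{1}{\varepsilon}$ is an even integer):

```python
import math

# Counterexample check for Lemma A.1 of DLMV12: with v = -1 - 1/(2*eps),
# the ratio (e^eps - 1) / (2*e^{-eps*v} - e^{-2*eps*v - eps - 1} - e^eps)
# simplifies to (1 - e^{-eps}) / (2*(e^{1/2} - 1)), which strictly exceeds
# the claimed bound (1 - e^{-eps}) / (2*(e - 1)).
for eps in (0.5, 0.1, 0.01):  # 1/eps = 2, 10, 100 are even integers
    v = -1 - 1 / (2 * eps)
    ratio = (math.exp(eps) - 1) / (
        2 * math.exp(-eps * v)
        - math.exp(-2 * eps * v - eps - 1)
        - math.exp(eps)
    )
    simplified = (1 - math.exp(-eps)) / (2 * (math.sqrt(math.e) - 1))
    claimed_bound = (1 - math.exp(-eps)) / (2 * (math.e - 1))
    assert abs(ratio - simplified) < 1e-12   # the middle equality
    assert ratio > claimed_bound             # contradicts the lemma's bound
print("counterexample verified")
```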



\section{The Concept of SV Sources }  \label{SVdefinition}

\begin{definition}

 (\cite{SV86}) {\slshape
 Let   $x_1, x_2, \ldots$  be a sequence of Boolean random
variables and $ 0 \leq \delta < 1$.   A probability distribution $ X  = x_1 x_2   \ldots $  over $  \{0, 1\}^* $
is a  $ \delta$-Santha-Vazirani (SV) distribution, denoted by $SV(\delta)$,  if for all $i \in \mathbb{Z}^+$ and for every string $s$ of length $ i-1$,
we have $ \frac{1-\delta}{2} \leq \Pr[x_i=1 \mid x_1 x_2  \ldots x_{i-1} = s] \leq \frac{1+ \delta}{2}.$

We define the $\delta$-Santha-Vazirani source $\mathcal{SV}(\delta)$ to be the set of all $ \delta$-SV distributions.
For  $SV(\delta) \in \mathcal{SV}(\delta)$,  we define $SV(\delta, n)$  as  $SV(\delta)$
restricted to the first $n$ coins $ x_1 x_2  \ldots x_n$. We let $\mathcal{SV}(\delta, n)$ be the set of all distributions $SV(\delta, n)$.
}

\end{definition}
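The definition can be illustrated by a small sampler (a sketch only; the prefix-dependent function \textsf{bias} is a hypothetical stand-in for an arbitrary adversarial strategy, clamped so that the SV condition always holds):

```python
import random

def sv_sample(delta, n, bias, rng=random.random):
    """Draw n bits from a delta-SV distribution.

    bias(prefix) may choose Pr[x_i = 1] adversarially as a function of the
    bits seen so far; it is clamped into [(1-delta)/2, (1+delta)/2] so that
    every conditional probability satisfies the SV condition.
    """
    lo, hi = (1 - delta) / 2, (1 + delta) / 2
    bits = []
    for _ in range(n):
        p = min(hi, max(lo, bias(tuple(bits))))
        bits.append(1 if rng() < p else 0)
    return bits

def adversary(prefix):
    # Hypothetical adversary: push toward repeating the previous bit.
    return 0.9 if (prefix and prefix[-1] == 1) else 0.1

sample = sv_sample(delta=0.2, n=20, bias=adversary)
assert all(b in (0, 1) for b in sample)
# With delta = 0.2, each conditional probability is clamped into [0.4, 0.6].
```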















\end{document}
