% !TEX root = main.tex
\section{Design of NDNS}\label{sec:design}
This section presents the design of NDNS. We first give an overview of NDNS,
and then explain the NDNS naming convention, query, update, and trust model in turn.

\subsection{NDNS Overview} \label{sec:overview}
There are three kinds of servers in the system: name servers, caching resolvers, and stub resolvers.
%Basically, those concepts are inherited from traditional DNS \cite{mockapetris1987domain}.
The servers are connected via the NDN network, as shown in Figure~\ref{fig:ndnsoverview}.

NDNS messages come in two pairs: Query and Response, and Update and Result.
A Query poses a question to a zone, and a Response answers that question.
An Update requests a change to the dataset of a zone, and a Result reports how the name server handled the Update.
Query and Update are carried by NDN Interests, while Response and Result are carried by NDN Data.

There are two kinds of Response: NDNS-RESP and NDNS-NACK.
An NDNS-RESP Response is a positive answer to a Query, encapsulating the requested content; an NDNS-NACK is negative, indicating that the requested data does not exist.
An RR is encapsulated in a Response,
and a specific RR may be split into multiple Responses if it exceeds the size limit of a single NDN Data packet.
NDNS places no constraint on the format or content of RRs, except for two pre-defined standard RRs: NS and ID-CERT.

The NS RR is defined as a referral to the name servers of a child zone.
A zone should store an NS RR for each of its child zones.
For example, the root zone should contain an NDNS-RESP NS RR pointing to zone /ndn if that zone exists.
For Queries about a non-existing child zone, the name server answers with an NDNS-NACK.
However, if zone /ndn does not exist but /ndn/ndnsim does, the root zone should store an NDNS-RESP NS RR for /ndn/ndnsim;
in this case, the root zone should also store a special NDNS-NACK stating that zone /ndn itself does not exist,
but that a zone under the /ndn namespace does.
This kind of NDNS-NACK is called NDNS-AUTH.
%Once receiving NDNS-AUTH, end consumers should not ask the ultimate question,
%but keep asking the known zone for zone boundary with longer label, e.g., /NDNS/ndn/ndnsim/NS.
%For current implementation, NS RRs contain nothing in its content field, but it is able to contain the routable identifier of the name servers if necessary.
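The referral logic above can be sketched as follows. This is an illustrative model, not the implementation: \texttt{zone\_children} stands in for the zone's stored NS RRs, names are plain strings rather than wire-format NDN names, and the returned tags are labels, not packet encodings.

```python
# Hypothetical sketch of a name server answering an NS Query.
# "zone_children" stands in for the set of child zones the server
# holds NS RRs for; all names and tags are illustrative.

def answer_ns_query(zone_children, queried_child):
    """Decide the Response type for an NS Query about `queried_child`."""
    if queried_child in zone_children:
        # Delegation exists: answer with the stored NS RR.
        return ("NDNS-RESP", queried_child)
    # No direct delegation, but a deeper one may exist (e.g. /ndn/ndnsim
    # exists while /ndn does not): answer NDNS-AUTH instead of NDNS-NACK.
    if any(child.startswith(queried_child + "/") for child in zone_children):
        return ("NDNS-AUTH", None)
    return ("NDNS-NACK", None)
```

For instance, a root zone holding only a delegation for /ndn/ndnsim would answer an NS Query for /ndn with NDNS-AUTH, telling the querier to keep probing with a longer label.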

\begin{figure}[h]
  \begin{center}
    \includegraphics[width=6.5in]{figures/ndns-overview2.pdf}
    \caption{NDNS Overview. NDNS maintains a hierarchical namespace; name servers, caching resolvers, and stub resolvers are connected via NDN; end consumers pre-install the trust anchor. Only an authorized identity can send an Update; zones answer Queries with Responses, and Updates with Results.}
    \label{fig:ndnsoverview}
  \end{center}
\end{figure}
The ID-CERT RR stores certificates, whose format is defined in the NDN security library \cite{fmt:seclib}.
The keys used to sign Data stored in NDNS must be certified,
and the corresponding certificates must be stored in NDNS.

Any end consumer can fetch RRs from NDNS. With the certificates stored in NDNS and a pre-installed trust anchor, an authentication chain can be constructed from the trust anchor to any fetched RR.
An authorized identity sends an Update, signed with its private key, to the destination zone.
The destination zone should verify the message before updating its dataset.

%Every zone runs at least two name servers to store RRs and answer queries.
%NDNS Queries arrives at name server through NDN network.
%Stub resolver is the end consumer who really consume RRs,
%and caching resolver aggregate queries from stub resolver, fetches requested RRs with iterative process, and caches RRs.

\subsection{Naming Convention} \label{sec:name}
Names in NDNS are ordinary NDN Names as described in \cite{ndnfmt:name};
however, the interpretation of the individual components of an NDNS name is an application-level design.

A Query/Response name consists of a zone name, an application tag, a label, a type, and two optional components, a version number and a segment number, as defined in Table \ref{tab:nameconv};
an example is shown in Figure \ref{fig:nameexample}.

\begin{table}[htbp]
  \begin{center}
    \caption{NDNS Naming Convention}\label{tab:nameconv}
    \begin{tabular}{lllll}
      \toprule[1.5pt]
      & Name Component & Data Type \\
      \midrule[1pt]
      NDNS Name ::= & (1) Zone Name & NDN Name\cite{ndnfmt:name} \\
      & (2) Application Tag & NameComponent\cite{ndnfmt:name} \\
      & (3) Label & NDN Name\\
      & (4) Type & NameComponent \\
      & (5) Version Number ? & NameComponent \\
      & (6) Segment Number ? & NameComponent \\
      \bottomrule[1.5pt]
    \end{tabular}
  \end{center}
\end{table}

\begin{figure}[htb]
  \begin{center}
    \includegraphics[width=5.5in]{figures/ndns-naming-extensions.pdf}
    \caption{NDNS Query/Response Naming Example}
    \label{fig:nameexample}
  \end{center}
\end{figure}

The zone name indicates the zone that stores the data.
The application tag, ``NDNS'', is used by the application to de-multiplex packets.
There are two types of queries in NDNS, iterative and recursive,
distinguished by different application tags: ``NDNS'' for iterative and ``NDNS-R'' for recursive.
Name servers announce a name prefix made up of the zone name and the application tag.
The label is the remaining name to be resolved, which follows the zone name in the target domain name.
In our example, the label ``/www'' follows ``/net/ndnsim'' in the target domain name /net/ndnsim/www.
The type is the resource type of the target RR.
The version number and segment number identify the data version and segment, respectively.
A Query whose name omits the version number and/or segment number implicitly requests the latest version and/or the first segment; the same convention applies to a Response without a segment number.

The Update/Result name is a variant of the above naming scheme:
the Label field is replaced by the wire format of the new Response,
and the Type field by another application tag, ``UPDATE''.
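As an illustration, the convention in Table \ref{tab:nameconv} can be sketched in Python. This is a simplification under stated assumptions: components are plain strings rather than TLV-encoded NDN name components, and the \texttt{v}/\texttt{s} prefixes for version and segment are placeholders, not the actual marker encoding.

```python
# Illustrative helper assembling an NDNS Query name following the
# naming convention: zone name, application tag, label, type, and
# optional version/segment components.

def make_query_name(zone, label, rr_type, recursive=False,
                    version=None, segment=None):
    tag = "NDNS-R" if recursive else "NDNS"       # tag selects query mode
    components = zone.strip("/").split("/") + [tag]
    components += label.strip("/").split("/") + [rr_type]
    if version is not None:
        components.append(f"v{version}")          # placeholder encoding
        if segment is not None:
            components.append(f"s{segment}")
    return "/" + "/".join(components)
```

For the example in Figure \ref{fig:nameexample}, \texttt{make\_query\_name("/net/ndnsim", "/www", "TXT")} yields \texttt{/net/ndnsim/NDNS/www/TXT}.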

\subsection{NDNS Query}
Iterative and recursive queries are designed for different goals.
An iterative query refers the client to another server and lets the client pursue the query itself,
while a recursive query asks the queried server to pursue the answer on the client's behalf.

\begin{figure}[H]
  \begin{center}
    \includegraphics[width=5.5in]{figures/ndns-query-overview.pdf}
    \caption{NDNS Query Example. The stub resolver sends a recursive query to the caching resolver, which then performs a series of iterative queries following the name hierarchy of the target domain name. The queries can be split into two stages: 1) query NS RRs to detect the destination zone, 2) then ask the final question}
    \label{fig:ndnsquery}
  \end{center}
\end{figure}

Figure \ref{fig:ndnsquery} shows an example of NDNS resolving the TXT RR associated with the domain name /net/ndnsim/www.
Recursive queries are used by stub resolvers (most commonly implemented within the operating system, providing a system-call API).
A stub resolver sends a recursive query to a caching resolver,
which accepts the recursive query and performs a series of iterative queries to obtain the final answer.
A name server answers an iterative query according to the RRs in its database.
Once the caching resolver gets the final iterative query response,
it constructs the recursive query response, embedding the final iterative response inside.

It takes only minimal prior knowledge, i.e., the routable identifier of at least one root server
\footnote{Here we take the LINK and Map-and-Encap solution \cite{maen2015} into consideration; name servers announce their zone names, but a LINK object may be needed for the name servers to remain reachable in NDN},
for a server to discover an RR by performing a series of iterative queries.
The series of iterative queries in NDNS can be split into two stages:
the first stage finds the destination zone in which the requested data may reside, by requesting a series of NS RRs following the hierarchy of the domain name;
the second stage asks the destination zone the ultimate question.

In the example shown in Figure~\ref{fig:ndnsquery}, the iterative name resolution begins at the root zone.
The first Query, /NDNS/net/NS, is sent to the root zone,
asking whether zone /net exists.
If an NDNS-RESP Response is returned, zone /net exists.
%The intermediate results are used to determine the next level of the query.
A LINK object \cite{maen2015} is encapsulated in the NS RR if it is necessary for reaching the name servers of zone /net.
Then the next Query, /net/NDNS/ndnsim/NS, is sent to zone /net.
This iterative process continues until /net/ndnsim/NDNS/www/NS gets an NDNS-NACK Response (not NDNS-AUTH), indicating that there is no further namespace delegation.
At this point the destination zone has been identified and the first stage is finished. Then the ultimate question, /net/ndnsim/NDNS/www/TXT, is sent.

Note that the first stage does not stop when an NDNS-AUTH is returned, since NDNS-AUTH indicates that there is further namespace delegation.
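The two-stage process described above, including the NDNS-AUTH case, can be sketched as follows. \texttt{query} is a hypothetical network call, injected as a parameter so that only the control flow is shown; names are plain strings rather than NDN wire-format names.

```python
# Sketch of two-stage iterative resolution. `query(zone, label, rr_type)`
# is assumed to return a pair (response_tag, data), where response_tag is
# "NDNS-RESP", "NDNS-NACK", or "NDNS-AUTH".

def iterative_resolve(query, domain, rr_type):
    labels = domain.strip("/").split("/")
    zone = "/"   # start at the root zone
    start = 0    # index of the first label not covered by `zone`
    end = 1      # labels[start:end] is the label currently being probed
    # Stage 1: follow NS referrals until no deeper delegation exists.
    while end <= len(labels):
        label = "/" + "/".join(labels[start:end])
        resp, _ = query(zone, label, "NS")
        if resp == "NDNS-RESP":
            # Delegation exists: descend into the child zone.
            zone = ("" if zone == "/" else zone) + label
            start, end = end, end + 1
        elif resp == "NDNS-AUTH":
            # Zone boundary lies deeper: extend the label, same zone.
            end += 1
        else:  # NDNS-NACK: no deeper delegation, destination zone found.
            break
    # Stage 2: ask the destination zone the ultimate question.
    remainder = "/" + "/".join(labels[start:])
    return query(zone, remainder, rr_type)
```

With zones /net and /net/ndnsim present, resolving /net/ndnsim/www for TXT probes /net, then /ndnsim, gets NDNS-NACK for /www, and finally asks zone /net/ndnsim for /www/TXT, matching the walkthrough above.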

%The work-flow is a stub resolver sends a recursive query to caching resolver,
%who will start a sequence of iteratively query to and return the final result to the stub resolver.
%The recursive strategy offloads the processing of DNS query to recursive resolver.


\subsection{NDNS Update}
%When RR content changes, the corresponding entry in the NDNS must be updated.
An update is initiated by an authorized identity, which creates the new Response, signs it, embeds it in an Update message, and sends the Update to the destination zone,
as shown in Figure \ref{fig:clientupdate}.

\begin{figure*}[t]
\centering
\subfloat[Authorized Identity Sends Update] {\label{fig:clientupdate}
\includegraphics[width=0.48\textwidth]{figures/dynamic-update-client.pdf}}
\subfloat[Name Server Handles Update] {\label{fig:serverupdate}
\includegraphics[width=0.52\textwidth]{figures/dynamic-update-generation.pdf}}
\myvskip
\end{figure*}

On the server side, once a name server receives an Update, it extracts the Response message from the Update message,
verifies it,
and adds it to the local database,
or replaces the existing entry in the dataset, as shown in Figure \ref{fig:serverupdate}.


Although the update procedures on both sides are straightforward,
security must be handled very carefully.
A name server should verify an Update according to the following rules:
1) the Update is generated by an authorized identity;
2) the Update is not a duplicated copy or a replayed message;
3) the embedded Response is generated by the authorized producer;
4) the embedded Response does not violate the immutable data model, i.e., an existing RR cannot be replaced by another RR with the same version number but different content.

To enforce the above rules,
the authorized identity must have a valid certificate
and must sign both the embedded Response and the Update message.
Name servers accept only verified Update messages and discard any Update that fails verification. The name server returns a Result after handling the Update.
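The four rules can be sketched as a single verification routine. This is an illustrative model: signature checks are abstracted into boolean fields of a hypothetical \texttt{update} dictionary, whereas the real checks verify NDN signatures against certificates stored in NDNS.

```python
# Sketch of the four Update verification rules. `dataset` maps
# (name, rr_version) -> content; `seen_versions` maps name -> latest
# accepted update version number (used against replays).

def verify_update(update, dataset, seen_versions):
    """Return True iff `update` passes all four rules."""
    # 1) The Update is signed by an authorized identity.
    if not update["signer_authorized"]:
        return False
    # 2) Not a duplicate or replayed copy: the version must move forward.
    name = update["response"]["name"]
    if update["version"] <= seen_versions.get(name, -1):
        return False
    # 3) The embedded Response is signed by the authorized producer.
    if not update["response"]["producer_authorized"]:
        return False
    # 4) Immutable data model: the same RR version may not carry
    #    different content.
    existing = dataset.get((name, update["response"]["version"]))
    if existing is not None and existing != update["response"]["content"]:
        return False
    return True
```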

%When an Update request embeds a NDNS-NACK response, it intents to remove an existing record.
%But in order to prevent Replay attack, name servers will store the NDNS-NACK response until the its signing key or certificate expires.

%\subsubsection{Remove RR from Name Server}
As a special update scenario, to remove a specific RR from the name servers,
the authorized identity sends an Update message embedding an NDNS-NACK.
For simplicity, we deliberately do not design a dedicated remove command,
keeping the Update message format and processing procedure unified.
%Another Benefit is that the NDNS-NACK could reply future requests for the RR.

%\subsubsection{Prevent Replay Attack}
An attacker may replay Update messages to fool NDNS.
For example,
if an attacker caches two Update messages, one storing an RR in NDNS and the other removing it,
the attacker can try to manipulate the RR by replaying the corresponding Update messages.

One way to prevent this replay attack is to append an ``Update Version Number'' to every Update on the client side,
and to keep track of the latest update version number on the name server side,
even after the RR has been ``removed'' by the authorized identity.
Thus, instead of deleting the entire record, we immediately store the NDNS-NACK embedded in the remove message.
The entire record can be removed completely once the signature of the Update expires, or the certificate of the key that signed the Update expires.
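The version tracking and tombstone behavior can be sketched together. The data layout (\texttt{dataset}, \texttt{seen\_versions}, the tag tuples) is illustrative, not the server's actual storage schema.

```python
# Sketch of tombstone-style removal: a verified Update embedding an
# NDNS-NACK does not delete the record but replaces it, so a replayed
# "add" Update with an older version number is rejected later.

def apply_update(dataset, seen_versions, name, update_version, response):
    """Apply a verified Update; return False if it is a replay."""
    if update_version <= seen_versions.get(name, -1):
        return False                  # duplicate or replayed: rejected
    seen_versions[name] = update_version
    dataset[name] = response          # an NDNS-NACK stays as a tombstone
    return True
```

In the replay scenario above, once the remove Update (version 2) has been applied, replaying the original add Update (version 1) is rejected because its version number does not move forward.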

Note that when an RR is segmented into multiple Responses, each Response is embedded in its own Update.

\subsection{Zone Synchronization}
\begin{comment}
When there is update, content inconsistent may occurs.
When the update happens, some existing cached copy in NDN network may leads to inconsistency.
Fundamentally, it is merely possible to eliminate content inconsistent in a distributed system.
However by setting proper freshness period of RR, the side-effects could be minimized.
Evaluation in traditional NDNS\cite{jung2002dns} shows that reducing the content freshness period of records to as low as a few hundred seconds has little adverse effect on caching hit rate.
What's more, NDN Interest supports ``selector'' \cite{ndnfmt:selector},
such as MustBeFresh and Exclude fields, which could be used to bypass the outdated copies.
\end{comment}

The zone associated with a specific domain name may be served by multiple name servers,
and the zone instances at different servers should remain identical.
However, the instances diverge when one name server receives an Update while the rest do not.
The same situation arises when an administrator manipulates RRs at one specific name server.

In this case, zone synchronization is required.
We propose a fully distributed zone synchronization approach using ChronoSync \cite{zhu2013let}.
ChronoSync builds a broadcast group
that includes all name servers of a specific zone as members.
Every member individually keeps the historical and latest states of the zone by hashing its local dataset of the zone.
When all members share the latest zone state, the zone instances are identical.
If any zone instance changes,
the corresponding name server sends an Interest containing its latest state to the broadcast group,
which signals all the other members to fetch the update
and brings all members back to an identical zone state.
This decentralized design removes both the single point of failure and the traffic concentration problems commonly associated with centralized implementations.
Even if the network is partitioned, the nodes in each partition are able to synchronize their state;
after the network recovers, all the nodes can synchronize with each other based on their latest states.
Since the zone synchronization mechanism, including its naming and interaction process, is designed to be isolated from NDNS query and update,
and in order to focus on the core design of NDNS, we do not document the details of zone synchronization in this report;
readers can find the detailed design in \cite{shock:ndns-zone-sync}.
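The state comparison underlying this mechanism can be illustrated with a toy model. This is a deliberate simplification: ChronoSync itself maintains digest trees and exchanges state via Interests, whereas here each server simply hashes its local dataset and equal digests imply identical zone instances.

```python
# Toy model of zone-state comparison: each server derives a digest of
# its local dataset; a differing digest is what signals members to
# fetch updates until all instances are identical again.
import hashlib

def zone_state_digest(dataset):
    """Order-independent digest of a zone's (name -> record) dataset."""
    items = sorted(dataset.items())
    return hashlib.sha256(repr(items).encode()).hexdigest()

def in_sync(servers):
    """True iff all zone instances share the same latest state."""
    digests = {zone_state_digest(dataset) for dataset in servers}
    return len(digests) == 1
```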

\subsubsection{Zone Synchronization Based Update}
Zone synchronization signals the members when the zone changes.
This mechanism can also be utilized by an authorized identity to signal name servers to fetch an update.
In this case, the authorized identity must join the broadcast group,
act like a normal member,
and send the broadcast Interest that triggers the ``real'' name servers to fetch the update.

Zone-synchronization-based update is a pull-style communication.
This approach makes full use of cached data,
especially when the state change message signals all name servers to fetch the same data simultaneously.

However, this approach has two pre-conditions:
1) the authorized identity is privileged to join the name servers' synchronization broadcast group;
2) the authorized identity is routable via the destination zone name, at least with the help of a LINK object.

\subsection{Trust Model}
\begin{comment}
In NDNS, the publisher is named with the content namespace that it is in charge of.
For example, the publisher who is the principal of all the data under namespace /com/google/www
is named to be /com/google/www.
This naming convention is the human nature way in reality,
with which, human could map the identity name to the real service provider,
e.g., human bind the name /com/google/www, but not /com/gogle/www, to the famous search engine provided by Google Inc. in their mind by learning it in daily lives.
\end{comment}
%NDNS inherits naming of keys and certificate from NDN security library\cite{fmt:seclib}.
Every zone in NDNS holds two layers of keys, a Key Signing Key (KSK) and a Data Signing Key (DSK), as defined in the NDN security library \cite{fmt:seclib}.
The KSK certifies the DSK inside a zone by issuing and storing a certificate.
Each KSK, except for the root zone's KSK, is in turn certified by the DSK of its parent zone.
%A zone publisher could store the certificate at its name server, while a non-zone publisher stores the certificate at its parent zone.
The issuer of a certificate can be inferred from the certificate's KeyLocator field,
the owner of the corresponding key can be inferred from the certificate's packet name,
and the key bits, together with other information, can be extracted from the certificate's Content field.

\begin{figure}[H]
  \begin{center}
    \includegraphics[width=6.5in]{figures/ndns-delegation-example.pdf}
    \caption{NDNS Delegation Example. The DSK is certified by the KSK inside a zone; each KSK, except for the root key (the trust anchor), is certified by the DSK of the parent zone. An authentication chain can be constructed from the trust anchor to any RR}
    \label{fig:delegation}
  \end{center}
\end{figure}

NDNS adopts a hierarchical trust model.
As shown in Figure \ref{fig:delegation},
the root zone certifies its DSK with the root key;
the DSK is then used to certify the KSK of zone /net;
zone /net in turn certifies its own DSK with its KSK.
Following the same logic, zone /net/ndnsim certifies its KSK and DSK.
Assume another identity, /net/ndnsim/www, which is not a zone
and is certified by zone /net/ndnsim:
it stores the certificates of its KSK and DSK,
together with a TXT RR signed by its DSK, at zone /net/ndnsim.
In this way, an authentication chain is constructed from the root key down to the TXT RR.

An end consumer can verify the TXT RR by fetching the certificates along the chain in reverse order.
Furthermore, all the RRs (TXT and ID-CERT) on the trust chain can be verified,
except for the root key,
which is the trust anchor of the whole NDNS system
and should be pre-installed on the consumer's side.
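The chain walk can be sketched as follows. This is an illustrative model under stated assumptions: \texttt{certs} maps a key name to its issuer and a validity flag, standing in for ID-CERT RRs fetched from NDNS and for KeyLocator links; real verification checks actual NDN signatures, and the key names are hypothetical.

```python
# Sketch of walking the authentication chain from a fetched RR's
# signing key back to the trust anchor. `certs` maps a key name to
# (issuer key name, certificate_valid), abstracting ID-CERT RRs.

def chain_to_anchor(certs, signing_key, trust_anchor):
    """Follow issuer links (KeyLocator) until the trust anchor is reached."""
    key = signing_key
    seen = set()
    while key != trust_anchor:
        if key in seen or key not in certs:
            return False          # loop, or certificate not found in NDNS
        seen.add(key)
        issuer, valid = certs[key]
        if not valid:
            return False          # broken link in the chain
        key = issuer              # step up toward the root key
    return True
```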

Beyond the TXT RR stored in NDNS, any data packet signed by a key whose certificate is stored in NDNS can be verified by constructing the hierarchical authentication chain from the trust anchor to the packet.

\cite{smetters2009securing} argues that there are three steps to securing network content:
1) verifying that a given name-content mapping was signed by a particular key;
2) determining something about who that key belongs to, i.e., which publisher, in user terms, published that content;
and 3) deciding whether or not that is an acceptable source for this particular content and the use to which it is to be put.

The Signature field in the Data packet serves the first step.
The certificates stored in NDNS bind a producer (the identity owning the certificates) to its public keys, which achieves the second step.
%,since the identity who owns the certificates could be implied by the name of certificate
As for the third step, it is the applications that make such decisions according to their own security considerations.
It is worth noting that NDN allows applications to specify their own verification policies.

%A usage of NDNS is to prove the namespace ownership and legally associate the legal keys


%Any Data packet signed by the key whose certificate is stored in NDNS,
%could also be verified to the second step described above.
%And again, the third step should be defined by the end applications.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: