\documentclass[11pt]{paper}
\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper}                   % ... or a4paper or a5paper or ... 
%\geometry{landscape}                % Activate for rotated page geometry
%\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}

\usepackage{booktabs}
\usepackage{url}
\usepackage{cite}\renewcommand\citeleft{(} \renewcommand\citeright{)}

% \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}}
\newcommand{\mytoprule}{\specialrule{0.1em}{0em}{0em}}
\newcommand{\mybottomrule}{\specialrule{0.1em}{0em}{0em}}
\newcommand{\mymidrule}{\specialrule{0.05em}{0em}{0em}}
\usepackage{textcomp}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{ae}
\usepackage{aecompl}
\RequirePackage[sort&compress]{natbib}
%\usepackage[a]{esvect}
\usepackage{color}
\usepackage{makecell}
\usepackage{mathptmx}
\usepackage{float}



\title{An information theoretic approach to the effective number of codons:
implications for genes and genomes}
%\author{The Author}
%\date{}                                           % Activate to display a given date or no date

\begin{document}
\maketitle

\section{Abstract}
%This paragraph is an attempt to follow the Nature template of how an abstract
%should look. Which is why I broke it up into individual sections for now.
The degeneracy of the genetic code is such that as many as 6 different
nucleotide triplets (codons) may code for the same amino acid. Synonymous
codons are not used at random, and codon usage biases are useful for predicting
a variety of valuable information, including gene expression levels and
microbial growth rates.

Here, we utilize the principles of information theory to develop an orthogonal
formulation of one of the most popular and highly cited tools for analyzing
codon usage bias, the effective number of codons ($\hat{N}_c$).

First, we illustrate several of the conceptual problems associated with the
original formulation of the $\hat{N}_c$ and show how previously proposed fixes fall
short. Second, we show how an information theoretic approach can be applied to
overcome all of these theoretical shortcomings. Finally, we validate the
practical utility of our methodology by showing that we can improve the statistical
significance of published findings for a variety of inter- and
intra-organismal data sets. 

We anticipate that the findings that we present here will spur
further research into the mechanistic origins of codon usage bias within and
between organisms. The improved sensitivity and accuracy of our
method will be increasingly useful as genome sequences and expression level data
rapidly accumulate.

\section{Introduction}
The degeneracy of the genetic code is such that as many as 6 different codons
can be used to specify particular amino acids in protein coding genes. These
codons are not used evenly, and numerous researchers have noted that codon
usage bias varies between organisms, between genes within the same organism,
and even within individual genes (CITE). As sequencing technologies continue to decrease in
cost, tools to extract meaningful information from sequencing data will become
increasingly important to help identify organisms in microbial communities
(CITE), horizontally transferred or nascent genes within an organism (CITE), and
the most highly expressed/functionally important genes within an organism
(CITE). A more thorough understanding of the latter also has significant
consequences for recombinant protein technologies, offering the potential to
increase yields and decrease costs by exploiting the optimal coding strategies
for a particular organism (CITE). 

There are a variety of tools that have been developed to assess codon usage
bias, and each one has certain advantages and disadvantages. One of the most
widely used and cited tools is known as the `effective number of codons'
($\hat{N}_c$) (CITE).  The primary advantage of $\hat{N}_c$ is that its implementation
requires no a priori knowledge. Unlike the Codon Adaptation Index (CAI) (CITE), another
widely used metric that requires a previously defined reference set of highly
expressed genes, or the tRNA Adaptation Index (tAI) (CITE), which requires knowledge of
the tRNA gene copies and modification patterns, $\hat{N}_c$ is a simple way to
assess deviation from random codon usage for a particular gene or genome. Its
simplicity makes it amenable to a variety of uses, which over just the last
several years have included: WRITE MORE HEREEEEEE (CITE) (CITE) (CITE). 

However, the $\hat{N}_c$ has several well-known technical shortcomings. Because
the tool is so widely used, any biases or information loss may ultimately limit
its practical utility by decreasing its sensitivity and thus its ability to
detect valuable, biologically relevant associations. Over the years, various
researchers have addressed many of these shortcomings, but here we show that
each of the proposed fixes addresses only a subset of the myriad theoretical
and practical problems associated with the $\hat{N}_c$. Further, most proposed
fixes address theoretical shortcomings and fail to rigorously validate the
degree to which these fixes increase practical utility. 

Here, we apply the concepts of information theory to the problem of calculating
$\hat{N}_c$ and show that an analytical tool built on the concepts of
information theory provides a principled, statistically rigorous approach for
calculating codon usage bias in a way that is directly comparable to
$\hat{N}_c$. Our proposed method has the advantage of resolving several
long-standing problems with the $\hat{N}_c$ calculation, and also has practical
utility, which we demonstrate by applying our predictions to 13 different
proteome level datasets as well as to predictions of microbial growth rates for
greater than 200 species. 

\section{Results} 
\subsection{Background and current state} 
We are interested in quantifying synonymous codon usage bias for a given gene
or set of genes. The methodology of Wright defines this calculation at the individual
amino acid level as:

\begin{equation}
\label{eq:original_eq}
\hat{N}_c = \frac{n-1}{ (n \sum_{i=1}^{k} p_i^2) -1 }
\end{equation}

\noindent
where $k$ is the number of distinct codons that code for a particular amino
acid, $n$ is the total number of occurrences of the amino acid of interest
within the dataset, and $p_i$ is the individual codon probability relative to
all synonymous variants for the particular amino acid. For small $n$ this
formulation can yield strange and undefined results as pointed out most recently
by Sun et al. Briefly, imagine a gene with 4 occurrences of the
lysine amino acid, 2 of which are AAA and 2 of which are AAG. By
Eq.~\ref{eq:original_eq}, $n = 4$, $p_{AAA} = 0.5$, and $p_{AAG} = 0.5$ resulting in
an $\hat{N}_c$ of $\frac{3}{1} = 3$.  Of course, we expect this answer to be two, and
also to have a maximum value of two as there are only two possible codons that
code for lysine. This is an admittedly small sample, but even if $n = 10$ the
result of even codon usage would still paradoxically equal $\frac{9}{4} = 2.25$.
We are far from the first researchers to notice this, indeed this fact was
noted in the original description of the $\hat{N}_c$. To deal with these cases,
however, practical implementations rely on several arbitrary and ill-defined
heuristics that include omitting amino acids entirely if they have five or fewer
occurrences and rescaling the $\hat{N}_c$ of any individual amino acid such that
$\hat{N}_c = k$ in the event that the calculations return $\hat{N}_c > k$, as happened in our
simple description.
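Wright's per-amino-acid formula is easy to check numerically; the short sketch below (the function name is ours) reproduces the paradoxical values from the examples above.

```python
def wright_nc(counts):
    """Wright's per-amino-acid effective number of codons: (n-1)/(n*sum(p^2)-1)."""
    n = sum(counts)
    p_sq = sum((c / n) ** 2 for c in counts)
    return (n - 1) / (n * p_sq - 1)

# Lysine observed 4 times, split evenly between its 2 codons:
print(wright_nc([2, 2]))   # 3.0, despite only 2 lysine codons existing
# Even usage with n = 10 still overshoots the maximum of 2:
print(wright_nc([5, 5]))   # 2.25
```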

Another source of problems with the $\hat{N}_c$ stems from the fact
that we are often interested in aggregating the information from all 20 amino
acids into a single number for a gene or genome. The original formulation by
Wright and most derivations rely on grouping amino acids together into $k$-fold
redundancy classes (i.e. all amino acids with two codons, etc.), averaging the
inverse of the $\hat{N}_c$ (denoted $\hat{F}$) within a redundancy class,
multiplying this value by the number of amino acids in that class (i.e. nine for
the number of amino acids coded for by two codons, one for the number of amino
acids coded for by three codons under the standard genetic code, etc.), and
adding the results from each class:

%We need the equation here
\begin{equation}
\hat{N}_c = 2 + \frac{9}{\bar{\hat{F}}_2} + \frac{1}{\bar{\hat{F}}_3} +
\frac{5}{\bar{\hat{F}}_4} + \frac{3}{\bar{\hat{F}}_6}
\end{equation}

\noindent
To illustrate why this is problematic, imagine that lysine is used 50 times within a
single gene in a highly biased manner (45 and 5 for AAG and AAA) and
phenylalanine is used only 6 times but in a uniform manner within that same gene
(3 and 3 for TTT and TTC). Our goal is to quantify the level of bias in this
gene, and it thus makes intuitive sense that far more priority be given to the
observations of lysine in the final averaging process resulting in a much lower
$\hat{N}_c$ value for this hypothetical sequence. However, $\hat{N}_c$ simply
averages the two amino acids, resulting in a moderate apparent amount of bias.
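To make the averaging problem concrete, the sketch below (our own illustration, using Wright's codon homozygosity $\hat{F} = (n \sum p_i^2 - 1)/(n-1)$) contrasts the unweighted class average with an occurrence-weighted alternative for the lysine/phenylalanine example above.

```python
def f_hat(counts):
    """Wright's codon homozygosity F-hat for one amino acid."""
    n = sum(counts)
    p_sq = sum((c / n) ** 2 for c in counts)
    return (n * p_sq - 1) / (n - 1)

lys = [45, 5]  # strongly biased, 50 observations
phe = [3, 3]   # uniform usage, only 6 observations

f_lys, f_phe = f_hat(lys), f_hat(phe)
unweighted = 1 / ((f_lys + f_phe) / 2)            # Wright-style class average
weighted = 1 / ((50 * f_lys + 6 * f_phe) / 56)    # occurrence-weighted average
print(round(unweighted, 2), round(weighted, 2))   # 1.64 1.3
```

The sparse but unbiased phenylalanine counts pull the unweighted average away from the strong lysine signal.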

Finally, there is another inherent and crucial disadvantage to the $\hat{N}_c$
calculations presented above. Namely, organismal GC content is highly
variable and for either high or low GC genes/genomes, the \emph{expected}
$\hat{N}_c$ is lower given the constraint of nucleotide usage but it is
debatable whether this reflects interesting codon usage biases. For instance,
returning to our previous example: suppose we have observed 10 instances of lysine
in a gene of which 3 are AAA and 7 are AAG. Clearly, there is a deviation from
uniform usage, but whether that deviation is exceptional might depend on whether
the background GC content of the gene/genome is 70\%. In such a case, the
proportions do not seem particularly biased from random expectation. On the
other hand, if the background GC content of the gene/genome is 40\%, the
observed proportions are very anomalous.  
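The role of background composition can be illustrated with a simple binomial sketch (our own toy calculation, not part of any of the published methods): the same 3/7 AAA/AAG split is unremarkable against a 70\% GC background but unusual against 40\%.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 7 of 10 lysines use AAG (third-position G):
print(round(binom_pmf(7, 10, 0.7), 3))  # 0.267 -- typical at 70% GC
print(round(binom_pmf(7, 10, 0.4), 3))  # 0.042 -- surprising at 40% GC
```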

\subsection{Modern version from Novembre} 
Several post-hoc rescaling calculations have tried to correct the GC content
problem, but it was addressed most formally and skillfully by Novembre (CITE)
who used the $\chi^{2}$ statistic to quantify codon deviation from expected
usage for each amino acid and unified this approach with the $\hat{N}_c$,
resulting in the metric $\hat{N}_c^{\prime}$. However, while correcting for GC content, the
approach of Novembre still unfortunately suffers from many of the same problems
as the original formulation such as handling sparse data with a variety of
ill-defined heuristics and treating all amino acids equally regardless of sample
size. A full summary of this methodology can be found in the Methods.

\subsection{Modern version from Sun et al.} 
The most recent update to the $\hat{N}_c$ comes from Sun et al. who start by
simplifying the equation for an individual amino acid such that: 

\begin{equation}
\label{eq:arit_mean}
\hat{N}_{c, Sun} = \frac{1}{ \sum_{i=1}^{k} p_i^2}
\end{equation}

This equation gives the expected result of $\hat{N}_c = k$ for unbiased cases
and has a more intuitive physical basis: it can be physically interpreted as the
inverse of the probability of randomly sampling the same codon twice given the
observed usage. To deal with the problem of small sample sizes the
authors proposed to include pseudo counts into the calculation for each amino
acid. Additionally, during the process of aggregating different amino acid level
metrics into a single number, Sun et al. take a weighted average of the
$\hat{F}$ values within each $k$-fold redundancy class according to the number
of occurrences of each amino acid in the data set. This approach should
alleviate the problem of differential amino acid signals, but it only
applies within each $k$-fold redundancy class. Lastly, as a relatively minor point
that we will not discuss further, the authors also broke down 6-fold redundant
amino acids into two separate groups, one 4-fold redundant group and one 2-fold
redundant group. 
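A minimal sketch of the inverse-Simpson form with additive pseudocounts follows (the pseudocount of one per codon is our assumption for illustration; the exact scheme of Sun et al. is summarized in the Methods).

```python
def nc_sun(counts, pseudo=1.0):
    """Inverse-Simpson effective codon number with additive pseudocounts."""
    adj = [c + pseudo for c in counts]
    n = sum(adj)
    return 1 / sum((c / n) ** 2 for c in adj)

print(nc_sun([2, 2]))   # 2.0 -- even usage gives exactly k
print(nc_sun([4, 0]))   # small samples no longer yield degenerate values
```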

While this approach alleviates many problems with the original formulation, it
does not take GC bias into account and the broader applicability remains largely
untested. A full summary of this methodology can be found in the Methods.


\subsection{An information theoretic approach}
As discussed above, there are clearly several crucial limitations to the $\hat{N}_c$,
and although individual researchers have corrected specific aspects of these
limitations, no approach to date has combined these principles into a unified
framework. More importantly, none of these methodologies has been evaluated
for its utility on a large scale. Here, we use information theory to provide
that framework and propose an alternative definition that is independent of the
above formulations and is physically based on the number of bits that are
required to code a random variable. The Shannon information $H$ is defined as:

\begin{equation}
\label{eq:shannon_entropy}
H = -\sum_{i=1}^{k} p_i \log_2 {p_i}
\end{equation}

\noindent
We can apply this simple formula to synonymous codon usage for an individual
amino acid resulting in the effective number of codons:

\begin{equation}
\label{eq:geom_mean}
2^H = \hat{N}_{c,info} = \frac{1}{\prod_{i=1}^{k} p_i^{p_i}}
\end{equation}
\noindent
where as before $k$ is the number of synonymous codons for the given amino acid
of interest and $p_i$ is the probability of codon $i$. 
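This quantity is straightforward to compute from codon counts; a minimal sketch (the function name is ours):

```python
from math import log2

def nc_info(counts):
    """Effective number of codons as 2**H, with H the Shannon entropy in bits."""
    n = sum(counts)
    h = -sum((c / n) * log2(c / n) for c in counts if c > 0)
    return 2 ** h

print(nc_info([5, 5]))             # 2.0 -- even usage gives k for any n
print(round(nc_info([45, 5]), 2))  # 1.38 -- strongly biased usage
```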

When compared with Eq.~\ref{eq:arit_mean}, the information theoretic approach
differs simply in using the geometric mean rather than the arithmetic mean.
However, Eq.~\ref{eq:shannon_entropy} is the only formulation that has the
additive property, a fact that has propelled its usage in a variety of
disciplines and whose implications will become evident later. This approach
also benefits from having a direct physical interpretation: $H$ is the
number of bits of information required to code the random variable and
$\hat{N}_{c,info}$ represents the effective number of states that can be encoded
by $H$ bits of information.
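The two means diverge once usage is skewed, and the geometric-mean (perplexity) value always upper-bounds the inverse-Simpson value; a quick numerical comparison for a hypothetical 4-fold redundant amino acid:

```python
from math import log2

p = [0.7, 0.1, 0.1, 0.1]  # hypothetical biased usage of 4 synonymous codons

simpson = 1 / sum(q ** 2 for q in p)               # arithmetic-mean form
perplexity = 2 ** -sum(q * log2(q) for q in p)     # information theoretic form
print(round(simpson, 2), round(perplexity, 2))     # 1.92 2.56
```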

As with Eq.~\ref{eq:arit_mean} (and unlike Eq.~\ref{eq:original_eq}),
Eq.~\ref{eq:geom_mean} will yield a maximum $\hat{N}_{c,info} = k$ in the unbiased case
for $k$-fold redundant amino acids regardless of $n$, as expected. We have, in
Eq.~\ref{eq:geom_mean} outlined a new interpretation of the effective number of
codons for an amino acid, but we must be able to combine contributions from
different amino acids to calculate the $\hat{N}_{c,info}$ for a gene or genome. To see
why this could be problematic, in Fig.~\ref{fig-genomeNc}A we show the $\hat{N}_{c, info}$ for each amino
acid using the entire \textit{E. coli} genome (a concatenated set of all protein
coding genes). These values are highly variable but difficult to compare
because the maximum differs between amino acids. To more easily
compare the values for amino acids of different $k$-fold redundancy classes, we
simply rescale the $\hat{N}_{c,info}$ values between 0 (complete bias) and 1 (no
bias) (see Methods), allowing for visual comparison of individual amino acids
for the entire genome (Fig.~\ref{fig-genomeNc}B). 

%%%%%%%%%%%%%%%FIGURE%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure} 
\begin{center} 
\includegraphics[width=3in]{Figs/encFig2.pdf}
\caption{blah blah look how pretty i am.} 
\label{fig-genomeNc} 
\end{center} 
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Further, in Fig.~\ref{fig-genomeNc} we illustrate $\hat{N}_{c,info}$ for every amino acid (columns) for genes in
\emph{E. coli} (rows, arranged top to bottom according to transcript abundance).
When presented this way, it becomes clear that bias in codon usage for some
amino acids correlates well with expression while for other amino acids this is
demonstrably not the case.

In order to aggregate the information for a given sequence into a single
number, we first take a weighted sum of the entropy values for each amino acid:

\begin{equation}
\label{eq:geneLevelEntropy}
H_{total}=\sum_{i=1}^{20} p(aa_i)  H_{C | aa_i}
\end{equation}

\noindent
where $p(aa_i)$ is the relative frequency of amino acid $i$ and $H_{C|aa_i}$
is the codon entropy conditioned on that amino acid. Thus, rather than
ignoring the differential abundances of different amino acids, as in
Eq.~\ref{eq:original_eq}, or weighting only
within a $k$-fold redundancy class as in Eq.~\ref{eq:arit_mean}, our methodology
allows us to weight all amino acids to arrive at a final entropy. The
exponential of this final entropy ($2^{H}$) would be the effective number of
codons per amino acid and could be simply multiplied by the 20 amino acids.
However, this number would not tell us anything about the bias with respect to
any null model. We address this by defining:

\begin{equation} 
\label{eq:gammaGeneral}
\gamma_{sequence} =  \frac{\hat{N}_{c,info}-1}{\hat{N}_{c,info,expected}-1}
\end{equation}

\noindent
where $\hat{N}_{c,info,expected}$ is the expected $\hat{N}_{c,info}$ from an
unbiased model. If we ignore GC content effects, this number is simply 61. Now,
$\gamma_{sequence}$ is a number between 0 and 1 representing the bias per amino
acid relative to a null model. It can be simply rescaled to between 20 and 61 to
make equivalent comparisons to the $\hat{N}_c$:

\begin{equation}
\label{eq:InfoBias}
\hat{N}_{c,info} =  20 + \gamma_{sequence}  \times (61-20)
\end{equation}
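A sketch of the full aggregation pipeline on an invented three-amino-acid composition (here we normalize against the per-amino-acid maximum-entropy expectation spelled out in the Methods, i.e.\ we ignore GC effects; all counts are illustrative):

```python
from math import log2

def entropy(counts):
    """Shannon entropy in bits of the empirical codon distribution."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

# Hypothetical gene: codon counts grouped by amino acid.
codon_counts = {"Lys": [45, 5], "Phe": [3, 3], "Ala": [10, 10, 10, 10]}

n_total = sum(sum(c) for c in codon_counts.values())
h_obs = sum(sum(c) / n_total * entropy(c) for c in codon_counts.values())
h_max = sum(sum(c) / n_total * log2(len(c)) for c in codon_counts.values())

gamma = (2 ** h_obs - 1) / (2 ** h_max - 1)   # bias relative to the null
nc_seq = 20 + gamma * (61 - 20)               # rescaled to the 20..61 range
print(round(gamma, 3), round(nc_seq, 1))      # 0.721 49.6
```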

These final normalizations allow us to compare the
$\hat{N}_{c,info}$ of a real gene/genome with the $\hat{N}_{c,info}$ of a null
model expectation for a given GC content. We can construct the maximum entropy
null model for a given GC content by considering all possible genes with the
same amino-acid sequence and exactly the same GC content, an extraordinarily
large number. In practice, the effective number of codons of this model can be
computed analytically using tools from statistical physics; we leave this
derivation to the Methods section for those who are interested. 

In sum, we have established a firm framework that uses information theory to
define the $\hat{N}_{c,info}$. Doing so alleviates the problems of strange
results and post-hoc rescaling associated with the approaches of Wright (CITE)
and Novembre (CITE), the problem of variable weighting associated with Wright
(CITE) and Novembre (CITE) (and only partially alleviated by Sun et al.), and
the confounding effects of GC content associated with Wright and Sun et al. 

\subsection{Predictions and correlations with existing metrics} 
We have thus far established a novel framework for computing codon usage bias
and have outlined a number of theoretical benefits that our methodology affords
over prior methods. However, there is still the problem of which method provides `correct'
results. We thus evaluate whether our approach provides practical utility by
quantifying the degree to which different methods are able to predict transcript and protein abundances.

As can be seen in Table~\ref{table-Ecoli}, we first used our $\hat{N}_{c,info}$
to test improvements in correlations for multiple datasets of transcript and
protein abundances in \emph{E. coli}. For comparison, we also show results for
the traditional $\hat{N}_c$, the methodology of Sun et al, and the methodology
of Novembre. These results show that........... 

%%%%%%%%%%%%%%%Table%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table*} 
\caption{Spearman's $\rho$ for different \emph{E. coli} datasets} 
\label{table-Ecoli} 
\centering 
\begin{tabular}{l c | c | c | c  } 
\toprule 
& Wright & Sun et al. & Novembre & This work  
\\ 
\midrule
Transcript\textsuperscript{1} & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$
\\
Transcript\textsuperscript{2} & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$
\\
Protein\textsuperscript{1}    & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$
\\
Protein\textsuperscript{2}    & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$ & $1.0 (.0)$
\\
\bottomrule 
\end{tabular} 
\vspace{-0.2cm} 
\end{table*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

However, although \emph{E. coli} is a well-studied and valuable microbe to
test, we wanted to make sure that our improvements in predictive power were
robust and that our methods were applicable to a variety of organisms. To do so
in an unbiased manner, we downloaded protein expression data for all prokaryotes
listed in the PaxDB Version 3.0. For each organism, we extracted protein
abundance data and correlated these values with the 4 methods for computing
codon usage bias that we have thus far discussed. First, it is clear that our ability to
predict protein abundances from sequence level data is highly variable and
organism dependent. Virtually all of the methods fail to produce meaningful
correlations with XXX while other organisms such as YYY and ZZZ have very large
negative correlations between protein abundances and various $\hat{N}_c$
indices. This variability may be due to a variety of factors including
differential selective pressures for translational efficiency and differential
quality of data.

However, in general our method outperforms all others. $\hat{N}_{c,info}$
performs significantly better than the methodologies of Wright and Sun et al.
(Wilcoxon signed-rank test, p=0.0147 and p=0.002 respectively). The difference
is not statistically significant compared to the methodology of Novembre (Wilcoxon
signed-rank test, p=0.192), but the median correlation of our method is
stronger and our method has the highest predictive power for the largest
number of organisms (xxx vs yyy for Novembre). 

%%%%%%%%%%%%%%%Table%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table*} 
\caption{Spearman's $\rho$ for protein abundance datasets from PaxDB}
\label{table-OtherOrganisms} 
\centering 
\begin{tabular}{l c | c | c | c | c | c  } 
\toprule 
& GC\% & $n$ & Wright & Novembre & Sun et al. & This work  
\\ 
\midrule
Synechocystis & XXXX & YYYY & $-0.09$ & $-0.15$ & $-0.12$ & $-0.38$
\\
Deinococcus   & XXXX & YYYY & $-0.30$ & $-0.37$ & $-0.21$ & $-0.32$
\\
Bacillus      & XXXX & YYYY & $-0.17$ & $-0.22$ & $-0.20$ & $-0.25$
\\
Thermococcus  & XXXX & YYYY & $-0.36$ & $-0.35$ & $-0.32$ & $-0.35$
\\
\bottomrule 
\end{tabular} 
\vspace{-0.2cm} 
\end{table*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Predicting expression at the individual gene level is a useful tool, but we
wanted to test whether our $\hat{N}_{c,info}$ is better at uncovering other
complex associations. Recently, Vieira-Silva et al. found that organisms with the
highest degree of codon usage bias tended to have shorter doubling times, in
effect drawing a link between selection for translational efficiency at the
genome level and reproductive capacity. The authors made this discovery by
assembling a dataset of 214 organisms and using the methodology of Novembre to
define an index:

\begin{equation}
\hat{N}_{c,diff}^{\prime}=\frac{\hat{N}_{c,ribo}^{\prime}-\hat{N}_{c,all}^{\prime}}{
\hat{N}_{c,all}^{\prime}}
\end{equation}

\noindent
where $\hat{N}_{c,ribo}^{\prime}$ is the effective number of codons calculated on
the concatenated set of ribosomal protein genes and $\hat{N}_{c,all}^{\prime}$ is
the effective number of codons calculated on the concatenated set of all genes
in the genome. 
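The index itself is a one-line computation (the values below are invented for illustration):

```python
def nc_diff(nc_ribo, nc_all):
    """Relative difference between ribosomal and genome-wide effective codon numbers."""
    return (nc_ribo - nc_all) / nc_all

# Hypothetical values: ribosomal genes more biased (lower Nc') than the genome:
print(round(nc_diff(32.0, 48.0), 3))  # -0.333
```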

We downloaded this dataset and calculated our own correlations using the four
different ways to calculate the $\hat{N}_{c}$ that we have discussed here. Figure~\ref{fig-growthRates}
shows the absolute value of Spearman's $\rho$ when correlating each of our four
methodologies using three slightly different metrics: $\hat{N}_{c,diff}$
(Fig.~\ref{fig-growthRates}A), $\hat{N}_{c,ribo}$ (Fig.~\ref{fig-growthRates}B), and $\hat{N}_{c,all}$
(Fig.~\ref{fig-growthRates}C). Even though GC usage alone does not correlate
well with minimum doubling times (STATISTIC HERE), it is clear that uncovering a
relationship between codon usage bias and organismal growth requires a method
that can control for GC content. Our $\hat{N}_{c,info}$ metric outperforms all
other methods, again illustrating its utility. 

%%%%%%%%%%%%%%%FIGURE%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure} 
\begin{center} 
\includegraphics[width=5in]{Figs/encFig3.pdf}
\caption{still need to make this figure} 
\label{fig-growthRates} 
\end{center} 
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\subsection{Extensions} 
\subsection{Differentiating between tRNA variation in codon usage versus within
tRNA variation}
The variation in codon usage bias for any organism can come from several different
sources, for which we will use alanine as an illustrative example. Alanine
has 4 synonymous codons, GCT, GCC, GCA, and GCG that are read by 2 different
tRNA species that have corresponding anticodons  GGC (decoding GCT and GCC) and
TGC (decoding GCA and GCG). These tRNAs may be present at variable 
intracellular concentrations, and codon usage is known to co-vary with tRNA
abundance. Thus, the variation in codon usage for alanine may be a consequence
of differential tRNA usage or the choice of codon given a tRNA. Suppose that the
4 alanine codons, GCT, GCC, GCG and GCA, have biased usage frequencies of 0.4, 0.4, 0.1, and
0.1, respectively, within a gene. We can group these codons according to the
tRNA species that decodes them, which we will refer to as Ala1 and Ala2, with
respective frequencies of 0.8 and 0.2. However, the frequency of GCT and GCC given Ala1 are each 0.5
and the frequencies of GCG and GCA given Ala2 are also 0.5. Thus, all of the
bias is encoded at the tRNA level and given a tRNA the choices of which codon
are essentially random in this toy example. More formally:

\begin{equation}
H(\textrm{tRNA|alanine}) = H(0.8, 0.2) = 0.72 \textrm{ bits}
\end{equation}

and 

\begin{equation}
H(C|\textrm{Ala1}) = H(0.5,0.5) = 1 \textrm{ bit}
\end{equation}

\begin{equation}
H(C|\textrm{Ala2}) = H(0.5,0.5) = 1 \textrm{ bit}
\end{equation}

The hierarchical property that is unique to Equation~\ref{eq:shannon_entropy}
means that the entropy of the individual codons is equal to the entropy of the
codons given the tRNA plus the entropy of the tRNAs. This additive property does
not apply to any other formulations such as those by Wright, Sun et al. or
Novembre. Thus:

\begin{equation}
H(C|\textrm{alanine}) = 0.72 + 0.8 \times 1 + 0.2 \times 1 = 1.72 \textrm{ bits}
\end{equation}
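The additivity claimed above is easy to verify numerically for the alanine example (a sketch; \texttt{h} computes the Shannon entropy in bits):

```python
from math import log2

def h(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

codons = [0.4, 0.4, 0.1, 0.1]           # GCT, GCC, GCG, GCA
h_trna = h([0.8, 0.2])                  # Ala1 vs Ala2 tRNA usage
h_codon_given_trna = 0.8 * h([0.5, 0.5]) + 0.2 * h([0.5, 0.5])
# Codon entropy decomposes into tRNA entropy plus codon-given-tRNA entropy:
assert abs(h(codons) - (h_trna + h_codon_given_trna)) < 1e-12
print(round(h(codons), 2))  # 1.72 bits
```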

\noindent
The $\hat{N}_c$ is known to correlate with gene expression, such that genes which are
more biased in their codon usage tend to be more highly expressed. However, how
much of this bias results from matching codons to the tRNA pool versus choosing
optimal codons for a given tRNA anticodon is unknown.

In Fig xxxx, we break down the data from Fig yyy into 2 different components.
For each gene in the \textit{E. coli} genome, we ask: a) for three-, four-, and
six-fold redundant amino acids, what is the effective number of tRNAs (under
the naive assumption that 3rd-position purines and pyrimidines are
grouped separately)? and b) for each tRNA in our grouping schema, what is the effective
number of codons? To maintain as much information as possible at this stage, we
again represent these numbers as vectors in a heatmap, where the genes (in rows)
are listed from highest transcript abundance to lowest transcript abundance (top
to bottom), as in fig yyy. 

It is immediately apparent from this depiction that variation in tRNA choice
is closely related to gene expression, while variation in which codon is used for a
particular tRNA provides virtually no information about a gene's expression
level. When we aggregate this information together according to
Equation~\ref{eq:InfoBias}, the resulting correlations show that the effective
number of tRNAs used in a gene correlates with gene expression much better than
the effective number of codons given tRNA (xxxx and yyyy, respectively). In
fact, the effective number of tRNAs correlates better than simply computing the
effective number of codons (xxx).

However, although this result is extremely strong for \emph{E. coli}, it does
not appear to perform substantially better for other organisms. IS THIS TRUE?? 


\subsubsection{Any other extensions to put in here?}
perhaps a plot /discussion of position dependency?


\section{Discussion} We have re-formulated the $\hat{N}_c$ using the principles
of information theory. Doing so has allowed us to determine blha blah blah


\section{Methods} 
\subsection{Summary of Novembre methodology}
Lay out the exact Novembre methodology here.

\subsection{Summary of Sun et al. methodology}
Lay out the exact Sun et al. methodology here.

\subsection{Information theoretical approach}
Make this section a more exhaustive equation based formulation of exactly what
we do.

\begin{equation} 
\label{eq:gamma} 
\gamma = \frac{\hat{N}_{c,info}-1} {\max(\hat{N}_{c,info})-1}
\end{equation}

\noindent
where the maximum of $\hat{N}_{c, info}$ for each individual amino acid of
$k$-fold redundancy class is simply $k$.

\subsection{Adapting $\hat{N}_{c,info}$ to tRNA bias}
The normalized $\hat{N}_{c,info}$ can be adapted to tRNA-level bias by simply
replacing codons with tRNAs in Eq.~\ref{eq:geneLevelEntropy} in order to
calculate the effective number of tRNAs per amino acid:

\begin{equation}
H_{total}=\sum_{i=1}^{20} p(aa_i)  H(tRNA | aa_i)
\end{equation}

Conversely, we can also calculate the effective number of codons per tRNA by
replacing amino acids with tRNAs:

\begin{equation}
H_{total}=\sum_{i} p(tRNA_i)  H( C | tRNA_i)
\end{equation}

The additivity property assures that the information needed to
encode a codon given an amino acid is the sum of: (1) the information needed to
encode a codon given a tRNA and (2) the information needed to encode a tRNA
given an amino acid. The total information entropy for an individual codon can
thus be written as:

\begin{equation}
\label{eq:add_property}
H(C|aa) =  H(C|tRNA) + H(tRNA | aa) 
\end{equation}


where:

\begin{equation}
\label{eq:add_property2}
H(C|tRNA)= \sum_{tRNA_{aa}}  p(tRNA_{aa}) H(C|tRNA_{aa})
\end{equation}

is the average amount of information needed to specify a codon given a tRNA
(weighted by the tRNA usage probability, $p(tRNA_{aa})$).

%\begin{figure}
%\begin{center}
%\includegraphics[width=4in]{Figs/additivity.pdf}
%\caption{The effective number of states of a variable with probabilities
%($\frac{1}{4}$ and $\frac{3}{4}$) can be computed using the additivity property
%and the entropy of uniform variables. The amount of information needed to code
%for the purple nodes is $\log_2(4) = 2$ bits; given the red node on the left, no
%information is needed to code for the purple node below; given the red node on
%the right, $\log_2(3)$ bits are needed to code for the purple nodes below;
%finally the amount of information needed to specify the purple nodes -- 2 bits
%-- is the sum of the information coding for the red nodes --$H(\{ \frac{1}{4},
%\frac{3}{4} \})$ -- plus the information for the purple nodes given the red ones
%-- $\frac{3}{4} \log_2 3$ --.
%}
%\label{fig:additivity_example}
%\end{center}
%\end{figure}


\subsection{ANDREA I STOPPED EDITING HERE}

\subsection{Effective number of codons}

\textbf{In the final version, all of this will be removed, I think, but we can just keep the figure.}

The entropy $h(X)$ of a random variable $X$ measures the logarithm of the
effective number of states of $X$.

Thus, for amino acid $A$, the effective number of its synonymous codons can be
measured as $k_A= e ^ { h(C_A)}$, where $C_A$ denotes a synonymous codon of
amino acid $A$. To estimate the probabilities of $C_A$, we use a prior of one observation per amino acid (see Sec.~\ref{sec:priors}).
The problem is then how to define the effective number of codons for the entire
genome or gene.

A simple solution is to take the sum over all amino acids: $K_{uw}=\sum_{A}
k_A$. However, this measure does not account for the frequencies of the amino
acids: $K_{uw}$ does not weight the codon bias by amino-acid usage.

The natural way to account for the frequencies of the different amino acids is to
consider the entropy rate of a Markov chain: $h=\sum_{A} p(A)  h(C_A)$. The
exponential of this entropy, $e^{h}$, is the effective number of codons per
amino acid. However, we would like a number between 20 and 61 which
represents the total effective number of codons. To that end, we propose to
measure the compressibility $\gamma$ of the gene:


\begin{equation} \label{compressibility:eq} \gamma=  \frac{e^{h} -1}{e^{h_r}-1}
\end{equation}

where ${h_r}$ is the maximum entropy: $h_r = \sum_{A} p(A) \log{|C_A|}$, $|C_A|$
being the total number of synonymous codons of amino acid $A$.

$\gamma$ lies between 0 and 1, and tells how biased the gene is
overall. The compressibility can simply be rescaled to lie between 20 and 61:


\begin{equation} \label{eq:Kw} K_w=  20 + \gamma  \times (61-20)
\end{equation}
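To make the recipe concrete, the sketch below computes $h$, $h_r$, $\gamma$ and $K_w$ from raw codon counts. The function name and the two-amino-acid table are hypothetical stand-ins for the full genetic code, no pseudo-counts are applied, and natural logarithms are used so that effective numbers are $e^{h}$:

```python
import math

def gamma_and_Kw(codon_counts, synonymous):
    """Compressibility gamma and rescaled effective codon number K_w.

    codon_counts maps codon -> count; synonymous maps amino acid -> its codons.
    (Hypothetical helper for illustration; no pseudo-counts applied here.)
    """
    N = sum(codon_counts.values())
    h = 0.0    # entropy rate: sum_A p(A) h(C_A)
    h_r = 0.0  # maximum entropy rate: sum_A p(A) log|C_A|
    for codons in synonymous.values():
        n_aa = sum(codon_counts.get(c, 0) for c in codons)
        if n_aa == 0:
            continue
        p_aa = n_aa / N
        ps = [codon_counts.get(c, 0) / n_aa for c in codons]
        h += p_aa * -sum(p * math.log(p) for p in ps if p > 0)
        h_r += p_aa * math.log(len(codons))
    gamma = (math.exp(h) - 1) / (math.exp(h_r) - 1)
    return gamma, 20 + gamma * (61 - 20)

syn = {"V": ["GTA", "GTT", "GTC", "GTG"], "K": ["AAA", "AAG"]}
# Uniform codon usage -> gamma = 1, K_w = 61; fully biased -> gamma = 0, K_w = 20
g_uni, k_uni = gamma_and_Kw({c: 1 for cs in syn.values() for c in cs}, syn)
g_bias, k_bias = gamma_and_Kw({"GTA": 4, "AAA": 2}, syn)
```

The two limiting cases behave as expected: uniform usage gives the maximal rescaled value, and a gene using a single codon per amino acid gives the minimal one.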

It is interesting to observe that the two measures, $K_w$ and $K_{uw}$, can be
very close in some particular cases.  For instance, let us consider the
hypothetical case in which all amino acids have the same number of synonymous
codons, $|C|$.  In that case, $h_r = \log{|C|}$ and

\begin{equation} \gamma=  \frac{20\times  K_g -20}{20 \times |C|  -20}
\end{equation}

where $K_g$ is the geometric mean of $k_A$.

Replacing $61$ with $|C| \times 20$, we get

\begin{equation} K_{w}=  K_g \times 20 \end{equation}


which differs from $K_{uw}$ only in that it uses the weighted
geometric mean of the $k_A$ instead of the unweighted arithmetic mean.
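This identity is easy to verify numerically. The check below assumes two hypothetical amino acids, each with $|C| = 2$ synonymous codons, with arbitrary usage frequencies and biases:

```python
import math

# Two hypothetical amino acids, each with |C| = 2 synonymous codons,
# used with different frequencies and different within-amino-acid biases.
p_A = [0.7, 0.3]                    # amino-acid usage p(A)
dists = [[0.9, 0.1], [0.5, 0.5]]    # codon usage within each amino acid

h = sum(pa * -sum(p * math.log(p) for p in d if p > 0)
        for pa, d in zip(p_A, dists))
h_r = sum(pa * math.log(len(d)) for pa, d in zip(p_A, dists))   # = log|C|

gamma = (math.exp(h) - 1) / (math.exp(h_r) - 1)
K_w = 20 + gamma * (20 * 2 - 20)    # 61 replaced by |C| * 20 = 40

K_g = math.exp(h)                   # weighted geometric mean of the k_A
assert abs(K_w - 20 * K_g) < 1e-9   # K_w = 20 * K_g in this special case
```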

The same approach can be extended to 1st, 2nd or $n$th order by replacing $h$
and $h_r$ with the average of $h$ over the states, and likewise for $k_A$ in the
unweighted measure.



\begin{figure} \begin{center} \includegraphics[width=4in]{Figs/fig1.pdf}
\caption{Illustration of how information theory can be used to measure codon
bias. Numbers refer to the entire \textit{E.\ coli} genome.} \label{fig1:illustrative}
\end{center} \end{figure}





\subsection{GC-corrected null model}
\label{Sec:GC-corrected}


% GENE, GENOME, SEQUENCE: CLARIFY
In defining the information bias, we compare the effective number of codons
with a null model in which each codon is used with uniform probability. 
This is the simplest possible null model, but more refined choices can be adopted:
in particular, we are interested in a model with the same GC-content, i.e.\ the same fraction 
of nucleotides which are either G or C. In fact, the GC-content 
might play an important role in driving the codon bias:
a null model with the same GC-content will have a stronger codon bias than a null model 
where codons are used with equal probability. Comparing the codon bias of the actual genome
with a null model which accounts for the GC bias informs us about
the extent to which the GC bias explains the codon bias.


We now discuss how we define the codon probabilities in the GC-corrected null model.
These probabilities must be defined so that, on average, the GC-content of a gene sampled from the null model
equals the GC-content of the real gene.
However, knowing the average GC-content does not determine
the whole probability distribution of the codons, just as knowing
the average of a random variable does not determine its probability distribution:
for that, we would also need the standard deviation
and all the higher moments.

This kind of under-determined problem is well known in statistical physics:
for instance, we might be interested in the distribution of particle velocities in a gas
when only the average kinetic energy, set by the temperature, is known.
This problem is solved in statistical physics by observing that the velocity distribution
at thermal equilibrium is the one with maximum entropy.
We now translate the same concept into the language of codons.


For a given gene, there is a very large number of codon sequences which
encode the same amino acids and have the exact same GC fraction, $\rho$.
Although impossible in practice, let us imagine that we can write a list of all and only the sequences
whose GC-content equals $\rho$.
Out of this enormous list, let us imagine
we pick one sequence at random. What would be the codon probability for the amino acid valine, for instance?
Valine can be encoded by four codons: GTA, GTT, GTC or GTG.
The first two have a GC-content of $1/3$, while the last two have $2/3$.
Now, let us assume that the GC-content $\rho$ is very high.
Then we would expect that genes which preferentially use
GTA or GTT would likely not have such a high
GC-content and thus would not 
appear in the list mentioned above. Instead, $p(\textrm{GTG})$ and $p(\textrm{GTC})$ would be higher,
or even much higher for high $\rho$.

The algorithm described above
is a way to compute the codon probabilities 
knowing only the total GC-content: we first write a
list (an ensemble) of all possible sequences which have
the required GC-content, and then we measure
the codon frequencies of those sequences.
This solves our under-determined problem:
the codon distribution is whatever comes out of the ensemble
which satisfies the minimal requirement of having the correct GC-content $\rho$.

As already mentioned, such an algorithm is not practical because 
the ensemble would be far too large to be enumerated exactly. One could use
Monte Carlo simulations to get an approximate solution, but we do not actually need to,
because the probability distribution of the ensemble can be computed analytically.
In fact, it can be shown that the probability distribution of the ensemble is the one with maximum entropy, 
which in our case is an exponential distribution, exactly like the energy distribution in statistical physics (provided
that the number of codons is large enough).
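For a sequence containing a single amino acid, the claim can even be checked by brute force. The sketch below (valine only, small illustrative $n$) enumerates every codon sequence with a fixed total GC-content and compares the ensemble codon frequencies with the exponential (Boltzmann) form; in this single-amino-acid case the agreement is exact, while for mixed sequences it holds in the limit of long genes:

```python
import itertools
import math

# Valine codons and their "energies" E = number of G/C nucleotides
codons = {"GTA": 1, "GTT": 1, "GTC": 2, "GTG": 2}
n = 6                      # sequence length in codons
total_gc = 8               # fixed total GC-content of the whole sequence

# The ensemble: all codon sequences with exactly this GC-content
ensemble = [s for s in itertools.product(codons, repeat=n)
            if sum(codons[c] for c in s) == total_gc]
freq = {c: sum(s.count(c) for s in ensemble) / (n * len(ensemble))
        for c in codons}

def mean_E(beta):
    """Average codon GC-content under p(C) ~ exp(-beta * E_C)."""
    w = {c: math.exp(-beta * E) for c, E in codons.items()}
    return sum(codons[c] * w[c] for c in codons) / sum(w.values())

# mean_E decreases monotonically with beta, so bisect to match total_gc / n
lo, hi = -20.0, 20.0
for _ in range(100):
    beta = (lo + hi) / 2
    lo, hi = (lo, beta) if mean_E(beta) < total_gc / n else (beta, hi)

Z = sum(math.exp(-beta * E) for E in codons.values())
boltzmann = {c: math.exp(-beta * E) / Z for c, E in codons.items()}
assert all(abs(freq[c] - boltzmann[c]) < 1e-6 for c in codons)
```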
%In order to be more concrete, let us focus on a simple example where a gene encodes for valine only: 
%each amino can be encoded either by Let us assume that GTC is used $40\%$ and the other codons $20\%$.
%This would correspond to $\rho= 53\% = (0.6 \times 1 + 0.5 \times 2 ) / 3$.
%The same GC-content would be achieved by a gene which uses, for instance,  GTG $= 40\%$ and the
%rest $20\%$. Another possibility would be to use both GTG and GTC $30$ and the rest $20$. 
%
%If we pick one of all this possibility uniformly at random, what would be the average probability 
%of using GTC? Sometimes this number would be $40\%$, sometimes $20\%$, but since both
% GTC and GTG would be used $60$ in total, GTC \textit{on average} would be $30\%$ for symmetry.
%
%Is it possible to compute this distribution for more complex cases where several amino-acids are present? The same
%problem has already been solved in statistical physics, for instance, when we look at the energy levels
%of a given physical systems given its total energy.  Here the energy is the amount of GC-content.
%For instance valine has two energy levels ${1, 2}$ each one with degeneracy ${2}$. 
%
We can write the probability of using a codon $C_a$ with GC-content $E_C$ for amino acid $a$ as:
\begin{equation}
p(C_a) =  e^{- \beta E_C} / Z_a
\end{equation}


where $E_C$ is the number of G or C nucleotides in codon $C$, and $Z_a$ is the normalization factor which ensures that:

\begin{equation}
\sum_{C_a} p(C_a) = 1
\end{equation}


and $\beta$ is such that:

\begin{equation}
\sum_a n_a \sum_{C_a} E_C p(C_a) = \textrm{total GC nucleotides}
\end{equation}

where $n_a$ is the total occurrences of amino-acids $a$.

$\beta = 0$ is the special case in which each codon is used with uniform probability, whereas negative values of $\beta$ yield a model with a higher GC-content.
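In practice, $\beta$ can be found numerically, since the expected GC count is monotonic in $\beta$. The sketch below fits $\beta$ by bisection for a hypothetical two-amino-acid gene; a real application would use the full codon table and the gene's actual amino-acid counts:

```python
import math

gc = lambda codon: sum(nt in "GC" for nt in codon)   # E_C for a codon

# Hypothetical toy gene: occurrences n_a of each amino acid and its codons
synonymous = {"K": ["AAA", "AAG"], "V": ["GTA", "GTT", "GTC", "GTG"]}
n_aa = {"K": 10, "V": 5}
target_gc = 18          # total G/C nucleotides to reproduce on average

def expected_gc(beta):
    """Expected total GC count under p(C_a) = exp(-beta * E_C) / Z_a."""
    total = 0.0
    for aa, codons in synonymous.items():
        w = [math.exp(-beta * gc(c)) for c in codons]
        total += n_aa[aa] * sum(gc(c) * wi for c, wi in zip(codons, w)) / sum(w)
    return total

# expected_gc decreases monotonically with beta: bisect to hit target_gc
lo, hi = -50.0, 50.0
for _ in range(200):
    beta = (lo + hi) / 2
    lo, hi = (lo, beta) if expected_gc(beta) < target_gc else (beta, hi)

# This toy gene is more GC-rich than the uniform model, so beta < 0 here
assert beta < 0 and abs(expected_gc(beta) - target_gc) < 1e-6
```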




\subsection{Using priors for the codon distribution}
\label{sec:priors}

In computing the effective number of codons, we need to estimate the codon probabilities from the data.
The simplest approach is to estimate the probability of seeing a codon, $p(C_a)$, as the count of that codon divided by the total number of codons, $N$:

\begin{equation}
p(C_a) = \frac{C_a} {N},
\end{equation}

where, with a slight abuse of notation, $C_a$ in the numerator denotes the number of occurrences of the codon.


Although very simple, the problem with the previous equation is that very short genes might appear to have a very
strong codon bias simply because of the sparsity of the data: many codons are never used, they are assigned zero probability, and the bias appears strong.
In the Bayesian approach to estimating probabilities, one should never assign zero probability to any event. The simplest solution is to define pseudo-counts for each codon. Here, we assume that we have had $|A| = 20$ prior observations (one per amino acid), in which each codon was used with uniform probability:


\begin{equation}
p(C_a) = \frac{C_a + 1 / |C_a|} {N + |A| }
\end{equation}

where $|A| = 20$ is the total number of amino acids. 
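A minimal sketch of this estimator (hypothetical function name; the toy table contains only two amino acids, so $|A|$ is taken as the number of amino acids actually in the table to keep the probabilities normalized):

```python
def smoothed_codon_probs(codon_counts, synonymous):
    """Pseudo-count estimate: one prior observation per amino acid,
    spread uniformly over its synonymous codons.

    With the full genetic code, len(synonymous) = |A| = 20.
    """
    N = sum(codon_counts.values())
    A = len(synonymous)
    return {c: (codon_counts.get(c, 0) + 1 / len(codons)) / (N + A)
            for codons in synonymous.values() for c in codons}

syn = {"K": ["AAA", "AAG"], "V": ["GTA", "GTT", "GTC", "GTG"]}
probs = smoothed_codon_probs({"AAA": 5, "GTG": 3}, syn)
# Unseen codons get small but non-zero probability, and everything normalizes
assert probs["AAG"] > 0 and abs(sum(probs.values()) - 1) < 1e-12
```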


For the GC-corrected null model instead, we can assume that our prior information comes from the null model itself:

\begin{equation}
p(C_a) = \frac{C_a + p_{\mu}(C_a | A)} {N + |A| }
\end{equation}

where $p_{\mu}(C_a | A)$ is the probability of using codon $C_a$ given the amino acid $A$ in the GC-corrected null model.
The only complication with the assumption above is that $p_{\mu}(C_a | A)$ depends on the $p(C_a)$ themselves;
however, the equation can easily be solved iteratively, yielding a self-consistent solution.




\end{document}  


