\documentclass{article}



\usepackage{arxiv}

\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc}    % use 8-bit T1 fonts
\usepackage{hyperref}       % hyperlinks
\usepackage{url}            % simple URL typesetting
\usepackage{booktabs}       % professional-quality tables
\usepackage{amsmath}        % displaying equations
\usepackage{amsfonts}       % blackboard math symbols
\usepackage{amsthm}         % theorem setup
\usepackage{nicefrac}       % compact symbols for 1/2, etc.
\usepackage{microtype}      % microtypography
\usepackage{lipsum}         % Can be removed after putting your text content
\usepackage{graphicx}
\usepackage{natbib}
\usepackage{doi}

\usepackage[table]{xcolor}
\newtheorem{problem}{Problem}

\title{A template for the \emph{arxiv} style}

%\date{September 9, 1985}	% Here you can change the date presented in the paper title
%\date{} 					% Or removing it

\author{ \href{https://orcid.org/0000-0000-0000-0000}{\includegraphics[scale=0.06]{orcid.pdf}\hspace{1mm}David S.~Hippocampus}\thanks{Use footnote for providing further
		information about author (webpage, alternative
		address)---\emph{not} for acknowledging funding agencies.} \\
	Department of Computer Science\\
	Cranberry-Lemon University\\
	Pittsburgh, PA 15213 \\
	\texttt{hippo@cs.cranberry-lemon.edu} \\
	%% examples of more authors
	\And
	\href{https://orcid.org/0000-0000-0000-0000}{\includegraphics[scale=0.06]{orcid.pdf}\hspace{1mm}Elias D.~Striatum} \\
	Department of Electrical Engineering\\
	Mount-Sheikh University\\
	Santa Narimana, Levand \\
	\texttt{stariate@ee.mount-sheikh.edu} \\
	%% \AND
	%% Coauthor \\
	%% Affiliation \\
	%% Address \\
	%% \texttt{email} \\
	%% \And
	%% Coauthor \\
	%% Affiliation \\
	%% Address \\
	%% \texttt{email} \\
	%% \And
	%% Coauthor \\
	%% Affiliation \\
	%% Address \\
	%% \texttt{email} \\
}

% Uncomment to remove the date
%\date{}

% Uncomment to override  the `A preprint' in the header
%\renewcommand{\headeright}{Technical Report}
%\renewcommand{\undertitle}{Technical Report}
\renewcommand{\shorttitle}{\textit{arXiv} Template}

%%% Add PDF metadata to help others organize their library
%%% Once the PDF is generated, you can check the metadata with
%%% $ pdfinfo template.pdf
\hypersetup{
pdftitle={A template for the arxiv style},
pdfsubject={q-bio.NC, q-bio.QM},
pdfauthor={David S.~Hippocampus, Elias D.~Striatum},
pdfkeywords={First keyword, Second keyword, More},
}

\begin{document}
\maketitle

\begin{abstract}
	\lipsum[1]
\end{abstract}


% keywords can be removed
\keywords{First keyword \and Second keyword \and More}


%\chapter{Bayesian Data Analysis}

\section{Bayesian Hypothesis Testing} 

\subsection{Conditional probability}

Conditional probability is the probability of an event occurring given that another event has already occurred.
It quantifies how the occurrence of one event affects the likelihood of another event.
The \emph{conditional probability} of event $H$ given event $E$ is denoted as $P(H|E)$ and defined by the formula
\begin{equation*}
P(H|E) = \frac{P(H\cap E)}{P(E)},
\end{equation*}
where $P(E)>0$ is the probability that event $E$ occurs and $P(H\cap E)$ is the probability that both events $H$ and $E$ occur.
This formula updates the probability of the event $H$ based on new information about $E$.
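As a quick numerical check of this definition, the following sketch computes $P(H|E)$ for a fair six-sided die (a hypothetical example, not from the text), with $H$ = ``the roll is even'' and $E$ = ``the roll is at least 4'':

```python
# Check of P(H|E) = P(H ∩ E) / P(E) on a toy sample space:
# a fair six-sided die, H = "roll is even", E = "roll is at least 4".
from fractions import Fraction

outcomes = range(1, 7)
p = Fraction(1, 6)  # uniform probability of each outcome

P_E = sum(p for x in outcomes if x >= 4)                        # P(E) = 3/6
P_H_and_E = sum(p for x in outcomes if x % 2 == 0 and x >= 4)   # P(H ∩ E) = 2/6
P_H_given_E = P_H_and_E / P_E

print(P_H_given_E)  # 2/3
```

Exact fractions are used so the conditional probability comes out as $2/3$ with no floating-point noise.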

\subsection{Bayes' theorem}

Bayes' theorem calculates the probability of an event by incorporating prior knowledge of conditions related to that event.
It is applied when we know the conditional probability $P(E|H)$ but want to determine $P(H|E)$.
Since the definition of conditional probability gives
\begin{equation*}
P(H\cap E)={P(E)}P(H|E)={P(H)}P(E|H),
\end{equation*}
\emph{Bayes' theorem} relates $P(H|E)$ and $P(E|H)$ via the following formula
\begin{equation}\label{eq:bayes-theorem}
P(H|E)=\frac{P(H) P(E|H)}{P(E)},
\end{equation}
where the prior knowledge is represented by the \emph{prior probability} $P(H)$, the \emph{marginal likelihood} $P(E)$ is the total probability of the evidence and acts as a normalizing constant, $P(E|H)$ is the \emph{likelihood} of the evidence under hypothesis $H$ (often known from data or experiments), and the \emph{posterior probability} $P(H|E)$ is the probability of $H$ after taking $E$ into account.
Bayes' theorem is widely used in Bayesian statistics and various applications such as parameter estimation, model comparison and selection (Sec.~\ref{sec:bayes-model-comparison}), and machine learning algorithms.
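Bayes' theorem \eqref{eq:bayes-theorem} can be illustrated with a toy diagnostic-test calculation; all numbers below (prevalence, sensitivity, false-positive rate) are hypothetical:

```python
# Numerical illustration of Bayes' theorem with hypothetical numbers:
# H = "patient has the condition", E = "test is positive".
P_H = 0.01              # prior P(H): prevalence of the condition
P_E_given_H = 0.95      # likelihood P(E|H): test sensitivity
P_E_given_not_H = 0.05  # false-positive rate P(E|H^c)

# Marginal likelihood P(E) by the law of total probability.
P_E = P_E_given_H * P_H + P_E_given_not_H * (1 - P_H)

# Posterior P(H|E) via Bayes' theorem.
P_H_given_E = P_E_given_H * P_H / P_E
print(round(P_H_given_E, 3))  # 0.161
```

Despite the accurate test, the low prior drags the posterior down to about $0.16$, which is exactly the kind of update the theorem formalizes.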

\subsection{Bayesian inference}

Bayesian inference is an approach to statistical inference in which Bayes' theorem is used to revise prior beliefs about a hypothesis in light of newly observed data (called \emph{evidence}).
This process combines prior probability with the likelihood of observed data under each hypothesis to produce a posterior probability.

Corresponding to Bayes' theorem \eqref{eq:bayes-theorem}, $H$ represents any hypothesis whose probability can be influenced by new evidence.
The hypothesis can be specified for a particular model explaining the relationship between variables (Sec.~\ref{sec:bayes-model-comparison}), or for the value of a particular parameter within a given model.
The prior distribution $P(H)$, which can be subjective or based on previous knowledge or empirical data, serves as a starting point for the Bayesian inference process.
$P(H)$ represents our initial belief about the probability of the hypothesis $H$ before considering the evidence $E$.
The total probability of the evidence $P(E)$ normalizes the posterior probabilities under all hypotheses.
$P(E|H)$ quantifies how probable a particular set of data $E$ is, under the assumption that the hypothesis $H$ holds.
These probabilities are evaluated and incorporated in Bayes' theorem to update the posterior distribution $P(H|E)$, which represents the belief about the hypothesis $H$ after considering the evidence $E$.
When using Bayesian inference to estimate parameter values, the resulting posterior probability density offers researchers much more information than the point estimates provided by frequentist methods.

The choice of prior knowledge can have a significant impact on the results of Bayesian inference. Different prior distributions may lead to different posterior distributions, particularly when there is limited or conflicting evidence. Selecting appropriate prior knowledge often involves a balance between incorporating existing information and allowing the observed data to influence the results appropriately.
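The sensitivity to the prior can be sketched with a conjugate Beta--Binomial update (an assumed toy setup, not from the text): the same data yield different posterior means under different priors.

```python
# Prior sensitivity in a Beta-Binomial model (hypothetical setup):
# a Beta(alpha, beta) prior on a success probability, updated with
# k successes in n trials, gives a Beta(alpha + k, beta + n - k) posterior.
def posterior_mean(alpha, beta, k, n):
    """Posterior mean of the success probability."""
    return (alpha + k) / (alpha + beta + n)

k, n = 7, 10
flat = posterior_mean(1, 1, k, n)         # flat Beta(1, 1) prior
skeptical = posterior_mean(10, 10, k, n)  # informative prior centered at 1/2

print(round(flat, 3), round(skeptical, 3))  # 0.667 0.567
```

With only ten trials the informative prior pulls the estimate noticeably toward $1/2$; as $n$ grows, both posterior means converge toward the data fraction $k/n$.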


\subsection{Bayes factors}\label{sec:bayes-factor}

In probability theory, the \emph{odds} of an event $H$ are expressed as
\begin{equation*}
\text{odds}(H) = \frac{P(H)}{1-P(H)} = \frac{P(H)}{P(H^c)},
\end{equation*}
where $H^c$ is the opposite event of $H$.
For two competing hypotheses $H_0$ and $H_1$, the \emph{prior odds} of $H_0$ are $P(H_0)/P(H_1)$ and the \emph{posterior odds} of $H_0$  given evidence $E$ are $P(H_0|E)/P(H_1|E)$.
The \emph{Bayes factor (BF)} for hypothesis $H_0$ given evidence $E$ is defined to be the ratio of the posterior odds to the prior odds 
\begin{equation*}
BF_{0,1}=\frac{\text { posterior odds }}{\text { prior odds }}=\frac{P(H_0 | E) / P(H_1 | E)}{P(H_0) / P(H_1)}.
\end{equation*}
Applying Bayes' theorem to $P(H_0 | E)$ and $P(H_1 | E)$, we obtain
\begin{equation*}
\frac{P(H_0 | E)}{P(H_1 | E)} =\frac{P(E | H_0) P(H_0) / P(E)}{P(E | H_1) P(H_1) / P(E)} =\frac{P(H_0)}{P(H_1)} \times \frac{P(E | H_0)}{P(E | H_1)}.
\end{equation*}
Therefore, the Bayes factor can be expressed as the ratio of likelihoods
\begin{equation*}
BF_{0,1}=\frac{P(E | H_0)}{P(E | H_1)},
\end{equation*}
which means that the Bayes factor can be calculated without the need to first determine posterior probabilities or odds.
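As a sketch of this shortcut, the following hypothetical simple-vs-simple test computes $BF_{0,1}$ directly from the two likelihoods, with no posterior probabilities involved:

```python
# Bayes factor as a likelihood ratio (hypothetical simple-vs-simple test):
# H0: coin bias p = 0.5, H1: p = 0.7; evidence E = 6 heads in 10 tosses.
from math import comb

def binom_lik(p, k, n):
    """Binomial likelihood of k heads in n tosses with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 6, 10
BF01 = binom_lik(0.5, k, n) / binom_lik(0.7, k, n)
print(round(BF01, 3))  # 1.025
```

A value so close to $1$ means these data barely discriminate between the two biases; the binomial coefficient cancels in the ratio, so only the $p$-dependent factors matter.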

Bayes factors offer a powerful and flexible tool for hypothesis testing within the Bayesian framework. They provide a quantitative measure of how much more (or less) likely the observed data are under one hypothesis than under the other.
$BF_{0,1}>1$ means evidence favors the hypothesis $H_0$, while $BF_{0,1}<1$ means evidence favors $H_1$.
Harold Jeffreys \cite{jeffreysTheoryProbability1961} gave a scale for interpreting the strength of evidence provided by Bayes factors in hypothesis testing.
As reproduced in Table~\ref{tab:Jeffreys-scale}, it categorizes the Bayes factor values into ranges that correspond to different grades of evidence against the hypothesis $H_0$.

\begin{table}[htp]
\caption{Jeffreys' grade of evidence \cite{jeffreysTheoryProbability1961}}\label{tab:Jeffreys-scale}
\begin{tabular}{|l|l|l|}
\hline \rowcolor{gray!30}
Grade & Range of $\log_{10} BF_{0,1}$ & Strength of evidence against $H_0$ \\ \hline
0     & $>0$                          & Evidence supports $H_0$            \\ \hline
1     & $-1/2$ to $0$                 & Barely worth mentioning            \\ \hline
2     & $-1$   to $-1/2$              & Substantial                        \\ \hline
3     & $-3/2$ to $-1$                & Strong                             \\ \hline
4     & $-2$   to $-3/2$              & Very strong                        \\ \hline
5     & $<-2$                         & Decisive                           \\ \hline                        
\end{tabular}
\end{table}
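A small helper function (illustrative only; boundary conventions follow Table~\ref{tab:Jeffreys-scale}) maps a Bayes factor to Jeffreys' grade:

```python
# Map log10 of BF_{0,1} to Jeffreys' grades of evidence against H0
# (boundary conventions chosen to match the table; illustrative only).
from math import log10

def jeffreys_grade(bf01):
    x = log10(bf01)
    if x > 0:
        return 0, "Evidence supports H0"
    if x > -0.5:
        return 1, "Barely worth mentioning"
    if x > -1:
        return 2, "Substantial"
    if x > -1.5:
        return 3, "Strong"
    if x > -2:
        return 4, "Very strong"
    return 5, "Decisive"

print(jeffreys_grade(0.03))  # log10(0.03) ~ -1.52: grade 4, "Very strong"
```

Note that the scale is conventionally read on a logarithmic axis, which is why the table is stated in terms of $\log_{10} BF_{0,1}$ rather than the Bayes factor itself.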

\begin{problem}[Evidence against an association between GRBs and SNe]\label{prob:bayes-hypo-test}

\cite{grazianiEvidenceAssociationGammaray1999}

$H_0$: The association between SNe and GRBs is real.
    
$H_1$: There is no association between SNe and GRBs.
    
We set the prior probabilities $P(H_0) = P(H_1) = 1/2$.
\end{problem}
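With equal priors $P(H_0)=P(H_1)=1/2$, the prior odds are $1$, so the posterior odds equal the Bayes factor and $P(H_0|E)=BF_{0,1}/(1+BF_{0,1})$. A minimal sketch with a hypothetical Bayes-factor value (not a number from the cited analysis):

```python
# With equal priors P(H0) = P(H1) = 1/2, prior odds = 1, so
# posterior odds = BF01 and P(H0|E) = BF01 / (1 + BF01).
BF01 = 3.0  # hypothetical Bayes factor, for illustration only
posterior_odds = BF01 * 1.0          # prior odds are exactly 1
P_H0_given_E = BF01 / (1 + BF01)
print(P_H0_given_E)  # 0.75
```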

\subsection{Bayesian parameter estimation}\label{sec:bayes-parameter-estimation}

\begin{problem}[Fitting synthetic spectra within the Bayesian inference framework]\label{prob:bayes-parameter-estimation}

\cite{acunerNondissipativePhotospheresGRBs2019}

\end{problem}
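As a generic sketch of Bayesian parameter estimation (an assumed one-parameter Gaussian toy model, \emph{not} the spectral-fitting setup of the cited work), the posterior can be evaluated on a grid and summarized by its mean:

```python
# Grid-based posterior for one parameter (hypothetical toy model):
# Gaussian likelihood for the mean mu with known sigma, flat prior on a grid.
from math import exp

data = [1.9, 2.1, 2.0, 2.2, 1.8]
sigma = 0.2

grid = [i / 100 for i in range(100, 301)]  # candidate mu values in [1.0, 3.0]

def log_lik(mu):
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

weights = [exp(log_lik(mu)) for mu in grid]
Z = sum(weights)                          # discrete marginal likelihood
posterior = [w / Z for w in weights]      # normalized posterior on the grid

mu_mean = sum(mu * p for mu, p in zip(grid, posterior))
print(round(mu_mean, 2))  # posterior mean, close to the sample mean 2.0
```

Unlike a single best-fit value, the normalized `posterior` list carries the full shape of the distribution, from which credible intervals can also be read off.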


\subsection{Bayesian model comparison}\label{sec:bayes-model-comparison}
Bayes factors extend naturally from pairwise hypothesis testing to model comparison: treating each candidate model as a hypothesis, the ratio of the models' marginal likelihoods (the Bayesian \emph{evidence}) quantifies which model the data favor.

\begin{problem}[Model comparison between photospheric and synchrotron emission models based on Bayesian evidence]\label{prob:bayes-model-comparison}

\cite{acunerFractionGammaRayBursts2020}

\end{problem}
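A minimal sketch of evidence-based model comparison on a coin-tossing toy problem (an assumed example, \emph{not} the photospheric/synchrotron analysis of the cited work): a fixed-bias model is compared against a model with a uniform prior on the bias.

```python
# Model comparison via marginal likelihoods (hypothetical toy problem):
# M0: fixed coin bias p = 1/2;  M1: bias p uniform on [0, 1].
# Evidence E: k heads observed in n tosses.
from math import comb

def evidence_M0(k, n):
    """P(E|M0): binomial likelihood at the fixed bias p = 1/2."""
    return comb(n, k) * 0.5**n

def evidence_M1(k, n):
    """P(E|M1): binomial likelihood integrated over a uniform prior on p,
    which evaluates in closed form to 1/(n + 1)."""
    return 1 / (n + 1)

k, n = 8, 10
BF01 = evidence_M0(k, n) / evidence_M1(k, n)
print(round(BF01, 3))  # 0.483: data mildly favor the flexible model M1
```

The flexible model pays an automatic complexity penalty through the averaging over its prior, so the Bayes factor only favors it when the data demand the extra freedom, as they mildly do here with 8 heads in 10 tosses.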


\subsection*{Chapter 1}

\subsubsection*{Answer to Problem \ref{prob:bayes-hypo-test}}

\subsubsection*{Answer to Problem \ref{prob:bayes-parameter-estimation}}

\subsubsection*{Answer to Problem \ref{prob:bayes-model-comparison}}


\bibliographystyle{unsrtnat}
\bibliography{references}  %%% Uncomment this line and comment out the ``thebibliography'' section below to use the external .bib file (using bibtex) .


%%% Uncomment this section and comment out the \bibliography{references} line above to use inline references.
% \begin{thebibliography}{1}

% 	\bibitem{kour2014real}
% 	George Kour and Raid Saabne.
% 	\newblock Real-time segmentation of on-line handwritten arabic script.
% 	\newblock In {\em Frontiers in Handwriting Recognition (ICFHR), 2014 14th
% 			International Conference on}, pages 417--422. IEEE, 2014.

% 	\bibitem{kour2014fast}
% 	George Kour and Raid Saabne.
% 	\newblock Fast classification of handwritten on-line arabic characters.
% 	\newblock In {\em Soft Computing and Pattern Recognition (SoCPaR), 2014 6th
% 			International Conference of}, pages 312--318. IEEE, 2014.

% 	\bibitem{hadash2018estimate}
% 	Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, and Alon
% 	Jacovi.
% 	\newblock Estimate and replace: A novel approach to integrating deep neural
% 	networks with existing applications.
% 	\newblock {\em arXiv preprint arXiv:1804.09028}, 2018.

% \end{thebibliography}


\end{document}
