Markov Logic~\citep[ML,][]{richardson06mln} is a Statistical Relational Learning 
language based on First Order Logic and Markov Networks. It can be seen as a 
formalism that extends First Order Logic to allow formulae that can be violated 
with some penalty. From an alternative point of view, it is an expressive 
template language that uses First Order Logic formulae to instantiate
Markov Networks of repetitive structure. 

In the ML framework, we model the SRL task by first introducing a set of 
logical predicates\footnote{In cases where it is not obvious whether we refer 
to SRL or ML predicates, we add the prefix SRL or ML, respectively.} such as 
\emph{word(Token,Ortho)} or \emph{role(Token,Token,Role)}. In the case of 
\emph{word/2} the predicate represents a word of a sentence, the type 
\emph{Token} identifies the position of the word and the type \emph{Ortho} its 
orthography. In the case of \emph{role/3}, the predicate represents a semantic 
role. The first token identifies the position of the predicate, the second the 
syntactic head of the argument, and the type \emph{Role} denotes the semantic 
role label. We will refer to predicates such as \emph{word/2} as \emph{observed} 
because they are known in advance. In contrast, \emph{role/3} is \emph{hidden} 
because we need to infer it at test time.
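To make the distinction concrete, the following sketch encodes a short sentence as sets of ground atoms. It is not the authors' implementation; the sentence, role labels, and tuple encoding are illustrative assumptions.

```python
# Observed atoms are known in advance from the input sentence.
# word(Token, Ortho): position and orthography of each word.
observed = {
    ("word", 1, "Haag"),
    ("word", 2, "plays"),
    ("word", 3, "Elianti"),
}

# Hidden atoms must be inferred at test time.
# role(Pred, Arg, Role): predicate position, argument head, role label.
hidden = {
    ("role", 2, 1, "A0"),  # "Haag" is the agent of "plays"
    ("role", 2, 3, "A1"),  # "Elianti" is the patient of "plays"
}

# A possible world pairs the observed atoms with one particular
# assignment to the hidden predicates.
world = observed | hidden
```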

With the ML predicates we specify a set of weighted first order formulae that 
define a distribution over sets of ground atoms of these predicates (or 
so-called \emph{possible worlds}). A set of weighted formulae is called a 
\emph{Markov Logic Network}~(MLN). Formally speaking, an MLN $M$ is a set of 
pairs $\left(\phi,w\right)$ where $\phi$ is a first order formula and $w$ a real 
weight. $M$ assigns the probability
\begin{equation}
\prob\left(\y\right)=\frac{1}{Z}\exp\left(
\sum_{\left(\phi,w\right)\in M} w
\sum_{\boldc\in C^{\phi}}f_{\boldc}^{\phi}\left(\y\right)
\right)
\label{eq:prob}
\end{equation}
to the possible world $\y$.  Here $C^{\phi}$ is the set of all possible
bindings of the free variables in $\phi$ with the constants of our
domain. $f_{\boldc}^{\phi}$ is a feature function that returns 1
if in the possible world $\y$ the \emph{ground formula} we get by
replacing the free variables in $\phi$ by the constants in $\boldc$
is true and 0 otherwise. $Z$ is a normalisation constant. Note that
this distribution corresponds to a Markov Network (the so-called \emph{Ground
Markov Network}) where nodes represent ground atoms and factors represent
ground formulae.
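The distribution in Equation~\ref{eq:prob} can be illustrated by brute force on a toy MLN. The sketch below assumes a hypothetical domain of two constants and a single hidden predicate \emph{p/1} with the weighted formula $p(x)$ (weight $1.5$); all names are illustrative, and real MLNs are far too large to normalise by enumeration.

```python
import itertools
import math

constants = ["a", "b"]
w = 1.5  # weight of the single formula p(x)

# All ground atoms of the hidden predicate p/1.
atoms = [("p", c) for c in constants]

def score(world):
    # Inner sum of Eq. (prob): over all bindings c of the free variable,
    # the feature f_c is 1 iff the ground formula p(c) is true in `world`.
    return w * sum(1 for c in constants if ("p", c) in world)

# Enumerate every possible world, i.e. every subset of the ground atoms.
worlds = [frozenset(s) for r in range(len(atoms) + 1)
          for s in itertools.combinations(atoms, r)]

# Normalisation constant Z and the resulting distribution.
Z = sum(math.exp(score(y)) for y in worlds)
prob = {y: math.exp(score(y)) / Z for y in worlds}
```

Worlds that satisfy more groundings of a positively weighted formula receive exponentially more probability mass, while the penalty view corresponds to the mass lost by violating a grounding.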

In this work we use 1-best MIRA~\citep{crammer01ultraconservative} online 
learning to train the weights of an MLN. To find the SRL assignment with 
maximal \emph{a posteriori} probability according to an MLN and an observed 
sentence, we use Cutting Plane Inference~\citep[CPI,][]{riedel08improving} with 
an ILP base solver. This inference method is used both at test time and within 
the MIRA online learning process.
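The 1-best MIRA update can be sketched generically as follows. This is a minimal sketch in the style of \citet{crammer01ultraconservative}, not the exact implementation used here; feature vectors are assumed to be sparse dictionaries of ground-formula counts, and the clip constant \texttt{C} is an illustrative hyperparameter.

```python
def mira_update(weights, feat_gold, feat_pred, loss, C=1.0):
    """Minimally adjust `weights` so the gold assignment outscores the
    1-best prediction by a margin of `loss`, with step size capped at C."""
    # Difference vector between gold and predicted feature counts.
    diff = {k: feat_gold.get(k, 0.0) - feat_pred.get(k, 0.0)
            for k in set(feat_gold) | set(feat_pred)}
    norm_sq = sum(v * v for v in diff.values())
    if norm_sq == 0.0:
        return weights  # prediction matches gold; no update needed
    # Current score margin of gold over prediction.
    margin = sum(weights.get(k, 0.0) * v for k, v in diff.items())
    # Smallest step that achieves the loss-scaled margin, clipped at C.
    tau = min(C, max(0.0, (loss - margin) / norm_sq))
    new_weights = dict(weights)
    for k, v in diff.items():
        new_weights[k] = new_weights.get(k, 0.0) + tau * v
    return new_weights
```

In the SRL setting, the prediction inside each update is itself produced by the CPI inference step over the current weights.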

