
Given an MLN, a set of weights and a sentence, we need to predict the choice of predicates, frame types, arguments and role labels with maximal \emph{a posteriori} probability~(MAP). To this end we apply a method that is both exact and efficient: Cutting Plane Inference~\citep[CPI,][]{riedel08improving} with Integer Linear Programming~(ILP) as \emph{base solver}. 

Instead of fully instantiating the Markov Network that a Markov Logic Network describes, CPI begins with a subset of factors/edges---in our case the factors that correspond to the local formulae of our model---and solves the MAP problem for this subset using the base solver. It then inspects the solution for ground formulae/features that are not yet included but could, if added, lead to a different solution---this process is usually referred to as \emph{separation}. The ground formulae found in this way are added and the network is solved again. This process is repeated until the network no longer changes.
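The loop above can be sketched on a toy two-variable network. Here \texttt{solve\_map} is a brute-force stand-in for the ILP base solver, and the factor representation (weight/indicator pairs) is purely illustrative, not the actual implementation:

```python
def solve_map(factors, assignments):
    """Return the assignment maximising the sum of weighted factor scores
    (stands in for the ILP base solver)."""
    return max(assignments, key=lambda s: sum(w * f(s) for w, f in factors))

def separate(global_factors, solution):
    """Separation: collect ground formulae the current solution violates,
    i.e. false groundings of positive-weight formulae (and vice versa)."""
    return {(w, f) for w, f in global_factors
            if (w > 0 and not f(solution)) or (w < 0 and f(solution))}

def cpi(local_factors, global_factors, assignments):
    active = set(local_factors)               # start with local factors only
    solution = solve_map(active, assignments)
    while True:
        new = separate(global_factors, solution) - active
        if not new:                           # nothing to add: exact MAP found
            return solution
        active |= new                         # add violated ground formulae
        solution = solve_map(active, assignments)

# toy example: two binary variables, each locally preferred to be 1,
# but a global formula "not (x and y)" with weight 3 penalises both being on
assignments = [(a, b) for a in (0, 1) for b in (0, 1)]
local = [(1.0, lambda s: s[0]), (1.0, lambda s: s[1])]
glob = [(3.0, lambda s: 0 if (s[0] and s[1]) else 1)]
print(cpi(local, glob, assignments))  # exactly one variable stays on
```

The first call to \texttt{solve\_map} switches both variables on; separation then detects the violated global formula, adds it, and the second solve yields the exact MAP solution.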

This type of algorithm could also be realised for an ILP formulation of SRL. However, it would require us to write a dedicated separation routine for each type of constraint we want to add. In Markov Logic, on the other hand, separation can be generically implemented as the search for variable bindings that render a weighted first-order formula true (if its weight is negative) or false (if its weight is positive). In practice this means that we can try new global formulae/constraints without any additional implementation overhead.   
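Generic separation can be sketched as an exhaustive search over variable bindings; the formula, predicate names and solution encoding below are hypothetical illustrations, not the actual model:

```python
from itertools import product

def separate_formula(weight, formula, domains, solution):
    """Return the bindings whose ground formula should be added to the
    network: false groundings if the weight is positive, true groundings
    if the weight is negative."""
    violated = []
    for binding in product(*domains):
        truth = formula(solution, *binding)
        if (weight > 0 and not truth) or (weight < 0 and truth):
            violated.append(binding)
    return violated

# example first-order formula (hypothetical): if token a fills a role of
# token p, then p must be marked as a predicate
def role_implies_predicate(solution, p, a):
    return (p, a) not in solution["role"] or p in solution["predicate"]

# current solution: token 3 has argument 5 but is not marked as a predicate
solution = {"role": {(3, 5)}, "predicate": set()}
tokens = range(6)
print(separate_formula(1.0, role_implies_predicate, [tokens, tokens], solution))
```

The search reports exactly the binding \texttt{(3, 5)}, whose ground formula is then added to the network; real implementations prune this search rather than enumerating all bindings.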

We learn the weights associated with each MLN using the 1-best MIRA~\citep{crammer01ultraconservative} online learning method. As the MAP inference method applied in the inner loop of the online learner we again use CPI with ILP as base solver. 
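A 1-best MIRA update can be sketched as follows, with feature vectors as sparse dictionaries; the feature names and loss value in the usage example are placeholders, not the actual feature set:

```python
def mira_update(weights, gold_feats, pred_feats, loss, C=1.0):
    """Update the weights so that the gold structure outscores the 1-best
    prediction by a margin of at least `loss`, while changing the weights
    as little as possible (step size clipped at C)."""
    keys = set(gold_feats) | set(pred_feats)
    delta = {k: gold_feats.get(k, 0.0) - pred_feats.get(k, 0.0) for k in keys}
    norm = sum(v * v for v in delta.values())
    if norm == 0.0:                 # prediction has identical features: no update
        return weights
    margin = sum(weights.get(k, 0.0) * v for k, v in delta.items())
    tau = min(C, max(0.0, (loss - margin) / norm))
    for k, v in delta.items():
        weights[k] = weights.get(k, 0.0) + tau * v
    return weights

# one update step: the gold output fires "f1", the MAP prediction fires "f2"
w = mira_update({}, {"f1": 1.0}, {"f2": 1.0}, loss=1.0)
print(w)
```

After this single update the gold structure outscores the prediction by exactly the required margin; in the full learner the prediction inside each update is produced by CPI.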
%We will later show that with this set-up joint training of the complete SRL system (as opposed to the training of several local classifiers) is very efficient.  


