\documentclass[10pt,a4paper]{article}

%\linespread{1.3}

%\usepackage{setspace}
%\doublespacing
\renewcommand{\baselinestretch}{1.5} 

\usepackage[T1]{fontenc}
\usepackage{helvet}
\renewcommand*\familydefault{\sfdefault}
\usepackage[top=3cm, bottom=3cm, left=2.5cm, right=2.5cm]{geometry}
\usepackage{pgf}
\usepackage{wrapfig}
\usepackage{subfigure}
\usepackage{caption}
\hyphenation{know-ledge}

\title{A holistic approach to knowledge-based computer vision}
\date{\vspace*{-2cm}}

\begin{document}
\maketitle
\section{Introduction}

Computer vision systems offer a cheap and non-invasive way of monitoring objects, systems and their behaviour.  As the quality of cameras and computer hardware increases (and their cost decreases), more and more companies are becoming interested in adding a computer vision component to their products.  One of the main bottlenecks facing such companies is that developing a new computer vision system can be a labour-intensive task, requiring substantial specific expertise. Indeed, it will often require the development of new algorithms, and even when off-the-shelf machine learning (ML) algorithms can be used, significant effort is still needed to combine these algorithms and to construct the large data sets needed for their training phase.

These problems manifest themselves most clearly in systems whose goal is to interpret the actions that are observed in a piece of video footage.  Such systems not only need to detect and track objects, but also have to interpret their interactions and deduce the underlying state of the domain.  This makes interpreting actions in video footage one of the most challenging computer vision tasks.

Knowledge-based systems are a promising approach towards drastically simplifying the development of these systems. In the knowledge-based approach, the intelligence of a computer vision system does not have to be encoded in special-purpose algorithms or derived from huge sets of annotated training data.  Instead, a knowledge-based system applies general automated reasoning algorithms to make direct use of a domain expert's knowledge, as expressed in a high-level knowledge representation (KR) language.  In principle, these KR methods could make the development of a new computer vision system as easy as providing a detailed description of what the camera should be seeing.

This idea is not new.  Already in 1997, the well-known seminar centre Schloss Dagstuhl organized a workshop on ``Knowledge-Based Computer Vision''.  These early efforts were, however, not very successful.  In part, the explanation might be that there was simply not enough computing power available fifteen years ago, or that robust basic feature extraction methods for computer vision did not yet exist.  However, we believe that a second, more important reason was the absence of a suitable theoretical framework that would allow a domain expert's qualitative, relational knowledge to be integrated into the quantitative setting in which both cameras and machine learning algorithms tend to operate.  As we argue in more detail below, this gap can be filled by the substantial developments that have been achieved in the emerging field of Probabilistic Logic Learning (PLL) over the last decade \cite{luc:book}.

Several researchers have realized the potential of these methods for computer vision.  In 2008, for instance, another Dagstuhl seminar was devoted to the topic of ``Logic and Probability for Scene Interpretation''.  The conclusions of this seminar are telling:
\begin{quote}
``However, unanswered questions and unsolved problems by far outnumbered definite answers to the questions with which the workshop has started. Here are some of these open questions: Are there a priori roles for logics and probabilities in scene interpretation? What is the proper semantics for probabilities in scene interpretation? What is the impact of high-level and common sense knowledge in scene interpretation? What are the consequences of logics and probabilities for a scene interpretation system architecture?''\footnote{\tt http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=08091}
\end{quote} 


\begin{wrapfigure}{R}{7cm}
\centering
\framebox{
\pgfimage[width=6cm]{strategic}}
\caption{\label{strategic}Strategic goals}
\end{wrapfigure}

Since 2008, however, PLL methods have matured further, both theoretically and in terms of available implementations.  The goal of this project is to investigate anew the use of PLL for knowledge-based computer vision. However, unlike existing approaches (e.g., \cite{artikis}), we do not propose a piecemeal, bottom-up approach, in which small subproblems are tackled in isolation.  Instead, we propose a top-down framework that tries to capture the entire workflow from camera input to desired output {\em at once}.  The main benefit of this approach is that it allows the same expert knowledge to be used at different points in the workflow: for instance, knowledge about the appearance of objects can be used not only to detect these objects, but also to decide which actions they are currently involved in; conversely, knowledge about the interactions between objects is not only useful for recognizing actions, but may also help to identify the individual objects that participate in them.

The promotor and copromotor are uniquely placed to carry out this project: the promotor obtained his PhD on the topic of PLL in the research group DTAI (Declarative Languages and Artificial Intelligence) of the Dept.~of Computer Science, whereas the copromotor obtained his PhD on the topic of computer vision in the research group VISICS (VISion in Industry, Communications, and Services) of the Dept.~of Electrical Engineering.  Together, they therefore have a thorough understanding of the different domains that are relevant for this project.

Both the promotor and copromotor currently work in the research group EAVISE (Embedded and Artificially intelligent VISion Engineering) of Lessius University College, where the copromotor in particular has built up significant expertise in developing industrial computer vision applications.
While this proposal describes fundamental research that is of scientific interest in its own right, it is motivated by a real need that is observed within this research group.  Traditionally, its industrial partners have mainly been companies for whom computer vision is part of their core business, and the role of EAVISE has been to translate fundamental research in computer vision to an industrial context.  Recently, however, there has also been significant demand from companies that are active in some other area, but want to enhance their existing products with an additional vision component.  Because these companies typically can commit fewer resources to computer vision and lack deep knowledge of it, a general method of quickly and easily developing a working prototype would be highly beneficial here. We believe that a knowledge-based computer vision system offers great potential for this purpose.  As illustrated in Figure \ref{strategic}, the future research activities of the EAVISE group will therefore focus on translating fundamental scientific results of both the VISICS and DTAI groups to industrial computer vision practice. The strategic goal of the proposed project is to lay the theoretical groundwork for this, by developing a coherent semantic framework in which new results from computer vision (VISICS) and from knowledge representation or machine learning (DTAI) can be expressed and combined.

If this combination of knowledge representation and computer vision is successful, the impact on both fields will be enormous. In computer vision, it will finally become possible to use human expert knowledge in image interpretation systems in a transparent way, avoiding the gigantic training data sets that are currently necessary, which will open up a large number of new applications. The KR domain, in turn, will break out of its present isolation: its input data will no longer be limited to human-entered text, since a camera---a very rich environment sensor---can then be used as input. The combination of the two fields will be an important step towards artificial intelligence that is comparable to human intelligence.

With this strategic goal in mind, the project proposes three main deliverables. Our first goal is to establish a sound semantic framework in which we can describe how different components (camera inputs, knowledge bases, machine learning algorithms) interact throughout the action interpretation workflow.  The second goal is to combine existing state-of-the-art algorithms and systems to instantiate this semantic framework into prototypes that implement a number of concrete vision applications.  Finally, the third goal is to use the developed prototypes as a way of evaluating how well various existing methods are able to fulfill the different roles defined in the semantic framework.  In this way, gaps in the current state of the art can be identified and future research goals can be established.

\section{System architecture}

The goal of this project is to investigate the possibility of building a holistic knowledge-based computer vision system, following the architecture shown in Figure \ref{system}.  This architecture consists of two layers that work together to turn video footage into an interpretation of what is actually being observed. 

\begin{wrapfigure}{L}{0.6\textwidth}
\framebox{
\pgfimage[width=0.55\textwidth]{system}
}
\caption{\label{system}Overall architecture of proposed system.}
\end{wrapfigure}


%This section will outline the main challenges that the proposed project aims to tackle. For each of these, we will also briefly discuss the state-on-the-art techniques in knowledge representation, machine learning, and computer vision that we will use to meet this challenge. Figure \ref{system} illustrates the overall architecture of the system that will result from these efforts and may be used as reference throughout.

\subsection{Parametrized object recognition}

The bottom layer of this system is a component that recognizes objects.  To reduce the effort needed to implement detection of new kinds of objects, this layer should be parametrized in the kind of objects that it should recognize.  ML methods, such as the boosted cascade classifier of \cite{ViolaJones}, are able to automatically construct accurate models of objects.
%Such an algorithm, or family of algorithms, will allow new kinds of objects to be detected by just changing the input parameters, instead of forcing the programmer to change or replace the existing algorithms. While it is of course a daunting task to build an algoritm to solve the entire task of object recognition for once and for all, the current state-of-the-art already offers some useful building blocks.
Moreover, the state-of-the-art literature already offers methods that can detect arbitrary objects \cite{BoW, Felz, GallLempitsky}. %, albeit after an extensive training phase for each specific object. %Indeed, since even humans find it difficult to recognize classes of objects unless they have  seen at least a few examples, this component obviously needs to incorporate ML algorithms. 
One persistent problem, however, is the need for a large set of training data (e.g., 5000 face images plus 10000 non-face images in the case of \cite{ViolaJones}). This is problematic because such training data consists of images that have to be manually annotated, a very labour-intensive process.

The usual solution to this problem in ML is the use of background knowledge: it has been found that, by incorporating the knowledge of a domain expert as an underlying assumption into the ML process, the number of training examples can be drastically reduced. This same approach can be followed to reduce the number of training examples needed to construct parametrizable recognizers for new kinds of objects.  The background knowledge needed in this case concerns the appearance of objects. While there are many aspects to the appearance of objects, the most promising approach is to focus on the structure of the object, i.e., the way in which it is composed of smaller components and how these relate to each other.  On the one hand, this kind of knowledge is invariant to, e.g., lighting conditions and viewpoint, while, on the other hand, it is also this kind of knowledge that is most likely to be shared by different members of the same class of objects.

This fits with the recent trend in computer vision towards the use of component-based object detection algorithms, which also learn models in which an object is seen as a combination of different components. These components, sometimes called \emph{visual words}, are generated by a sequence of three image processing steps: feature detection, feature description and codebook generation.  While these component-based algorithms typically learn only rudimentary models \cite{GallLempitsky} (or none at all, such as in the popular ``bag-of-words'' approach \cite{BoW}), they provide an obvious hook to which structural knowledge about objects can be attached. Recent research by the copromotor of this proposal has focused on one such component-based algorithm \cite{Felz}, and its potential for industrial applications.
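To make these three steps concrete, the following minimal sketch illustrates codebook generation and the bag-of-words representation. It is a toy example only: the k-means implementation, the descriptor dimensionality, and the randomly generated descriptors are illustrative stand-ins for real feature detectors and descriptors, not part of any of the cited systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(descriptors, k=8, iters=20):
    """Toy k-means codebook generation: cluster local feature
    descriptors into k 'visual words' (the cluster centres)."""
    centres = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centre
        dists = np.linalg.norm(descriptors[:, None] - centres[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = descriptors[labels == j].mean(axis=0)
    return centres

def bag_of_words(descriptors, centres):
    """Represent one image as a normalized histogram over visual words."""
    dists = np.linalg.norm(descriptors[:, None] - centres[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centres)).astype(float)
    return hist / hist.sum()

# illustrative stand-ins for SIFT-like descriptors extracted from images
train_descr = rng.normal(size=(200, 16))
codebook = build_codebook(train_descr, k=8)
image_descr = rng.normal(size=(40, 16))
print(bag_of_words(image_descr, codebook))
```

Structural knowledge about objects can then be attached on top of such a representation, e.g., as constraints on where particular visual words may occur relative to each other.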

Extending these ML algorithms with a knowledge base is not a trivial task, however, because of the difference in setting: KR methods are typically concerned with qualitative information about complex relations between concepts/objects, whereas typical ML algorithms use quantitative or probabilistic models with a simple propositional structure.

\subsection{Parametrized action interpretation} 

On top of the object detection layer, a second layer is needed to identify which actions or activities are happening, and how they affect the state of the domain. 
The current state of the art in action recognition in video consists mainly of ML methods that make use of quantitative or probabilistic models, such as Bayesian networks \cite{vezzani09}, neural networks \cite{mikolajczyk10}, Hough voting \cite{yao10}, and support vector machines \cite{willems09} on spatio-temporal features \cite{wang09}.  Again, however, the need for large sets of training data can be prohibitive.  At the same time, reasoning about actions and their effects is one of the most studied subfields of Knowledge Representation.  As a result, there currently exists a large and well-understood family of {\em action languages} (see, e.g., \cite{actlang}) that are ideally suited for representing high-level knowledge about actions and activities.   A goal of the proposed project is therefore to use background knowledge expressed in such languages to augment the ML methods and reduce the amount of training data needed.  This will again require us to combine qualitative KR methods with probabilistic ML algorithms, as well as with the probabilistic or fuzzy outputs of the object detection layer.
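As a small illustration of how a qualitative effect rule can constrain probabilistic detector outputs, the sketch below encodes the hypothetical rule ``a pickup causes the object to leave the table'' as a transition model over a fluent, and filters noisy per-frame detections with it. All probabilities, the pickup prior, and the detector reliability model are invented for illustration; they do not describe any existing system.

```python
import numpy as np

# Qualitative effect rule, as in an action language:
#   pickup causes -on_table.
# Encoded as a transition model over the fluent on_table
# (state index 0 = False, 1 = True).
P_PICKUP = 0.1                          # illustrative per-frame pickup prior
T = np.array([[1.0, 0.0],               # once off the table, it stays off
              [P_PICKUP, 1 - P_PICKUP]])  # on_table becomes False if pickup

# illustrative detector reliability: P(detected | on_table state)
P_DET = np.array([0.2, 0.9])            # false-positive vs true-positive rate

def filter_fluent(detections, prior=np.array([0.0, 1.0])):
    """Forward filtering of P(on_table at frame t | detections so far)."""
    belief, history = prior.copy(), []
    for det in detections:
        belief = T.T @ belief                  # predict: apply the effect rule
        likelihood = P_DET if det else 1 - P_DET
        belief = belief * likelihood           # update: weigh by the detector
        belief /= belief.sum()
        history.append(belief[1])              # P(on_table = True)
    return history

# object detected for a while, then no longer detected
print(filter_fluent([1, 1, 1, 0, 0, 0]))
```

The point of the sketch is that the qualitative rule supplies the transition structure, so only the few numeric parameters, rather than a full model, would need to be learned from data.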

%Integrating such a layer action languages into the proposed computer vision system will pose two main challenges.  First, there is the question of how the action layer should handle the inputs provided by the object recognition layer.  Typically, object detection algorithms  will output probabilistic or fuzzy information, which will have to be integrated with qualitative, relational knowledge about the effects of actions. This will require another ML component, that can learn how to transform the outputs of the  various object detections into suitable input for the action detection layer.       %Here, we will be again confronted with a need for combining probabilistic and logical information.  While there exist some preliminary results on probabilistic action languages, these are not yet as developed as their non-probabilistic counterparts and therefore some research will still be required.

Unlike existing approaches such as \cite{brendel11}, we will add a feedback mechanism to the layered structure of the system, in which the action recognition layer is built upon the object detection layer.  This is needed because high-level knowledge about the dynamic evolution of a scene can be used to correct errors made by the lower-level object detection layer.  While such corrections can be made on a case-by-case basis, a more promising approach is to view each correction as a source of feedback to the object detection layer: as more and more of its observations are corrected by the high-level layer, the object detection algorithm itself is gradually fine-tuned to the specific context of the scene that is being watched.
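One simple way such feedback could be realized is sketched below, under the assumption (ours, purely for illustration) that each high-level confirmation or correction can be treated as a labelled outcome of the detector. The detector's precision is then tracked with a conjugate Beta-Bernoulli update, which could in turn drive, e.g., its acceptance threshold.

```python
# Sketch: track the detector's precision with a Beta-Bernoulli update,
# treating each verdict from the action layer as a label for one detection.
alpha, beta = 2.0, 2.0          # illustrative Beta prior over precision

# hypothetical verdicts from the high-level layer on recent detections
feedback = [True, True, False, True, True, True]
for confirmed in feedback:
    if confirmed:
        alpha += 1              # detection confirmed by scene knowledge
    else:
        beta += 1               # detection corrected (a false positive)

precision = alpha / (alpha + beta)   # posterior mean precision
print(precision)                     # here: 7 / 10 = 0.7
```

A full feedback mechanism would of course go further, e.g., by retraining the detector on the corrected observations, but this conjugate update already illustrates the gradual, context-specific fine-tuning described above.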


\subsection{Main challenge: combining qualitative and quantitative information}

%Cameras produce quantitative information.  Methods for handling camera images are therefore typically also quantitative.  Many algorithms for object detection, for instance, are based on quantitative ML techniques, such as neural networks or support vector machines. Such algorithms commonly also output quantitative information, such as the probability of a certain object having been detected at a certain location.

As evident from the above discussion, a recurring theme in the architecture of the proposed system is the need to integrate the quantitative information that comes from cameras and ML algorithms with the qualitative knowledge of human domain experts. 
%The goal of the proposed framework is to allow a programmer to develop a new application without having to worry about these quantitative details.  Instead, only his qualitative, high-level understanding of the application domain will serve as an input to the system.  This means, however, that the framework itself should be able to combine the qualitative knowledge that it receives from the programmer with the quantitative input that it gets from the camera, or from various machine learning algorithms.  
To build a robust framework, in which well-defined components can easily be changed or replaced, it is crucial that this integration is not done in an {\em ad hoc} way, but that it rests on sound knowledge-theoretical and mathematical foundations. % Only in this way will we be able to build a robust framework, in which well-defined components can easily be changed or replaced.
These foundations can be found in the area of Probabilistic Logic Learning (PLL), also known as Statistical Relational Learning (SRL).  This relatively new research domain has gained significant momentum over the last decade as a method -- both theoretical and practical -- of combining relational representations (as in first-order logic) with quantitative methods (typically from ML). It draws inspiration from propositional probabilistic methods, such as Bayesian networks and Markov models, as well as from first-order logical methods, such as (Constraint) Logic Programming.  One of the main achievements in this area has been the development of a variety of probabilistic knowledge representation languages, each of which has a formal semantics that is both logically and probability-theoretically sound.
Moreover, for several of these languages, inference systems are available that leverage state-of-the-art implementation techniques, such as Binary Decision Diagrams (BDDs) \cite{kimmig11} or SLG-resolution \cite{riguzzi10}, to handle real-life applications, e.g., in bio-informatics \cite{deraedt07}.  Finally, several systems also offer ML algorithms to estimate the parameters (e.g., \cite{gutmann11}) or, less frequently, also the structure of theories in these languages \cite{kok09}.
As illustrated in Figure \ref{inputs}, PLL methods therefore offer a unifying semantic framework that can combine the quantitative inputs from cameras and from typical ML methods, the annotated training images that typically serve as input to these ML algorithms, and the qualitative, relational expert knowledge about the kind of scene that is being observed.  Moreover, the field also offers efficient systems that can be used to rapidly produce working prototypes.
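The following toy sketch illustrates the semantics that these languages share: probabilistic facts define a distribution over possible worlds, and the probability of a query is the total weight of the worlds in which the qualitative rules derive it. The facts, probabilities, and rule are invented (hypothetical detector outputs), and real systems such as ProbLog avoid this exponential enumeration by compiling queries to BDDs.

```python
from itertools import product

# Illustrative probabilistic facts: each holds independently with the
# given probability (the "distribution semantics" of PLL languages).
facts = {"person_detected": 0.8, "ball_detected": 0.6, "motion": 0.7}

def kicks(world):
    # Qualitative rule: kicks :- person_detected, ball_detected, motion.
    return world["person_detected"] and world["ball_detected"] and world["motion"]

def query(rule, facts):
    """P(query) = sum of the probabilities of the worlds where it holds."""
    total = 0.0
    names = list(facts)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name in names:
            weight *= facts[name] if world[name] else 1 - facts[name]
        if rule(world):
            total += weight
    return total

print(query(kicks, facts))   # 0.8 * 0.6 * 0.7, since the facts are independent
```

This enumeration semantics is exactly what makes PLL a suitable glue: the facts can come from quantitative detectors, while the rules encode the expert's qualitative knowledge.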

\begin{figure}
\begin{minipage}{0.5\textwidth}
\centering
\pgfimage[width=6cm]{inputs}
\renewcommand{\thefigure}{\arabic{figure}a}
\caption{\label{inputs}Challenges}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\pgfimage[width=6cm]{pll}
\addtocounter{figure}{-1}
\renewcommand{\thefigure}{\arabic{figure}b}
\caption{\label{pll}Proposed solution}
\end{minipage}
\end{figure}

The promotor of this project proposal obtained his PhD in the area of PLL and has several publications on this topic.  One of the contributions of his work has been to develop the theory of a {\em causal} PLL language \cite{vennekens:jelia}, in which cause-effect information under uncertainty is studied in much the same way as Pearl has studied it in the context of Bayesian networks \cite{pearl:book}. Because knowledge about causality plays an important role in the way in which humans interpret video footage, this language is particularly suited for our application.  Moreover, inference algorithms for it have been implemented in the ProbLog system\footnote{{\tt http://dtai.cs.kuleuven.be/problog}}, as well as in XSB Prolog\footnote{{\tt xsb.sourceforge.net}}.
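As an illustration of the causal flavour of such a language, the toy sketch below paraphrases two invented causal rules in Python and estimates a query probability by forward sampling. The rules and probabilities are ours, and the rule notation in the comments is only a paraphrase of CP-logic syntax, not its exact form.

```python
import random

random.seed(42)

# Two illustrative causal rules, paraphrasing the CP-logic style:
#   ball_thrown : 0.7   <- player_active.
#   window_broken : 0.4 <- ball_thrown.
# Each rule is an independent causal event that fires with the given
# probability whenever its precondition holds.

def sample_world():
    world = {"player_active": True, "ball_thrown": False, "window_broken": False}
    if world["player_active"] and random.random() < 0.7:
        world["ball_thrown"] = True
    if world["ball_thrown"] and random.random() < 0.4:
        world["window_broken"] = True
    return world

n = 100_000
hits = sum(sample_world()["window_broken"] for _ in range(n))
print(hits / n)   # converges to 0.7 * 0.4 = 0.28
```

The cause-effect reading of such rules matches the way scene dynamics are naturally described, which is why this language family is attractive for the action interpretation layer.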





\section{Planning}



Meeting the challenges outlined in the previous sections is an ambitious research goal, which is unlikely to be fully completed within a two-year time frame.  To ensure that this project will nevertheless produce useful results, it is conceived as an iterative process in which an increasingly complex series of practical applications is tackled.  As illustrated in Figure \ref{planning-spiraal}, each of these iterations will contribute towards the three main deliverables of this project:
\begin{itemize}
\begin{minipage}{0.5\textwidth}
\item[D$1$] A detailed theoretical semantic framework for holistic knowledge-based computer vision;
\item[D$2$] A prototype implementation of (part of) the proposed system, together with working prototype implementations of several concrete applications in this system;
\item[D$3$] A thorough evaluation of the prototype, determining which state-of-the-art methods are best suited for the different roles defined by the theoretical framework, and identifying the most promising topics for future research. 
\end{minipage}
% \begin{wrapfigure}{R}{0.47\textwidth}
\begin{minipage}{0.5\textwidth}\centering
\framebox{
\pgfimage[width=0.8\textwidth]{planning-spiraal}\vspace{-0.4cm}
}
\captionof{figure}{Project structure.}\label{planning-spiraal}
\end{minipage}
%\end{wrapfigure}
\end{itemize}

\begin{wrapfigure}{R}{0.6\textwidth}
\centering
\pgfimage[width=0.55\textwidth]{plan}
\caption{\label{plan}Project planning}
\end{wrapfigure}


Our first application will be to interpret video footage of a board game being played.  This application is a good starting point, because it offers, on the one hand, a well-defined domain governed by unambiguous and fully known rules, while, on the other hand, also allowing for the use of easily detectable objects in well-lit and orderly circumstances.  It is therefore our goal to achieve a fully functional prototype for this application.

The second, more complex application will be that of interpreting sports footage.  Here, the behaviour of players is less well-defined, there is more room for ambiguity, and the basic object detection can be more difficult, due to an increased likelihood of poor lighting, occlusions, articulated objects and generally more chaotic circumstances.  Therefore, this application will provide a good opportunity to test and extend the limits of the board game prototype.  Moreover, this application has also been studied in the literature, which will allow for a meaningful comparison between our methods and the existing state-of-the-art \cite{brendel11}.  Because this problem is known to be quite hard, we expect that our prototype will only be able to offer a partial solution.


The final application will be crowd monitoring. This is a task with high industrial relevance: surveillance cameras are becoming ever more present on our streets and in, e.g., public transport.  Currently, the huge quantities of footage that are generated have to be actively monitored by a human operator (or are only used after the fact).  Crowd monitoring systems offer the potential of automatically detecting abnormal situations in real time, which would greatly increase the effectiveness of these camera systems.  From a technical point of view, this application is essentially a generalization of the task of interpreting sports footage, where, again, the rules governing the behaviour of the monitored subjects become far less well-defined and the quality of the footage becomes significantly worse.  The main role of this application in the project will be to pinpoint those areas in which our prototype system and the current state of the art still fall short of what is needed to develop industrially relevant systems.  As such, it will serve to determine the future research goals of the EAVISE research group.


%\bibliographystyle{plain}
%\bibliography{crea,/Users/joost/NewWork/newbib}

\begin{thebibliography}{10}
\setlength{\parskip}{-5pt}

\bibitem{actlang}
Chitta Baral.
\newblock {\em Knowledge Representation, Reasoning, and Declarative Problem
  Solving}.
\newblock Cambridge University Press, 2003.

\bibitem{luc:book}
L.~{De Raedt}, P.~Frasconi, K.~Kersting, and S.H. Muggleton.
\newblock {\em Probabilistic Inductive Logic Programming}, volume 4911 of {\em
  Lecture Notes in Computer Science}.
\newblock Springer, 2008.

\bibitem{Felz}
P.~Felzenszwalb, R.~Girshick, and D.~McAllester.
\newblock Cascade object detection with deformable part models.
\newblock In {\em {IEEE} Conference on Computer Vision and Pattern Recognition
  (CVPR)}, 2010.

\bibitem{willems09}
G.~Willems, J.H.~Becker, T.~Tuytelaars, and L.~Van Gool.
\newblock Exemplar-based action recognition in video.
\newblock In {\em BMVC}, 2009.

\bibitem{GallLempitsky}
J.~Gall and V.~Lempitsky.
\newblock Class-specific {H}ough forests for object detection.
\newblock In {\em {IEEE} Conference on Computer Vision and Pattern Recognition
  (CVPR)}, 2009.

\bibitem{gutmann11}
B.~Gutmann, I.~Thon, and L.~De Raedt.
\newblock Learning the parameters of probabilistic logic programs from
  interpretations.
\newblock In {\em Proc.~ECML and PKDD}, volume 6911 of
  {\em LNCS}, pages 581--596, 2011.

\bibitem{kimmig11}
A.~Kimmig, B.~Demoen, L.~{De Raedt}, {V. Santos Costa}, and R.~Rocha.
\newblock On the implementation of the probabilistic logic programming language
  problog.
\newblock {\em TPLP}, 11:235--262, 2011.

\bibitem{kok09}
Stanley Kok and Pedro Domingos.
\newblock Learning {M}arkov logic network structure via hypergraph lifting.
\newblock In {\em Proc.~26th International Conference on
  Machine Learning (ICML)}, pages 505--512, 2009.

\bibitem{mikolajczyk10}
K.~Mikolajczyk and H.~Uemura.
\newblock Action recognition with motion-appearance vocabulary forest.
\newblock In {\em CVPR}, pages 1--8, 2008.

\bibitem{pearl:book}
J.~Pearl.
\newblock {\em Causality: Models, Reasoning, and Inference}.
\newblock Cambridge University Press, 2000.

\bibitem{deraedt07}
Luc~De Raedt, Angelika Kimmig, and Hannu Toivonen.
\newblock {P}rob{L}og: A probabilistic {P}rolog and its application in link
  discovery.
\newblock In {\em {IJCAI}}, pages 2462--2467, 2007.

\bibitem{riguzzi10}
Fabrizio Riguzzi.
\newblock {SLGAD} resolution for inference on {Logic Programs with Annotated
  Disjunctions}.
\newblock {\em Fundamenta Informaticae}, 102(3-4):429--466, 2010.

\bibitem{BoW}
J.~Sivic, B.~Russell, A.~Efros, A.~Zisserman, and W.~Freeman.
\newblock Discovering object categories in image collections.
\newblock In {\em Proc. Int'l Conf. Computer Vision, Beijing}, 2005.

\bibitem{vennekens:jelia}
Joost Vennekens, Marc Denecker, and Maurice Bruynooghe.
\newblock Embracing events in causal modelling: {I}nterventions and
  counterfactuals in {CP}-logic.
\newblock In {\em JELIA}, pages 313--325, 2010.

\bibitem{vezzani09}
R.~Vezzani, M.~Piccardi, and R.~Cucchiara.
\newblock An efficient {B}ayesian framework for online action recognition.
\newblock In {\em Proceedings of ICIP}, 2009.

\bibitem{ViolaJones}
P.~Viola and M.~Jones.
\newblock Rapid object detection using a boosted cascade of simple features.
\newblock In {\em {IEEE} Conference on Computer Vision and Pattern
  Recognition}, 2001.

\bibitem{wang09}
H.~Wang, M.~M. Ullah, A.~Kl{\"a}ser, I.~Laptev, and C.~Schmid.
\newblock Evaluation of local spatio-temporal features for action recognition.
\newblock In {\em Proceedings of BMVC}, 2009.

\bibitem{yao10}
A.~Yao, J.~Gall, and L.V. Gool.
\newblock A {H}ough transform-based voting framework for action recognition.
\newblock In {\em Proceedings of CVPR}, 2010.

\end{thebibliography}

\end{document}
