\documentclass[12pt,oneside]{amsart}
\usepackage{hyperref}
\begin{document}
\title{\bfseries behave.R (r123)}
\author{Jason Pol\'ak}
\date{\today}
\maketitle

\section{Introduction}

In the mouse open field experiment, a number of mice are placed in a box and allowed to move freely for a set time. To record the desired data, a video camera can be placed directly above the mice, and their movements can be recorded. The recording can be analysed by a software package such as Noldus' Ethovision.

The data recorded by this software are the position of the mouse and its detected surface area at each sample time.

Ethovision contains some basic data analysis features, such as calculating the angular velocity and the number of rearings by the animal. However, to calculate these statistics, the software must be configured and run separately for each file, which is time consuming. Recalculating a statistic with different parameters is likely to be equally painful.

Batch processing all the collected data files is faster and more elegant. Consider the hypothetical situation of analysing a hundred files, one file for each mouse. If it takes just three minutes to configure the program to output the desired results (for instance rearing), then the additional time is 300 minutes or five hours of boredom.

Now consider rearing. The algorithm basically takes two parameters: the minimum relative surface area change and the number of samples to use. Suppose a colleague did the same experiment but used different parameters, and you wanted to compare your results. Using Ethovision, this would mean opening every single experiment file and changing the parameters. Even if this only takes six minutes per file, that is an additional ten hours.
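To make the idea concrete, here is a rough sketch in R of such a rearing count. The function name, the parameter names, and the detection rule (flag a sample whose area falls below a fraction of the running mean of the preceding samples) are this article's illustration, not Ethovision's actual algorithm:

```r
# Count putative rearings from a vector of surface areas.  A rearing
# mouse presents a smaller area to an overhead camera, so a sample is
# flagged when its area drops by at least `min.change` (a fraction)
# relative to the mean of the previous `samples` samples.  Both
# parameter names are illustrative; consecutive flagged samples are
# not merged in this simplified sketch.
count.rearings <- function(area, min.change = 0.5, samples = 3) {
  n <- length(area)
  if (n <= samples) return(0)
  rearings <- 0
  for (i in (samples + 1):n) {
    baseline <- mean(area[(i - samples):(i - 1)])
    if (area[i] <= (1 - min.change) * baseline)
      rearings <- rearings + 1
  }
  rearings
}

# An illustrative area trace with two sharp drops:
count.rearings(c(100, 105, 95, 20, 80, 96, 108, 40, 90, 100))  # counts 2
```

With the parameters gathered in one place like this, rerunning the whole analysis with a colleague's settings is a one-line change rather than ten hours of clicking.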

However, Ethovision can export the data as text files in CSV format. The natural course of action is to batch process all the files at once, taking mere minutes for the same number of files that would otherwise take hours. Better still, no human intervention would be required.

Batch processing of {\bfseries any} data is very easy with the open source R statistical environment, which can be downloaded at \url{http://www.r-project.org/}. R is an implementation of the S programming language and is free for anyone to use and redistribute under an open source license.

R is both a statistical package and a programming language. Thus it can be used to do statistical calculations on data, combined with powerful programming features that make data analysis easier.

With R and the data files exported by a program like Noldus' Ethovision, the desired statistics, such as rearing counts and total distance traveled, can be obtained quickly.

\section{The Script}

R is an environment that can be used interactively at the command line\footnote{Several GUI interfaces for R are available for those who dislike the command line.}. Thus a quick analysis is usually done by entering a series of commands for reading and processing the data.

Alternatively, a single script can be written and run by ``sourcing'' it with R. The environment reads the script and runs the commands contained within it.

Batch processing of the data files is taken care of by a script. Each script needs to be tailored to the relevant experiment. Consider the open field data file typically exported by Noldus:

\begin{verbatim}
Status,Acquired,
Trial duration,0000:00:30:00.000,dd:HH:mm:ss
Tracking profile,Tracking profile,
Arena,bottomright,
Track file,track_00001.trk,
Object,0,
Arena profile,Arena profile,
Start date & time,2007-05-25 19:27:55.281,yyyy-MM-dd HH:mm:ss
Stop date & time,2007-05-25 19:57:55.281,yyyy-MM-dd HH:mm:ss
Trial number,1,
Mobility low,0,
Mobility high,0,
Movie file,OpenfieldTrial4Apr19_animal20_1_4_14.mpv,
Maximum voltage x,0,
Maximum voltage y,0,
Mobility running av,1,
Sample no.,Time,X,Y,Area,ZONES
1,0,5,10,100,-
2,0.2,7,5,105,-
3,0.4,10,5,95,-
4,0.6,7,2,20,-
5,0.8,7,4,80,-
6,1,6,4,96,-
7,1.2,3,3,108,-
8,1.4,0,0,40,-
9,1.6,0,5,90,-
10,1.8,4,5,100,-
\end{verbatim}

A bit of a mess, but it can be handled. With R we can read the data using a command such as:

\begin{verbatim}
mydata <- read.table("data.txt", sep=",", skip=17)
\end{verbatim}

The \verb|skip=17| tells R to skip the first seventeen lines of metadata, since those will not be used in the calculations. Next, R can list all the files in a directory with the command:

\begin{verbatim}
flist <- dir()
\end{verbatim}

Thus we can combine the two in a \verb|for| loop to process all the files in a directory. The heart of the script will be a function called \verb|readall|:
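As a sketch of what \verb|readall| might look like, the following loops over the exported files and computes one statistic per file. The \verb|.txt| filename pattern, the column names, and the choice of total distance traveled as the statistic are assumptions for illustration; the real script would compute whatever the experiment calls for:

```r
# Read every exported text file in a directory and compute a summary
# statistic for each one.  The pattern, column names, and distance
# statistic are illustrative assumptions, not Noldus' own conventions.
readall <- function(path = ".") {
  flist <- dir(path, pattern = "\\.txt$")   # all exported track files
  results <- data.frame(file = character(0), distance = numeric(0))
  for (f in flist) {
    mydata <- read.table(file.path(path, f), sep = ",", skip = 17)
    # The skipped lines include the header row, so name the columns
    # to match the export shown above.
    names(mydata) <- c("Sample", "Time", "X", "Y", "Area", "Zones")
    # Total distance traveled: sum of Euclidean distances between
    # consecutive (X, Y) samples.
    d <- sum(sqrt(diff(mydata$X)^2 + diff(mydata$Y)^2))
    results <- rbind(results, data.frame(file = f, distance = d))
  }
  results
}
```

Running \verb|readall()| in the directory of exported files then yields one row per mouse, ready for further statistical analysis, with no per-file clicking at all.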
\end{document}
