We produced BioVALEX using an updated version of the tools in 
\cite{preiss:07}, which we will refer to as the Cambridge system, or Cambridge tools.\footnote{The update consisted of a more recent, unpublished version of the SCF classifier, which re-implemented the original classifier rules in a different programming language.} In this system an input corpus is first parsed with
RASP
\cite{briscoe:06}. A classifier consisting of manually-defined rules 
then matches the RASP output to the SCFs in the Cambridge inventory (see Section~\ref{gold}), and 
the resulting lexicon is filtered. The intention here was to evaluate
how well an SCF system designed for general language, and built
entirely from general-language tools, could perform against a
biomedical SCF gold standard when applied to a large biomedical
corpus.


For our corpus we used the PubMed Open Access Subset (PMC OA), which
is the largest publicly available corpus of full-text articles in the
biomedical domain \cite{PMC:09}.  PMC OA comprises 169,338 articles
drawn from 1,233 medical journals indexed by the Medline citation
database, totalling approximately 400 million words.  Articles are
formatted according to a standard XML tag set \cite{PMC:XML:09}. The
National Institutes of Health (NIH) maintains a one-to-many mapping
from journals to 122 subdomains of biomedicine \cite{PMC:Subjects:09}.
The mapping covers about a third of the PMC OA journals, but these
account for over 70\% of the total data by word count.  Journals are
assigned up to five subdomains, with the majority assigned one (69\%)
or two (26\%). We used the same dataset as \cite{Lippincott:2011},
retaining only journals assigned a single subdomain and discarding
subdomains with fewer than one million words of data. The
resulting dataset contains a total of 342 journals in 37 biomedical
subdomains, with Genetics and Medical Informatics being the largest,
and Complementary Therapies and Ethics the
smallest. See \cite{Lippincott:2011} (Figure 4) for the distribution
of PMC OA data by subdomain.
 It has been shown that the open access collection is representative of the broader biomedical literature \citep{Verspoor:EtAl:09}.


RASP is a modular statistical parsing system which
includes a tokenizer, tagger, lemmatizer, and a wide-coverage
unification-based tag-sequence parser. We used the standard scripts
supplied with RASP to output the set of grammatical relations (GRs)
for the most probable analysis returned by the parser or, in the case
of parse failures, the GRs for the most likely sequence of
subanalyses. 
In contrast to Enju, RASP is an unlexicalized parser, meaning that
it does not have access to a lexicon of information about the behavior
of specific words (as opposed to classes of words, e.g.~words with particular part-of-speech tags), and thus does not
already embody a notion of subcategorization.\footnote{We
performed an experiment using output of the unlexicalized Stanford
parser \citep{klein:03} as input to the subcategorization steps and found that accuracy was the same on the SCF evaluation as for RASP.}
In the Cambridge system, a rule-based classifier incrementally matches GRs with the
corresponding SCFs.
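The general shape of such rules can be caricatured as a mapping from the set of GR types attached to a verb to a frame label. The sketch below is purely illustrative: the frame names and GR patterns are invented, and the actual Cambridge classifier applies manually-defined rules over full RASP GR graphs rather than bare label sets:

```python
# Illustrative sketch only: NOT the actual Cambridge classifier rules.
# Frame labels ("NP", "NP-NP", "S", "INTRANS") and GR patterns are invented.

def classify_scf(grs):
    """Map the GR types attached to a verb to a coarse SCF label."""
    labels = {gr_type for gr_type, _head, _dep in grs}
    if "dobj" in labels and "obj2" in labels:
        return "NP-NP"   # ditransitive
    if "dobj" in labels:
        return "NP"      # simple transitive
    if "ccomp" in labels:
        return "S"       # clausal complement
    return "INTRANS"

# GRs for "the enzyme cleaves the substrate"
grs = [("ncsubj", "cleave", "enzyme"), ("dobj", "cleave", "substrate")]
print(classify_scf(grs))  # prints "NP"
```

In the real system the rules also inspect the heads and dependents of the GRs, not just their types, which is what allows the classifier to distinguish the full inventory of frames.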
The rule set was an updated version of that used in \citep{preiss:07}; note that it was developed for general language and not adapted for biomedical text.
From the classifier output, preliminary lexical entries are
constructed for each verb, containing the raw and relative frequencies
of SCFs found for each verb in the data. Finally, the entries are
filtered to obtain a more accurate lexicon. 

We used two filtering methods.  The first method was simple relative
frequency filtering, as in the BioLexicon.  Here, an empirically
determined threshold is set on the relative frequencies of SCFs,
filtering out SCFs whose relative frequency for a given verb is lower
than the threshold. This simple method has been shown to yield more
accurate results than more complex statistical hypothesis tests
\citep{korhonen:02}.
Previous work on SCF acquisition for general language using similar
SCF systems found a threshold of 0.02 to give the most accurate
results. In development experiments we found 0.02 and 0.03 to give
the most accurate results under different conditions, and we chose a
threshold of 0.03 to match that used by the BioLexicon.
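In outline, relative frequency filtering amounts to the following minimal sketch (the frame labels and counts are invented for illustration):

```python
# Relative-frequency filtering sketch: keep an SCF for a verb only if its
# relative frequency meets the threshold (0.03, as in the text).
# Frame labels and counts below are invented for illustration.

def filter_entries(scf_counts, threshold=0.03):
    """Return {scf: (raw count, relative frequency)} for surviving SCFs."""
    total = sum(scf_counts.values())
    return {scf: (count, count / total)
            for scf, count in scf_counts.items()
            if count / total >= threshold}

counts = {"NP": 950, "NP-PP": 40, "S": 8, "NP-NP": 2}
print(filter_entries(counts))
# "NP" (0.95) and "NP-PP" (0.04) survive; "S" (0.008) and "NP-NP" (0.002)
# fall below the 0.03 threshold and are filtered out.
```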

Second, we used a novel method which we call SCF-specific
filtering. The intuition behind this method is that the appropriate
reliability threshold for each SCF may be different, since some SCFs
are inherently much more frequent than others.  We did not have
information about the overall frequency of the different SCFs in
biomedical text, so we used information about their overall frequency
in general language from the COMLEX and ANLT dictionaries, along with
empirical information about high and low frequencies from the
unfiltered lexicon acquired for biomedicine, to set a specific
threshold for each SCF.  Although this method gives more accurate
results overall, it uses information about general language which may
or may not be applicable to the biomedical domain.
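A minimal sketch of the idea follows; the per-frame thresholds shown are purely hypothetical, whereas the actual thresholds were derived from COMLEX/ANLT frame frequencies together with the unfiltered biomedical lexicon:

```python
# SCF-specific filtering sketch: each frame has its own threshold, so
# inherently frequent frames are held to a higher bar than rare ones.
# The threshold values and frame labels here are hypothetical.

SCF_THRESHOLDS = {"NP": 0.01, "NP-PP": 0.03, "S": 0.05}  # hypothetical
DEFAULT_THRESHOLD = 0.03

def filter_scf_specific(scf_counts):
    total = sum(scf_counts.values())
    return {scf: count for scf, count in scf_counts.items()
            if count / total >= SCF_THRESHOLDS.get(scf, DEFAULT_THRESHOLD)}

counts = {"NP": 20, "S": 40, "NP-PP": 940}
# "NP" (0.02) is kept despite its low relative frequency because its
# threshold is low; "S" (0.04) is dropped because its threshold is high.
print(filter_scf_specific(counts))
```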

\subsubsection{Release of BioVALEX lexicon}

Along with this paper we release BioVALEX, the lexicon obtained by
using the Cambridge tools on the PMC OA data.
The archive contains a file for each subdomain we studied, as well as
an ``overall'' file for the entire PMC OA data set (note that the
latter is more than just the union of the former, since not all
journals are assigned a subdomain).  Each file has raw counts of each
verb/SCF combination in lines of the format ``VERB SCF COUNT''.  These
counts are unfiltered, and so include low-occurrence verbs: practical
use of the resource will likely require filtering appropriate to the
task at hand.  The file ``subcat\_frames.xml'' describes the SCF
inventory used in this study, with cross-references to other common
inventories (e.g.~COMLEX and the BioLexicon).
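Assuming the documented whitespace-separated ``VERB SCF COUNT'' format, one of the released files can be loaded and filtered with a short script such as the following sketch (the relative-frequency filter shown is the simple method described above; the threshold is the user's choice):

```python
# Sketch of loading a released BioVALEX file ("VERB SCF COUNT" per line)
# and applying a simple relative-frequency filter before use.
from collections import defaultdict

def load_lexicon(path, threshold=0.03):
    """Return {verb: {scf: raw count}} for SCFs passing the filter."""
    counts = defaultdict(dict)
    with open(path) as f:
        for line in f:
            verb, scf, count = line.split()
            counts[verb][scf] = int(count)
    lexicon = {}
    for verb, scfs in counts.items():
        total = sum(scfs.values())
        kept = {scf: c for scf, c in scfs.items() if c / total >= threshold}
        if kept:
            lexicon[verb] = kept
    return lexicon
```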
