This section describes our approach to quantifying differences in verb
subcategorization behavior 
across
subdomains of biomedicine.  The primary type of data that we
investigate is a verb's {\it SCF distribution}, that is, the
probability distribution representing the relative frequency 
of the
verb appearing in each SCF in the Cambridge inventory.
Our goal is to discover
the presence or absence of significant differences between a verb's
SCF distribution in different subdomains. By investigating whether
individual verbs exhibit specialized behavior across subdomains, we build up
an overall picture of subdomain variation in verb subcategorization.

To obtain the SCF distributions we use the Cambridge system, applied
to 37 input corpora consisting of the PMC OA data from the individual
subdomains, to produce a per-verb SCF distribution for each subdomain.
We compute a distance metric between the per-verb SCF distributions for each pair of subdomains. 
We use clustering and graphical methods to illustrate the results.

\subsubsection{Measuring divergence}
To measure the distance between two SCF distributions \(X\) and \(Y\) we use the Jensen-Shannon divergence (JSD) \citep{Grosse:02}, a finite and symmetric measure of divergence between probability distributions, defined as:
\[
\mathit{JSD}(X, Y) = H\!\left(\tfrac{1}{2}(X + Y)\right) - \tfrac{1}{2}H(X) - \tfrac{1}{2}H(Y)
\]
where \( H \) is the Shannon entropy of a distribution:
\[
H(X) = -\sum_x X(x) \log_2 X(x)
\]
JSD values range between 0 (identical distributions) and 1 (disjoint distributions), and the measure is closely related to the familiar, but asymmetric, Kullback-Leibler divergence \citep{cover:91}.  We calculate the JSD between a given verb's SCF distributions for each pair of subdomains.
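As a concrete illustration, the divergence can be computed in a few lines of Python (a minimal sketch: the distribution vectors here are toy values, not actual SCF data):

```python
import math

def entropy(p):
    """Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jsd(p, q):
    """Jensen-Shannon divergence between two probability distributions,
    given as equal-length lists of probabilities summing to 1."""
    m = [(a + b) / 2 for a, b in zip(p, q)]  # midpoint distribution
    return entropy(m) - (entropy(p) + entropy(q)) / 2

jsd([0.5, 0.5], [0.5, 0.5])  # identical distributions -> 0.0
jsd([1.0, 0.0], [0.0, 1.0])  # disjoint distributions  -> 1.0
```

With base-2 logarithms the values fall in \([0, 1]\), matching the range described above.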

\subsubsection{Presentation}
We present detailed results for six verbs: {\it develop}, {\it express}, {\it
perform}, {\it predict}, {\it recognize} and {\it treat}.  These verbs
were chosen because they exemplify one or more interesting
characteristics, such as sharp divergence in a single subdomain or
substantial variation across all subdomains; the verbs differ widely
in the amount of variation they exhibit
(see Section~\ref{subdomain_subcat_behavior}). For each of the six verbs there are four
different views of the data, described below.  For a given verb, we
only show subdomains in which it occurs a minimum of 200 times.

Heat maps present pairwise calculations of a metric between a set of objects: cell \((x, y)\) is shaded according to the value of \(\mathrm{metric}(x, y)\).  Our heat maps show the JSD values between pairs of subdomains for a given verb: the cells are shaded from white (JSD value of 1, maximum divergence) to black (JSD value of 0, identity).  The actual values are inscribed in each cell.
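The grid of values underlying such a heat map can be sketched as follows (the subdomain labels and distributions here are hypothetical stand-ins; in practice each entry is the verb's SCF distribution in that subdomain):

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (base-2 logs, so values lie in [0, 1])."""
    h = lambda d: -sum(x * math.log2(x) for x in d if x > 0)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return h(m) - (h(p) + h(q)) / 2

# Toy per-subdomain SCF distributions for a single verb (hypothetical labels).
dists = {
    "Cardiology": [0.7, 0.2, 0.1],
    "Genetics":   [0.6, 0.3, 0.1],
    "Surgery":    [0.1, 0.2, 0.7],
}

# Symmetric matrix of pairwise JSD values: one cell per subdomain pair.
labels = sorted(dists)
matrix = {(a, b): jsd(dists[a], dists[b]) for a in labels for b in labels}
```

Each diagonal cell is 0 (a distribution compared with itself), and the matrix is symmetric, so a heat map renderer only needs to map these values to shades.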

Dendrograms present the results of hierarchical clustering performed directly on the JSD values.  The algorithm begins with each instance (in our case, a subdomain) as a singleton cluster, and repeatedly merges the two most similar clusters until all of the data is joined in a single cluster.  The distance between two clusters is calculated as the average distance between all pairs of their members, known as ``average linkage''.  The order of these merges is recorded as a tree structure that can be visualised as a dendrogram, in which the length of a branch represents the distance between its child nodes.  The tree leaves represent data instances (subdomains) and the path lengths between them are proportional to the pairwise distances.  This allows visualization of multiple potential clusterings, as well as a more intuitive sense of how distinct the clusters truly are: rather than committing to a fixed number of flat clusters, the tree mirrors the nested structure of the data.
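The merge procedure behind such a dendrogram can be sketched in plain Python (a naive version operating on a precomputed distance matrix such as the pairwise JSD values; fine for a handful of subdomains, though production toolkits use faster algorithms):

```python
def agglomerate(dist):
    """Average-linkage agglomerative clustering over an n x n distance
    matrix.  Returns the merge order as (cluster_a, cluster_b, distance)
    tuples -- exactly the information a dendrogram draws."""
    clusters = [[i] for i in range(len(dist))]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average linkage: mean distance over all member pairs.
                d = (sum(dist[a][b] for a in clusters[i] for b in clusters[j])
                     / (len(clusters[i]) * len(clusters[j])))
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i], clusters[j], d))
        clusters[i] = clusters[i] + clusters[j]  # merge j into i
        del clusters[j]
    return merges
```

For instance, with three subdomains where the first two are close, the first recorded merge joins those two, and the second joins the resulting pair with the outlier at their average distance.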

Scatter plots project the optimal K-Means clustering onto the first two principal components of the data.  The optimal number of clusters was determined via the Gap Statistic \citep{Tibshirani:01}, which increases the cluster count and re-runs K-Means until the improvement in error on the data is within a small range of the improvement on randomly-generated data with similar statistical properties.  The principal components are normalised, and points are coloured according to cluster membership, with the subdomain name written immediately above each point.  Note that the clustering is performed on the full SCF distributions; principal component analysis is used only to project the distributions onto two dimensions for visualisation.
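The clustering step (though not the PCA projection or the Gap Statistic) can be sketched with a plain-Python version of Lloyd's K-Means algorithm; the toy 2-D points below stand in for full SCF distribution vectors:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-centre assignment with
    centroid recomputation.  Returns (labels, centres)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]

    def nearest(p):
        # Index of the centre with the smallest squared Euclidean distance.
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(p, centers[c])))

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p)].append(p)
        for c, g in enumerate(groups):
            if g:  # recompute the centroid as the mean of its members
                centers[c] = [sum(vals) / len(g) for vals in zip(*g)]
    return [nearest(p) for p in points], centers

# Two well-separated toy groups should land in two different clusters.
points = [(0.0, 0.0), (0.0, 0.1), (0.1, 0.0),
          (5.0, 5.0), (5.0, 5.1), (5.1, 5.0)]
labels, centers = kmeans(points, 2)
```

The Gap Statistic wraps a loop like this, comparing the clustering error at each \(k\) against the error on reference data drawn uniformly over the same range.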

Finally, tables show the top three SCFs for each subdomain, along with their relative frequencies.  The SCFs are shown in their equivalent COMLEX forms, which reflect the complements involved, as described in Section \ref{investigation_subcat}.

