<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">

<head>

<meta charset="utf-8">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="pandoc" />

<meta name="author" content="Brian B. Avants" />

<meta name="date" content="2015-03-25" />

<title>Transformations and statistical representations for images in R</title>



<style type="text/css">code{white-space: pre;}</style>
<style type="text/css">
table.sourceCode, tr.sourceCode, td.lineNumbers, td.sourceCode {
  margin: 0; padding: 0; vertical-align: baseline; border: none; }
table.sourceCode { width: 100%; line-height: 100%; }
td.lineNumbers { text-align: right; padding-right: 4px; padding-left: 4px; color: #aaaaaa; border-right: 1px solid #aaaaaa; }
td.sourceCode { padding-left: 5px; }
code > span.kw { color: #007020; font-weight: bold; }
code > span.dt { color: #902000; }
code > span.dv { color: #40a070; }
code > span.bn { color: #40a070; }
code > span.fl { color: #40a070; }
code > span.ch { color: #4070a0; }
code > span.st { color: #4070a0; }
code > span.co { color: #60a0b0; font-style: italic; }
code > span.ot { color: #007020; }
code > span.al { color: #ff0000; font-weight: bold; }
code > span.fu { color: #06287e; }
code > span.er { color: #ff0000; font-weight: bold; }
</style>
<style type="text/css">
  pre:not([class]) {
    background-color: white;
  }
</style>


<link href="data:text/css,body%20%7B%0A%20%20background%2Dcolor%3A%20%23fff%3B%0A%20%20margin%3A%201em%20auto%3B%0A%20%20max%2Dwidth%3A%20700px%3B%0A%20%20overflow%3A%20visible%3B%0A%20%20padding%2Dleft%3A%202em%3B%0A%20%20padding%2Dright%3A%202em%3B%0A%20%20font%2Dfamily%3A%20%22Open%20Sans%22%2C%20%22Helvetica%20Neue%22%2C%20Helvetica%2C%20Arial%2C%20sans%2Dserif%3B%0A%20%20font%2Dsize%3A%2014px%3B%0A%20%20line%2Dheight%3A%201%2E35%3B%0A%7D%0A%0A%23header%20%7B%0A%20%20text%2Dalign%3A%20center%3B%0A%7D%0A%0A%23TOC%20%7B%0A%20%20clear%3A%20both%3B%0A%20%20margin%3A%200%200%2010px%2010px%3B%0A%20%20padding%3A%204px%3B%0A%20%20width%3A%20400px%3B%0A%20%20border%3A%201px%20solid%20%23CCCCCC%3B%0A%20%20border%2Dradius%3A%205px%3B%0A%0A%20%20background%2Dcolor%3A%20%23f6f6f6%3B%0A%20%20font%2Dsize%3A%2013px%3B%0A%20%20line%2Dheight%3A%201%2E3%3B%0A%7D%0A%20%20%23TOC%20%2Etoctitle%20%7B%0A%20%20%20%20font%2Dweight%3A%20bold%3B%0A%20%20%20%20font%2Dsize%3A%2015px%3B%0A%20%20%20%20margin%2Dleft%3A%205px%3B%0A%20%20%7D%0A%0A%20%20%23TOC%20ul%20%7B%0A%20%20%20%20padding%2Dleft%3A%2040px%3B%0A%20%20%20%20margin%2Dleft%3A%20%2D1%2E5em%3B%0A%20%20%20%20margin%2Dtop%3A%205px%3B%0A%20%20%20%20margin%2Dbottom%3A%205px%3B%0A%20%20%7D%0A%20%20%23TOC%20ul%20ul%20%7B%0A%20%20%20%20margin%2Dleft%3A%20%2D2em%3B%0A%20%20%7D%0A%20%20%23TOC%20li%20%7B%0A%20%20%20%20line%2Dheight%3A%2016px%3B%0A%20%20%7D%0A%0Atable%20%7B%0A%20%20margin%3A%201em%20auto%3B%0A%20%20border%2Dwidth%3A%201px%3B%0A%20%20border%2Dcolor%3A%20%23DDDDDD%3B%0A%20%20border%2Dstyle%3A%20outset%3B%0A%20%20border%2Dcollapse%3A%20collapse%3B%0A%7D%0Atable%20th%20%7B%0A%20%20border%2Dwidth%3A%202px%3B%0A%20%20padding%3A%205px%3B%0A%20%20border%2Dstyle%3A%20inset%3B%0A%7D%0Atable%20td%20%7B%0A%20%20border%2Dwidth%3A%201px%3B%0A%20%20border%2Dstyle%3A%20inset%3B%0A%20%20line%2Dheight%3A%2018px%3B%0A%20%20padding%3A%205px%205px%3B%0A%7D%0Atable%2C%20table%20th%2C%20table%20td%20%7B%0A%20%20border%2Dleft%2Dstyle%3A%20none%3B%0A%
20%20border%2Dright%2Dstyle%3A%20none%3B%0A%7D%0Atable%20thead%2C%20table%20tr%2Eeven%20%7B%0A%20%20background%2Dcolor%3A%20%23f7f7f7%3B%0A%7D%0A%0Ap%20%7B%0A%20%20margin%3A%200%2E5em%200%3B%0A%7D%0A%0Ablockquote%20%7B%0A%20%20background%2Dcolor%3A%20%23f6f6f6%3B%0A%20%20padding%3A%200%2E25em%200%2E75em%3B%0A%7D%0A%0Ahr%20%7B%0A%20%20border%2Dstyle%3A%20solid%3B%0A%20%20border%3A%20none%3B%0A%20%20border%2Dtop%3A%201px%20solid%20%23777%3B%0A%20%20margin%3A%2028px%200%3B%0A%7D%0A%0Adl%20%7B%0A%20%20margin%2Dleft%3A%200%3B%0A%7D%0A%20%20dl%20dd%20%7B%0A%20%20%20%20margin%2Dbottom%3A%2013px%3B%0A%20%20%20%20margin%2Dleft%3A%2013px%3B%0A%20%20%7D%0A%20%20dl%20dt%20%7B%0A%20%20%20%20font%2Dweight%3A%20bold%3B%0A%20%20%7D%0A%0Aul%20%7B%0A%20%20margin%2Dtop%3A%200%3B%0A%7D%0A%20%20ul%20li%20%7B%0A%20%20%20%20list%2Dstyle%3A%20circle%20outside%3B%0A%20%20%7D%0A%20%20ul%20ul%20%7B%0A%20%20%20%20margin%2Dbottom%3A%200%3B%0A%20%20%7D%0A%0Apre%2C%20code%20%7B%0A%20%20background%2Dcolor%3A%20%23f7f7f7%3B%0A%20%20border%2Dradius%3A%203px%3B%0A%20%20color%3A%20%23333%3B%0A%7D%0Apre%20%7B%0A%20%20white%2Dspace%3A%20pre%2Dwrap%3B%20%20%20%20%2F%2A%20Wrap%20long%20lines%20%2A%2F%0A%20%20border%2Dradius%3A%203px%3B%0A%20%20margin%3A%205px%200px%2010px%200px%3B%0A%20%20padding%3A%2010px%3B%0A%7D%0Apre%3Anot%28%5Bclass%5D%29%20%7B%0A%20%20background%2Dcolor%3A%20%23f7f7f7%3B%0A%7D%0A%0Acode%20%7B%0A%20%20font%2Dfamily%3A%20Consolas%2C%20Monaco%2C%20%27Courier%20New%27%2C%20monospace%3B%0A%20%20font%2Dsize%3A%2085%25%3B%0A%7D%0Ap%20%3E%20code%2C%20li%20%3E%20code%20%7B%0A%20%20padding%3A%202px%200px%3B%0A%7D%0A%0Adiv%2Efigure%20%7B%0A%20%20text%2Dalign%3A%20center%3B%0A%7D%0Aimg%20%7B%0A%20%20background%2Dcolor%3A%20%23FFFFFF%3B%0A%20%20padding%3A%202px%3B%0A%20%20border%3A%201px%20solid%20%23DDDDDD%3B%0A%20%20border%2Dradius%3A%203px%3B%0A%20%20border%3A%201px%20solid%20%23CCCCCC%3B%0A%20%20margin%3A%200%205px%3B%0A%7D%0A%0Ah1%20%7B%0A%20%20margin%2Dtop%3A%200%3B%0A%20%20font%2Dsize%3A%
2035px%3B%0A%20%20line%2Dheight%3A%2040px%3B%0A%7D%0A%0Ah2%20%7B%0A%20%20border%2Dbottom%3A%204px%20solid%20%23f7f7f7%3B%0A%20%20padding%2Dtop%3A%2010px%3B%0A%20%20padding%2Dbottom%3A%202px%3B%0A%20%20font%2Dsize%3A%20145%25%3B%0A%7D%0A%0Ah3%20%7B%0A%20%20border%2Dbottom%3A%202px%20solid%20%23f7f7f7%3B%0A%20%20padding%2Dtop%3A%2010px%3B%0A%20%20font%2Dsize%3A%20120%25%3B%0A%7D%0A%0Ah4%20%7B%0A%20%20border%2Dbottom%3A%201px%20solid%20%23f7f7f7%3B%0A%20%20margin%2Dleft%3A%208px%3B%0A%20%20font%2Dsize%3A%20105%25%3B%0A%7D%0A%0Ah5%2C%20h6%20%7B%0A%20%20border%2Dbottom%3A%201px%20solid%20%23ccc%3B%0A%20%20font%2Dsize%3A%20105%25%3B%0A%7D%0A%0Aa%20%7B%0A%20%20color%3A%20%230033dd%3B%0A%20%20text%2Ddecoration%3A%20none%3B%0A%7D%0A%20%20a%3Ahover%20%7B%0A%20%20%20%20color%3A%20%236666ff%3B%20%7D%0A%20%20a%3Avisited%20%7B%0A%20%20%20%20color%3A%20%23800080%3B%20%7D%0A%20%20a%3Avisited%3Ahover%20%7B%0A%20%20%20%20color%3A%20%23BB00BB%3B%20%7D%0A%20%20a%5Bhref%5E%3D%22http%3A%22%5D%20%7B%0A%20%20%20%20text%2Ddecoration%3A%20underline%3B%20%7D%0A%20%20a%5Bhref%5E%3D%22https%3A%22%5D%20%7B%0A%20%20%20%20text%2Ddecoration%3A%20underline%3B%20%7D%0A%0A%2F%2A%20Class%20described%20in%20https%3A%2F%2Fbenjeffrey%2Ecom%2Fposts%2Fpandoc%2Dsyntax%2Dhighlighting%2Dcss%0A%20%20%20Colours%20from%20https%3A%2F%2Fgist%2Egithub%2Ecom%2Frobsimmons%2F1172277%20%2A%2F%0A%0Acode%20%3E%20span%2Ekw%20%7B%20color%3A%20%23555%3B%20font%2Dweight%3A%20bold%3B%20%7D%20%2F%2A%20Keyword%20%2A%2F%0Acode%20%3E%20span%2Edt%20%7B%20color%3A%20%23902000%3B%20%7D%20%2F%2A%20DataType%20%2A%2F%0Acode%20%3E%20span%2Edv%20%7B%20color%3A%20%2340a070%3B%20%7D%20%2F%2A%20DecVal%20%28decimal%20values%29%20%2A%2F%0Acode%20%3E%20span%2Ebn%20%7B%20color%3A%20%23d14%3B%20%7D%20%2F%2A%20BaseN%20%2A%2F%0Acode%20%3E%20span%2Efl%20%7B%20color%3A%20%23d14%3B%20%7D%20%2F%2A%20Float%20%2A%2F%0Acode%20%3E%20span%2Ech%20%7B%20color%3A%20%23d14%3B%20%7D%20%2F%2A%20Char%20%2A%2F%0Acode%20%3E%20span%2Est%20%7B%20color%3A%20%23d14%3B%2
0%7D%20%2F%2A%20String%20%2A%2F%0Acode%20%3E%20span%2Eco%20%7B%20color%3A%20%23888888%3B%20font%2Dstyle%3A%20italic%3B%20%7D%20%2F%2A%20Comment%20%2A%2F%0Acode%20%3E%20span%2Eot%20%7B%20color%3A%20%23007020%3B%20%7D%20%2F%2A%20OtherToken%20%2A%2F%0Acode%20%3E%20span%2Eal%20%7B%20color%3A%20%23ff0000%3B%20font%2Dweight%3A%20bold%3B%20%7D%20%2F%2A%20AlertToken%20%2A%2F%0Acode%20%3E%20span%2Efu%20%7B%20color%3A%20%23900%3B%20font%2Dweight%3A%20bold%3B%20%7D%20%2F%2A%20Function%20calls%20%2A%2F%20%0Acode%20%3E%20span%2Eer%20%7B%20color%3A%20%23a61717%3B%20background%2Dcolor%3A%20%23e3d2d2%3B%20%7D%20%2F%2A%20ErrorTok%20%2A%2F%0A%0A" rel="stylesheet" type="text/css" />

</head>

<body>



<div id="header">
<h1 class="title">Transformations and statistical representations for images in R</h1>
<h4 class="author"><em>Brian B. Avants</em></h4>
<h4 class="date"><em>2015-03-25</em></h4>
</div>


<blockquote>
<p>“A small leak will sink a great ship.” (folk wisdom)</p>
</blockquote>
<div id="introduction" class="section level1">
<h1>Introduction</h1>
<p>The ANTs<em>R</em> package interfaces state-of-the-art image processing with <em>R</em> statistical methods. The project grew out of the need, at the University of Pennsylvania, to develop large-scale analytics pipelines that track provenance from scanner to scientific study. ANTs<em>R</em> achieves this by wrapping an ANTs and ITK C++ core via <code>Rcpp</code> <span class="citation">(Eddelbuettel 2013)</span>.</p>
<p><a href="http://www.itk.org/">ITK</a> is a templated C++ framework with I/O and support for arbitrary image types (usually 2, 3 or 4 dimensions) as well as surface representations. <a href="http://stnava.github.io/ANTs">ANTs</a>, built on ITK, focuses on multivariate image matching and segmentation as well as geometric (even high-dimensional) image transformation. Both tools are <a href="http://journal.frontiersin.org/ResearchTopic/1580">deeply validated and widely used</a>.</p>
<p>Together, these tools allow powerful image manipulations. However, they lack a true statistical back-end. Historically, statistical software was not amenable to direct manipulation of multidimensional images. This led to “in-house” statistical programming or, perhaps worse (for science), reliance on closed-source commercial software. Given the increasing <a href="http://r4stats.com/articles/popularity/">popularity of <em>R</em></a> and the prominence of quantitative imaging, it is natural that <em>R</em> should have a package focused on biological or medical image analysis.</p>
<p>This package integrates several frameworks for extracting quantitative information from images and mapping images into reference coordinate systems. Human brain mapping studies have long relied on Talairach-Tournoux and related coordinate systems <span class="citation">(Talairach and Tournoux 1958)</span>. Similar standardized localization is becoming more common within non-human studies <span class="citation">(Johnson et al. 2010; Majka et al. 2013)</span>. Atlases of other organ systems are also emerging and being applied clinically <span class="citation">(<span>de Marvao</span> et al. 2014)</span>. This class of methods relies on <em>image transformation</em> and <em>image segmentation</em> as an aid to the ultimate goal of quantifying variability within and across populations. Longer term, such methods will be critical to individualized patient care and other translational applications.</p>
<div id="antsr-algorithms" class="section level2">
<h2>ANTs<em>R</em> Algorithms</h2>
<p>Here, we provide an overview of the methods available within ANTs<em>R</em>.</p>
<ul>
<li><p>core image processing and I/O: ITK <span class="citation">(B. B. Avants, Tustison, et al. 2014)</span>;</p></li>
<li><p>registration and utilities for image processing: ANTs mappings <span class="citation">(Tustison, Cook, et al. 2014)</span> and feature extraction <span class="citation">(Tustison, Shrinidhi, et al. 2014)</span>;</p></li>
<li><p>dimensionality reduction: Eigenanatomy <span class="citation">(Dhillon et al. 2014)</span> and SCCAN <span class="citation">(B. B. Avants, Libon, et al. 2014)</span>;</p></li>
<li><p>methods for ASL-based cerebral blood flow quantification <span class="citation">(Kandel et al. 2015)</span>;</p></li>
<li><p>neighborhood representations of images that enable rich statistical models <span class="citation">(Kandel et al. 2015)</span>;</p></li>
<li><p>core statistics and temporal filtering, via <em>R</em> packages, amenable to BOLD image processing.</p></li>
</ul>
<p>In combination, these tools enable one to go from near-raw medical imaging data to a fully reproducible scientific publication <span class="citation">(Avants 2015)</span>.</p>
</div>
<div id="data-organization-and-access-in-antsr" class="section level2">
<h2>Data organization and access in ANTs<em>R</em></h2>
<p>This package uses an <code>antsImage</code> S4 class to hold pointers to ITK images. We convert <code>antsImage</code> objects to <em>R</em> objects before passing them to <em>R</em> statistical methods. For example, we convert a <em>scalar</em> image to a vector, a collection of scalar images to a matrix, or a time series image to a matrix. Currently, ANTs<em>R</em> does not explicitly represent images with vector-valued voxels (e.g. tensor or warp images), although these may be supported in the future, similarly to our current support for time series images. The large majority of images employed within ANTs<em>R</em> are of 2, 3 or 4 dimensions with <code>float</code> pixel types. This information is stored within the <code>antsImage</code> class.<br />A few example images are built into ANTs<em>R</em>, but more can be downloaded. See <code>?getANTsRData</code>.</p>
<pre class="sourceCode r"><code class="sourceCode r">img&lt;-<span class="kw">antsImageRead</span>( <span class="kw">getANTsRData</span>(<span class="st">&quot;r16&quot;</span>), <span class="dv">2</span> ) <span class="co"># built in image</span>
img</code></pre>
<pre><code>## antsImage
##   Pixel Type   : float 
##   Pixel Size   : 1 
##   Dimensions   : 256x256 
##   Voxel Spacing: 1x1 
##   Origin       : 0 0 
##   Direction    : 1 0 0 1</code></pre>
<p>Take a quick look at the image.</p>
<p><img src="" /></p>
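<p>The conversions just described can be sketched directly; this is a minimal illustration, assuming the standard <code>as.array</code> and <code>as.antsImage</code> coercions (the variable names are ours):</p>
<pre class="sourceCode r"><code class="sourceCode r">img&lt;-antsImageRead( getANTsRData(&quot;r16&quot;), 2 )
arr&lt;-as.array( img )      # antsImage to a plain R array
vec&lt;-as.numeric( arr )    # scalar image flattened to a vector
img2&lt;-as.antsImage( arr ) # and back to an antsImage</code></pre>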
</div>
<div id="contributions-of-the-package" class="section level2">
<h2>Contributions of the package</h2>
<p>ANTs<em>R</em> includes:</p>
<ul>
<li><p>An organizational system such that relatively small scripts may implement full studies</p></li>
<li>Implementation of foundational methods
<ul>
<li>Smoothing, temporal filtering, etc</li>
<li>functional image denoising via <code>compcor</code> and <code>*DenoiseR</code></li>
<li>flexible: easy to estimate voxel-wise statistical models</li>
</ul></li>
<li><p>Reference simulation data and examples distributed with the package</p></li>
<li>Interpretation of results
<ul>
<li>sparse low-dimensional predictors</li>
<li>anatomical labeling of predictors based on AAL and other coordinate systems</li>
</ul></li>
<li><p>Openness and reproducibility</p></li>
</ul>
<p>In total, ANTs<em>R</em> is a rigorous framework upon which one may build customized statistical implementations appropriate for large-scale functional, structural or combined functional and structural image analyses. Because much of the core is implemented with C++, the framework also remains efficient. Finally, note that <code>Rscript</code> allows one to send ANTs<em>R</em> scripts to clusters and take advantage of distributed computing resources.</p>
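<p>As a minimal sketch of the <code>Rscript</code> workflow mentioned above (the script name and argument handling are hypothetical), a per-subject script might look like:</p>
<pre class="sourceCode r"><code class="sourceCode r">#!/usr/bin/env Rscript
# runStudy.R (hypothetical): invoked as, e.g., Rscript runStudy.R subject01.nii.gz
library( ANTsR )
args&lt;-commandArgs( trailingOnly = TRUE )
img&lt;-antsImageRead( args[1] )
img&lt;-smoothImage( img, 1.5 )
antsImageWrite( img, paste0( &quot;smoothed_&quot;, basename( args[1] ) ) )</code></pre>
<p>Each cluster job then processes one subject, which keeps per-run provenance simple.</p>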
</div>
<div id="basic-antsr-functionality" class="section level2">
<h2>Basic ANTs<em>R</em> functionality</h2>
<p>Here, we quickly summarize ANTs<em>R</em> functionality and useful tools.</p>
<p><strong>The Travis build system</strong></p>
<p>We test ANTs<em>R</em> regularly. The status of the build (and an expected build result) can be seen here: <a href="https://travis-ci.org/stnava/ANTsR"><img src="" alt="Build Status" /></a>. Take a look at the detailed log to see what one might expect if building ANTs<em>R</em> from source.</p>
<p><strong>Image input and output</strong></p>
<p>If nothing else, ANTs<em>R</em> makes it easy to read and write (medical) images and to map them into a format compatible with <em>R</em>. Formats we frequently use include jpg, tiff, mha, nii.gz and nrrd. However, only the last three have the proper physical space representation necessary for mapping. Below is an example of how we access this type of image and see its geometry. See <code>antsImageRead</code> and <code>antsImageWrite</code> for the primary supported I/O.</p>
<pre class="sourceCode r"><code class="sourceCode r">mnifilename&lt;-<span class="kw">getANTsRData</span>(<span class="st">&quot;r27&quot;</span>)
img&lt;-<span class="kw">antsImageRead</span>(mnifilename)
img</code></pre>
<pre><code>## antsImage
##   Pixel Type   : float 
##   Pixel Size   : 1 
##   Dimensions   : 256x256 
##   Voxel Spacing: 1x1 
##   Origin       : 0 0 
##   Direction    : 1 0 0 1</code></pre>
<pre class="sourceCode r"><code class="sourceCode r">retval&lt;-<span class="kw">antsImageWrite</span>(img,mnifilename)</code></pre>
<pre><code>## Exception caught during reference file writing 
## 
## itk::ExceptionObject (0x7faf2ad661a8)
## Location: &quot;unknown&quot; 
## File: /private/var/folders/63/79s26cb12wzc6pfjzjfj94s00000gn/T/RtmpvvGyqp/devtools57cf67a4b37b/stnava-ITKR-d244bf9/src/itks/Modules/IO/JPEG/src/itkJPEGImageIO.cxx
## Line: 454
## Description: itk::ERROR: JPEGImageIO(0x7faf2ad68290): JPEG supports unsigned char/int only</code></pre>
<p>The exception above arises because the example filename is a JPEG: ITK's JPEG writer supports only unsigned char/int pixels, while this <code>antsImage</code> holds <code>float</code> values. Writing to a format such as nii.gz avoids the problem.</p>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">antsGetSpacing</span>(img)</code></pre>
<pre><code>## [1] 1 1</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">antsGetDirection</span>(img)</code></pre>
<pre><code>##      [,1] [,2]
## [1,]    1    0
## [2,]    0    1</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">antsGetOrigin</span>(img)</code></pre>
<pre><code>## [1] 0 0</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>(img[<span class="dv">120</span>,<span class="dv">122</span>]) <span class="co"># same type of thing in 3 or 4D</span></code></pre>
<pre><code>##      [,1]
## [1,]  182</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>(<span class="kw">max</span>(img))</code></pre>
<pre><code>## [1] 255</code></pre>
<p><strong>Index an image with a label</strong></p>
<p>Often, you would like to summarize or extract information from a known region of an image: a region of arbitrary shape, defined by a given intensity “zone”. We simulate this below and show a few accessors and type conversions.</p>
<pre class="sourceCode r"><code class="sourceCode r">gaussimg&lt;-<span class="kw">array</span>( <span class="dt">data=</span><span class="kw">rnorm</span>(<span class="dv">125</span>), <span class="dt">dim=</span><span class="kw">c</span>(<span class="dv">5</span>,<span class="dv">5</span>,<span class="dv">5</span>) )
arrayimg&lt;-<span class="kw">array</span>( <span class="dt">data=</span>(<span class="dv">1</span>:<span class="dv">125</span>), <span class="dt">dim=</span><span class="kw">c</span>(<span class="dv">5</span>,<span class="dv">5</span>,<span class="dv">5</span>) )
img&lt;-<span class="kw">as.antsImage</span>( arrayimg )
<span class="kw">print</span>( <span class="kw">max</span>(img) )</code></pre>
<pre><code>## [1] 125</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>( <span class="kw">mean</span>(img[ img &gt;<span class="st"> </span><span class="dv">50</span>  ]))</code></pre>
<pre><code>## [1] 88</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>( <span class="kw">max</span>(img[ img &gt;=<span class="st"> </span><span class="dv">50</span> &amp;<span class="st"> </span>img &lt;=<span class="st"> </span><span class="dv">99</span>  ]))</code></pre>
<pre><code>## [1] 99</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>( <span class="kw">mean</span>( gaussimg[ img &gt;=<span class="st"> </span><span class="dv">50</span> &amp;<span class="st"> </span>img &lt;=<span class="st"> </span><span class="dv">99</span>  ]) )</code></pre>
<pre><code>## [1] -0.1592707</code></pre>
<p><strong>Convert a 4D image to a matrix</strong></p>
<p>Four dimensional images are generated and used in the same way. One can easily transform from 4D image to matrix and back.</p>
<pre class="sourceCode r"><code class="sourceCode r">gaussimg&lt;-<span class="kw">makeImage</span>(<span class="kw">c</span>(<span class="dv">5</span>,<span class="dv">5</span>,<span class="dv">5</span>,<span class="dv">10</span>), <span class="dt">voxval =</span> <span class="kw">rnorm</span>(<span class="dv">125</span>*<span class="dv">10</span>)  )
<span class="kw">print</span>(<span class="kw">dim</span>(gaussimg))</code></pre>
<pre><code>## [1]  5  5  5 10</code></pre>
<pre class="sourceCode r"><code class="sourceCode r">avg3d&lt;-<span class="kw">getAverageOfTimeSeries</span>( gaussimg )
voxelselect &lt;-<span class="st"> </span>avg3d &lt;<span class="st"> </span><span class="fl">0.25</span>
mask&lt;-<span class="kw">antsImageClone</span>( avg3d )
mask[ voxelselect  ]&lt;-<span class="dv">0</span>
mask[ !voxelselect  ]&lt;-<span class="dv">1</span>
gmat&lt;-<span class="kw">timeseries2matrix</span>( gaussimg, mask )
<span class="kw">print</span>(<span class="kw">dim</span>(gmat))</code></pre>
<pre><code>## [1] 10 33</code></pre>
<p>If one has a mask, then one can use <code>makeImage</code> to generate a new image from a scalar or vector.</p>
<pre class="sourceCode r"><code class="sourceCode r">newimg&lt;-<span class="kw">makeImage</span>( mask, <span class="kw">mean</span>(avg3d) )    <span class="co"># from scalar</span>
newimg&lt;-<span class="kw">makeImage</span>( mask, <span class="kw">colMeans</span>(gmat) ) <span class="co"># from vector</span></code></pre>
<p><strong>Convert a list of images to a matrix</strong></p>
<p>Often, one has several scalar images that need to be accumulated for statistical processing. Here, we generate a simulated set of these images and then proceed to smooth them, store them in a list and convert them to a matrix after extracting the information of each image within a data-driven mask.</p>
<pre class="sourceCode r"><code class="sourceCode r">nimages&lt;-<span class="dv">100</span>
ilist&lt;-<span class="kw">list</span>()
for ( i in <span class="dv">1</span>:nimages )
{
  simimg&lt;-<span class="kw">makeImage</span>( <span class="kw">c</span>(<span class="dv">50</span>,<span class="dv">50</span>) , <span class="kw">rnorm</span>(<span class="dv">2500</span>) )
  simimg&lt;-<span class="kw">smoothImage</span>(simimg,<span class="fl">1.5</span>)
  ilist[[i]]&lt;-simimg
}
<span class="co"># get a mask from the first image</span>
mask&lt;-<span class="kw">getMask</span>( ilist[[<span class="dv">1</span>]],
  <span class="dt">lowThresh=</span><span class="kw">mean</span>(ilist[[<span class="dv">1</span>]]), <span class="dt">cleanup=</span><span class="ot">TRUE</span> )
mat&lt;-<span class="kw">imageListToMatrix</span>( ilist, mask )
<span class="kw">print</span>(<span class="kw">dim</span>(mat))</code></pre>
<pre><code>## [1] 100 205</code></pre>
<p>Once we have a matrix representation of our population, we might run a quick voxel-wise regression within the mask. Then we look at some summary statistics.</p>
<pre class="sourceCode r"><code class="sourceCode r">mat&lt;-<span class="kw">imageListToMatrix</span>( ilist, mask )
age&lt;-<span class="kw">rnorm</span>( <span class="kw">nrow</span>(mat) ) <span class="co"># simulated age</span>
gender&lt;-<span class="kw">rep</span>( <span class="kw">c</span>(<span class="st">&quot;F&quot;</span>,<span class="st">&quot;M&quot;</span>), <span class="kw">nrow</span>(mat)/<span class="dv">2</span> ) <span class="co"># simulated gender</span>
<span class="co"># this creates &quot;real&quot; but noisy effects to detect</span>
mat&lt;-mat*(age^<span class="dv">2</span>+<span class="kw">rnorm</span>(<span class="kw">nrow</span>(mat)))
mdl&lt;-<span class="kw">lm</span>( mat ~<span class="st"> </span>age +<span class="st"> </span>gender )
mdli&lt;-<span class="kw">bigLMStats</span>( mdl, <span class="fl">1.e-4</span> )
<span class="kw">print</span>(<span class="kw">names</span>(mdli))</code></pre>
<pre><code>## [1] &quot;fstat&quot;      &quot;pval.model&quot; &quot;beta&quot;       &quot;beta.std&quot;   &quot;beta.t&quot;    
## [6] &quot;beta.pval&quot;</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>(<span class="kw">rownames</span>(mdli$beta.t))</code></pre>
<pre><code>## [1] &quot;age&quot;     &quot;genderM&quot;</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>(<span class="kw">paste</span>(<span class="st">&quot;age&quot;</span>,<span class="kw">min</span>(<span class="kw">p.adjust</span>(mdli$beta.pval[<span class="dv">1</span>,]))))</code></pre>
<pre><code>## [1] &quot;age 4.43303574020047e-05&quot;</code></pre>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">print</span>(<span class="kw">paste</span>(<span class="st">&quot;gen&quot;</span>,<span class="kw">min</span>(<span class="kw">p.adjust</span>(mdli$beta.pval[<span class="dv">2</span>,]))))</code></pre>
<pre><code>## [1] &quot;gen 1&quot;</code></pre>
<p><strong>Write out a statistical map</strong></p>
<p>We might also write out the images so that we can save them for later or look at them with other software.</p>
<pre class="sourceCode r"><code class="sourceCode r">agebetas&lt;-<span class="kw">makeImage</span>( mask , mdli$beta.t[<span class="dv">1</span>,] )
returnval&lt;-<span class="kw">antsImageWrite</span>( agebetas, <span class="kw">tempfile</span>(<span class="dt">fileext =</span><span class="st">'.nii.gz'</span>) )</code></pre>
</div>
<div id="more-antsr-functionality" class="section level2">
<h2>More ANTs<em>R</em> functionality</h2>
<p>We achieve quantification in biological or medical imaging by using prior knowledge about the image content.</p>
<p><strong>Segmentation</strong></p>
<p>In segmentation, we assume the image has a known set of tissues, organs etc. Here, we assume 3 tissues exist and use a classic k-means model with MRF penalty <span class="citation">(Avants et al. 2011)</span>. Note that we also bias correct the image to help it match our model <span class="citation">(Tustison et al. 2010)</span>.</p>
<pre class="sourceCode r"><code class="sourceCode r">fi&lt;-<span class="kw">antsImageRead</span>( <span class="kw">getANTsRData</span>(<span class="st">&quot;r16&quot;</span>) ,<span class="dv">2</span>)
fi&lt;-<span class="kw">n3BiasFieldCorrection</span>(fi,<span class="dv">2</span>)
seg&lt;-<span class="kw">kmeansSegmentation</span>( fi, <span class="dv">3</span> )
<span class="kw">invisible</span>(<span class="kw">plot</span>(seg$segmentation))</code></pre>
<p><img src="" /></p>
<p>If you like segmentation, also look at <code>rfSegmentation</code> and <code>atropos</code>.</p>
<p><strong>Registration</strong></p>
<p>In registration, we assume the image can be mapped to some canonical shape or example (i.e., an atlas), or to another individual. ANTs<em>R</em> provides a simple wrapper for SyN image registration <span class="citation">(Tustison and Avants 2013)</span>,</p>
<pre class="sourceCode r"><code class="sourceCode r">mi&lt;-<span class="kw">antsImageRead</span>( <span class="kw">getANTsRData</span>(<span class="st">&quot;r64&quot;</span>) ,<span class="dv">2</span>)
mytx&lt;-<span class="kw">antsRegistration</span>(<span class="dt">fixed=</span>fi , <span class="dt">moving=</span>mi ,
    <span class="dt">typeofTransform =</span> <span class="kw">c</span>(<span class="st">'SyN'</span>))
regresult&lt;-<span class="kw">iMath</span>(mytx$warpedmovout,<span class="st">&quot;Normalize&quot;</span>)
fiedge&lt;-<span class="kw">iMath</span>(fi,<span class="st">&quot;Canny&quot;</span>,<span class="dv">1</span>,<span class="dv">5</span>,<span class="dv">12</span>)
<span class="kw">invisible</span>(<span class="kw">plot</span>(regresult, <span class="kw">list</span>(fiedge), <span class="dt">window.overlay=</span><span class="kw">c</span>(<span class="fl">0.5</span>,<span class="dv">1</span>)) )</code></pre>
<p><img src="" /></p>
<p>while <code>invariantImageSimilarity</code> provides a powerful multi-start search for lower-dimensional affine registrations.</p>
<p>Deformable image registration results in a voxel-wise map of the contraction and expansion of the moving image (after affine transformation) that is needed to map to the fixed image. This deformation gradient is colloquially known as “the jacobian”.</p>
<pre class="sourceCode r"><code class="sourceCode r">jac&lt;-<span class="kw">createJacobianDeterminantImage</span>(fi,mytx$fwdtransforms[[<span class="dv">1</span>]],<span class="dv">1</span>)
<span class="kw">invisible</span>(<span class="kw">plot</span>(jac))</code></pre>
<p><img src="" /></p>
<p>Above, we compute and plot the log-Jacobian image. This mapping is a useful summary measurement for morphometry <span class="citation">(Avants et al. 2012; Kim et al. 2008)</span>.</p>
<p><strong>Registration and segmentation</strong></p>
<p>Registration and segmentation are often applied jointly or iteratively to maximize some criterion. See the example in <code>jointIntensityFusion</code> for one such case <span class="citation">(Wang and Yushkevich 2013)</span>.</p>
<p><strong>Neighborhood operations</strong></p>
<p>Basic I/O and management of images as vectors is critical. However, additional information can be gained by representing an image together with its neighborhood information. ANTs<em>R</em> represents image neighborhoods, which capture shape and texture, as a matrix. Here, we extract a neighborhood matrix representation of the image so that we may analyze it at a given scale.</p>
<pre class="sourceCode r"><code class="sourceCode r">mnit&lt;-<span class="kw">getANTsRData</span>(<span class="st">&quot;r16&quot;</span>)
mnit&lt;-<span class="kw">antsImageRead</span>( mnit )
mnit &lt;-<span class="st"> </span><span class="kw">resampleImage</span>( mnit , <span class="kw">rep</span>(<span class="dv">4</span>, mnit@dimension) ) <span class="co"># downsample</span>
mask2&lt;-<span class="kw">getMask</span>(mnit,<span class="dt">lowThresh=</span><span class="kw">mean</span>(mnit),<span class="dt">cleanup=</span><span class="ot">TRUE</span>)
radius &lt;-<span class="st"> </span><span class="kw">rep</span>(<span class="dv">2</span>,mnit@dimension)
mat2&lt;-<span class="kw">getNeighborhoodInMask</span>(mnit, mask2, radius,
  <span class="dt">physical.coordinates =</span> <span class="ot">FALSE</span>,
  <span class="dt">boundary.condition =</span> <span class="st">&quot;mean&quot;</span> )
<span class="kw">print</span>(<span class="kw">dim</span>(mat2))</code></pre>
<pre><code>## [1]   25 1113</code></pre>
<p>The variable <code>mat2</code> has size determined by the neighborhood radius (here, 2, giving 5 voxels along each dimension and thus 25 per neighborhood) and the number of non-zero voxels in the mask. The <code>boundary.condition</code> argument says how to treat data that falls outside of the mask or the image boundaries. This example replaces missing data with the mean in-mask value of the local neighborhood.</p>
<p>Other useful tools in ANTs<em>R</em> include <code>iMath</code>, <code>thresholdImage</code>, <code>quantifyCBF</code>, <code>preprocessfMRI</code>, <code>aslPerfusion</code>, <code>computeDVARS</code>, <code>getROIValues</code>, <code>hemodynamicRF</code>, <code>makeGraph</code>, <code>matrixToImages</code>, <code>rfSegmentation</code>, <code>antsRegistration</code>, <code>plotPrettyGraph</code>, <code>plotBasicNetwork</code>, <code>getTemplateCoordinates</code>, <code>antsSet*</code>.</p>
<p>Several image mathematics operations (like <code>ImageMath</code> in ANTs) are accessible too via <code>iMath</code>.</p>
</div>
<div id="example-label-sets-and-data" class="section level2">
<h2>Example label sets and data</h2>
<p>ANTs<em>R</em> also provides AAL label <span class="citation">(Tzourio-Mazoyer et al. 2002)</span> names via:</p>
<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">data</span>(aal,<span class="dt">package=</span><span class="st">'ANTsR'</span>)
labs&lt;-<span class="dv">1</span>:<span class="dv">90</span></code></pre>
<p>with cortical labels defined by <code>labs</code>. The DKT atlas labels <span class="citation">(Klein and Tourville 2012)</span> are similarly summarized in <code>DesikanKillianyTourville</code>.<br />An example BOLD correlation matrix is available in <code>bold_correlation_matrix</code>; it can be used to try out <code>makeGraph</code> and related functions.</p>
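<p>For instance, one can look up anatomical names for the cortical labels. This is a sketch only: it assumes the <code>aal</code> data frame carries a <code>label_name</code> column, so check <code>colnames(aal)</code> in your installed version first.</p>
<pre class="sourceCode r"><code class="sourceCode r"># 'label_name' is an assumed column name -- verify with colnames(aal)
head( aal$label_name[ labs ] )</code></pre>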
</div>
<div id="visualization-and-plotting" class="section level2">
<h2>Visualization and plotting</h2>
<p>The basic <code>plot</code> function is implemented for the <code>antsImage</code> class. It can show 2D or 3D data with color overlays and, for 3D images, can display multiple slices side by side. Several color maps are available for the overlays.</p>
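<p>A minimal sketch of the overlay capability, using an example image shipped with ANTs<em>R</em>:</p>
<pre class="sourceCode r"><code class="sourceCode r">img &lt;- antsImageRead( getANTsRData( &quot;r16&quot; ) )
# overlay a binary threshold-based map on the anatomical image
plot( img, thresholdImage( img, 90, 120 ) )</code></pre>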
<p>For 3D images, see <code>renderSurfaceFunction</code> and <code>plotBasicNetwork</code> for interactive surface and network plots based on <code>rgl</code> and <code>misc3d</code>. Another such example is in <code>visualizeBlob</code>. These are too long-running to compile into the vignette, but the help examples for these functions will let you see their results.</p>
<p>A good visualization alternative outside of ANTs<em>R</em> is <a href="https://github.com/stnava/antsSurf">antsSurf</a>.</p>
</div>
<div id="bold-data-processing-with-antsr" class="section level2">
<h2>BOLD data processing with ANTs<em>R</em></h2>
<p>ANTs<em>R</em> provides good approaches for preprocessing BOLD data. These yield both motion matrices and relevant summary measurements such as framewise displacement (FD) and DVARS. See <code>?preprocessfMRI</code> for a simplified utility function; it could be run on each run of an experiment and the results stored in an organized fashion for later use.</p>
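<p>A minimal per-run sketch; the names of the returned list entries are assumptions here and may differ across versions, so consult <code>?preprocessfMRI</code> and inspect <code>names(pre)</code>:</p>
<pre class="sourceCode r"><code class="sourceCode r">pre &lt;- preprocessfMRI( boldImage )
# assumed entry names -- check names(pre) against ?preprocessfMRI
fd    &lt;- pre$FD
dvars &lt;- pre$DVARS</code></pre>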
<p><strong>Motion correction</strong></p>
<p>To motion correct your data, one might run:</p>
<pre class="sourceCode r"><code class="sourceCode r"><span class="co"># get an average image</span>
averageImage &lt;-<span class="st"> </span><span class="kw">getAverageOfTimeSeries</span>( boldImage )
motionCorrectionResults &lt;-<span class="st"> </span><span class="kw">antsMotionCalculation</span>( boldImage,
   <span class="dt">fixed =</span> averageImage )</code></pre>
<p>The <code>moreaccurate</code> flag should be set to <code>1</code> or <code>2</code> for usable (non-test) results. FD and DVARS are returned and may be used to summarize motion. One might also obtain these measures from <code>preprocessfMRI</code>, which additionally provides denoising options based on data-driven methods, including frequency filtering.</p>
<p>For more fMRI focused tools, see <a href="http://stnava.github.io/RKRNS/">RKRNS</a> and its github site <a href="https://github.com/stnava/RKRNS">github RKRNS</a>.</p>
</div>
<div id="dimensionality-reduction" class="section level2">
<h2>Dimensionality reduction</h2>
<p>Images often have many voxels (<span class="math">\(p\)</span> voxels) and, in medical applications, this typically means that <span class="math">\(p&gt;n\)</span> or even <span class="math">\(p \gg n\)</span>, where <span class="math">\(n\)</span> is the number of subjects. Therefore, we often want to “intelligently” reduce the dimensionality of the data. We favor methods related to PCA and CCA but also provide a few ICA-related tools.</p>
<p><strong>Eigenanatomy &amp; SCCAN</strong></p>
<p>Our sparse, geometrically constrained dimensionality reduction methods seek both to explain variance and to yield interpretable, spatially localized pseudo-eigenvectors <span class="citation">(Kandel et al. 2014, <span class="citation">Cook et al. (2014)</span>)</span>. This is the point of “eigenanatomy,” a variation of sparse PCA that (optionally) uses biologically motivated smoothness, locality, or sparsity constraints.</p>
<pre class="sourceCode r"><code class="sourceCode r"><span class="co"># assume you ran the population example above</span>
eanat&lt;-<span class="kw">sparseDecom</span>( mat, mask, <span class="fl">0.2</span>, <span class="dv">5</span>, <span class="dt">cthresh=</span><span class="dv">2</span>, <span class="dt">its=</span><span class="dv">2</span> )
eseg&lt;-<span class="kw">eigSeg</span>(mask,eanat$eig,F)
jeanat&lt;-<span class="kw">joinEigenanatomy</span>(mat,mask,eanat$eig, <span class="kw">c</span>(<span class="fl">0.1</span>))
eseg2&lt;-<span class="kw">eigSeg</span>(mask,jeanat$fusedlist,F)</code></pre>
<p>The parameters in the example above are set for fast processing. See our paper for some of the theory behind these methods <span class="citation">(Kandel et al. 2014)</span>. A more realistic study setup would be:</p>
<pre class="sourceCode r"><code class="sourceCode r">eanat&lt;-<span class="kw">sparseDecom</span>( <span class="dt">inmatrix=</span>mat, <span class="dt">inmask=</span>famask, <span class="dt">nvecs=</span><span class="dv">50</span>,
  <span class="dt">sparseness=</span><span class="fl">0.005</span>, <span class="dt">cthresh=</span><span class="dv">500</span>, <span class="dt">its=</span><span class="dv">5</span>, <span class="dt">mycoption=</span><span class="dv">0</span> )
jeanat&lt;-<span class="kw">joinEigenanatomy</span>( mat , famask, eanat$eig,
  <span class="kw">c</span>(<span class="dv">1</span>:<span class="dv">20</span>)/<span class="fl">100.0</span> , <span class="dt">joinMethod=</span><span class="st">'multilevel'</span> )
useeig&lt;-eanat$eig <span class="co"># use the raw eigenvectors, or ...</span>
useeig&lt;-jeanat$fusedlist <span class="co"># ... the joined (fused) set</span>
avgmat&lt;-<span class="kw">abs</span>(<span class="kw">imageListToMatrix</span>( useeig , famask ))
avgmat&lt;-avgmat/<span class="kw">rowSums</span>(<span class="kw">abs</span>(avgmat))
imgmat&lt;-(  mat %*%<span class="st"> </span><span class="kw">t</span>(avgmat)  )</code></pre>
<p>The <code>imgmat</code> variable would then contain your summary predictors, ready to be entered into <code>lm</code> or <code>randomForest</code>.</p>
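<p>For example, with a hypothetical per-subject outcome vector <code>age</code> (one value per row of <code>imgmat</code>), one might fit:</p>
<pre class="sourceCode r"><code class="sourceCode r"># 'age' is a hypothetical outcome with one entry per subject
df  &lt;- data.frame( age = age, imgmat )
fit &lt;- lm( age ~ . , data = df )
summary( fit )</code></pre>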
<p>More information is available within the examples that can be seen within the help for <code>sparseDecom</code>, <code>sparseDecom2</code> and the helper function <code>initializeEigenanatomy</code>.</p>
<p><strong>Sparse canonical correlation analysis</strong></p>
<p>CCA maximizes <span class="math">\(\mathrm{corr}( X W^T , Z Y^T )\)</span> where <span class="math">\(X, W\)</span> are as above and <span class="math">\(Z\)</span> and <span class="math">\(Y\)</span> are similarly defined. CCA optimizes the matrices <span class="math">\(W, Y\)</span> operating on <span class="math">\(X, Z\)</span> to find a low-dimensional representation of the data pair <span class="math">\(( X , Z )\)</span> in which correlation is maximal. Following ideas outlined in <span class="citation">Dhillon et al. (2014)</span> and <span class="citation">B. B. Avants, Libon, et al. (2014)</span>, this method can be extended with sparsity constraints that yield rows of <span class="math">\(W, Y\)</span> with a controllable number of non-zero entries. See the <a href="http://stnava.github.io/sccanTutorial/">sccan tutorial</a> and <code>sparseDecom2</code> for more information.</p>
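<p>A sparse CCA call might look like the sketch below, pairing the image matrix with a hypothetical matrix of cognitive scores; the parameter values are illustrative only:</p>
<pre class="sourceCode r"><code class="sourceCode r"># 'cognition' is a hypothetical n-by-q matrix of cognitive scores
scca &lt;- sparseDecom2( inmatrix = list( mat, cognition ),
  inmask = c( famask, NA ), nvecs = 5,
  sparseness = c( 0.01, -0.5 ), its = 5, cthresh = c( 250, 0 ) )</code></pre>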
</div>
<div id="conclusions" class="section level2">
<h2>Conclusions</h2>
<p>With the current ANTs<em>R</em>, one may:</p>
<ul>
<li><p>Exploit ANTs and ITK functionality within <em>R</em></p></li>
<li><p>Leverage <em>R</em> functionality to help understand and interpret imaging data</p></li>
<li><p>Use feature selection based on various filtering strategies in <code>iMath</code> and elsewhere (e.g., <code>segmentShapeFromImage</code>)</p></li>
<li><p>Employ dimensionality reduction through eigenanatomy or SCCAN with a variety of incarnations, some of which are similar to ICA</p></li>
<li><p>Use relatively few interpretable and low-dimensional predictors derived from high-dimensional data.</p></li>
<li><p>Interpret multivariate results intuitively when used in combination with standard <em>R</em> visualization.</p></li>
</ul>
<p>See <a href="https://github.com/stnava/ANTsR">ANTsR</a> for all source code and documentation and <a href="http://stnava.github.io/RKRNS">RKRNS-talk</a> for html slides that discuss extensions to BOLD decoding.</p>
<p>Enjoy and please refer issues to <a href="https://github.com/stnava/ANTsR/issues">ANTs<em>R</em> issues</a>.</p>
<div class="references">
<h1>References</h1>
<p>Avants, Brian B. et al. 2015. “The Pediatric Template of Brain Perfusion.” <em>Sci. Data</em>.</p>
<p>Avants, Brian B., David J. Libon, Katya Rascovsky, Ashley Boller, Corey T. McMillan, Lauren Massimo, H Branch Coslett, Anjan Chatterjee, Rachel G. Gross, and Murray Grossman. 2014. “Sparse Canonical Correlation Analysis Relates Network-Level Atrophy to Multivariate Cognitive Measures in a Neurodegenerative Population.” <em>Neuroimage</em> 84 (Jan). Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, PA, USA.: 698–711. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2013.09.048">10.1016/j.neuroimage.2013.09.048</a>.</p>
<p>Avants, Brian B., Nicholas J. Tustison, Michael Stauffer, Gang Song, Baohua Wu, and James C. Gee. 2014. “The Insight ToolKit Image Registration Framework.” <em>Front Neuroinform</em> 8. Penn Image Computing; Science Laboratory, Department of Radiology, University of Pennsylvania Philadelphia, PA, USA.: 44. doi:<a href="http://dx.doi.org/10.3389/fninf.2014.00044">10.3389/fninf.2014.00044</a>.</p>
<p>Avants, Brian B., Nicholas J. Tustison, Jue Wu, Philip A. Cook, and James C. Gee. 2011. “An Open Source Multivariate Framework for N-Tissue Segmentation with Evaluation on Public Data.” <em>Neuroinformatics</em> 9 (4). Penn Image Computing; Science Laboratory, University of Pennsylvania, 3600 Market Street, Suite 370, Philadelphia, PA 19104, USA. stnava@gmail.com: 381–400. doi:<a href="http://dx.doi.org/10.1007/s12021-011-9109-y">10.1007/s12021-011-9109-y</a>.</p>
<p>Avants, Brian, Paramveer Dhillon, Benjamin M. Kandel, Philip A. Cook, Corey T. McMillan, Murray Grossman, and James C. Gee. 2012. “Eigenanatomy Improves Detection Power for Longitudinal Cortical Change.” <em>Med Image Comput Comput Assist Interv</em> 15 (Pt 3). Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA.: 206–13.</p>
<p>Cook, Philip A., Corey T. McMillan, Brian B. Avants, Jonathan E. Peelle, James C. Gee, and Murray Grossman. 2014. “Relating Brain Anatomy and Cognitive Ability Using a Multivariate Multimodal Framework.” <em>Neuroimage</em>, May. Penn Frontotemporal Degeneration Center, Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2014.05.008">10.1016/j.neuroimage.2014.05.008</a>.</p>
<p><span>de Marvao</span>, Antonio, Timothy J W. Dawes, Wenzhe Shi, Christopher Minas, Niall G. Keenan, Tamara Diamond, Giuliana Durighel, et al. 2014. “Population-Based Studies of Myocardial Hypertrophy: High Resolution Cardiovascular Magnetic Resonance Atlases Improve Statistical Power.” <em>J Cardiovasc Magn Reson</em> 16. th Hospital Campus, Du Cane Road, London W12 0NN, UK. antonio.de-marvao10@imperial.ac.uk.: 16. doi:<a href="http://dx.doi.org/10.1186/1532-429X-16-16">10.1186/1532-429X-16-16</a>.</p>
<p>Dhillon, Paramveer S., David A. Wolk, Sandhitsu R. Das, Lyle H. Ungar, James C. Gee, and Brian B. Avants. 2014. “Subject-Specific Functional Parcellation via Prior Based Eigenanatomy.” <em>Neuroimage</em>, May. Penn Image Computing; Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2014.05.026">10.1016/j.neuroimage.2014.05.026</a>.</p>
<p>Eddelbuettel, Dirk. 2013. <em>Seamless R and C++ Integration with Rcpp</em>. New York: Springer.</p>
<p>Johnson, G Allan, Alexandra Badea, Jeffrey Brandenburg, Gary Cofer, Boma Fubara, Song Liu, and Jonathan Nissanov. 2010. “Waxholm Space: an Image-Based Reference for Coordinating Mouse Brain Research.” <em>Neuroimage</em> 53 (2). Duke Center for In Vivo Microscopy, Radiology, Duke University Medical Center, Durham, NC 27710, USA. gjohnson@duke.edu: 365–72. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2010.06.067">10.1016/j.neuroimage.2010.06.067</a>.</p>
<p>Kandel, Benjamin M., Danny J J. Wang, John A. Detre, James C. Gee, and Brian B. Avants. 2015. “Decomposing Cerebral Blood Flow MRI into Functional and Structural Components: a Non-Local Approach Based on Prediction.” <em>Neuroimage</em> 105 (Jan). Penn Image Computing; Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA.: 156–70. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2014.10.052">10.1016/j.neuroimage.2014.10.052</a>.</p>
<p>Kandel, Benjamin M., Danny J J. Wang, James C. Gee, and Brian B. Avants. 2014. “Eigenanatomy: Sparse Dimensionality Reduction for Multi-Modal Medical Image Analysis.” <em>Methods</em>, Oct. Penn Image Computing; Science Laboratory, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, United States. doi:<a href="http://dx.doi.org/10.1016/j.ymeth.2014.10.016">10.1016/j.ymeth.2014.10.016</a>.</p>
<p>Kim, Junghoon, Brian Avants, Sunil Patel, John Whyte, Branch H. Coslett, John Pluta, John A. Detre, and James C. Gee. 2008. “Structural Consequences of Diffuse Traumatic Brain Injury: a Large Deformation Tensor-Based Morphometry Study.” <em>Neuroimage</em> 39 (3). Moss Rehabilitation Research Institute, Albert Einstein Healthcare Network, Philadelphia, PA 19141, USA. kimj@einstein.edu: 1014–26. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2007.10.005">10.1016/j.neuroimage.2007.10.005</a>.</p>
<p>Klein, Arno, and Jason Tourville. 2012. “101 Labeled Brain Images and a Consistent Human Cortical Labeling Protocol.” <em>Front Neurosci</em> 6. of Medicine Stony Brook, NY, USA ; Department of Psychiatry, Columbia University New York, NY, USA.: 171. doi:<a href="http://dx.doi.org/10.3389/fnins.2012.00171">10.3389/fnins.2012.00171</a>.</p>
<p>Majka, Piotr, Jakub M. Kowalski, Natalia Chlodzinska, and Daniel K. W<span>ó</span>jcik. 2013. “3D Brain Atlas Reconstructor Service–online Repository of Three-Dimensional Models of Brain Structures.” <em>Neuroinformatics</em> 11 (4). Nencki Institute of Experimental Biology, 3 Pasteur Street, 02-093, Warsaw, Poland, p.majka@nencki.gov.pl.: 507–18. doi:<a href="http://dx.doi.org/10.1007/s12021-013-9199-9">10.1007/s12021-013-9199-9</a>.</p>
<p>Talairach, J., and P. Tournoux. 1958. “[Stereotaxic Localization of Central Gray Nuclei].” <em>Neurochirurgia (Stuttg)</em> 1 (1): 88–93. doi:<a href="http://dx.doi.org/10.1055/s-0028-1095515">10.1055/s-0028-1095515</a>.</p>
<p>Tustison, Nicholas J., and Brian B. Avants. 2013. “Explicit B-Spline Regularization in Diffeomorphic Image Registration.” <em>Front Neuroinform</em> 7. Penn Image Computing; Science Laboratory, Department of Radiology, University of Pennsylvania Philadelphia, PA, USA.: 39. doi:<a href="http://dx.doi.org/10.3389/fninf.2013.00039">10.3389/fninf.2013.00039</a>.</p>
<p>Tustison, Nicholas J., Brian B. Avants, Philip A. Cook, Yuanjie Zheng, Alexander Egan, Paul A. Yushkevich, and James C. Gee. 2010. “N4ITK: Improved N3 Bias Correction.” <em>IEEE Trans Med Imaging</em> 29 (6). Department of Radiology, University of Pennsylvania, Philadelphia, PA 19140, USA. ntustison@wustl.edu: 1310–20. doi:<a href="http://dx.doi.org/10.1109/TMI.2010.2046908">10.1109/TMI.2010.2046908</a>.</p>
<p>Tustison, Nicholas J., Philip A. Cook, Arno Klein, Gang Song, Sandhitsu R. Das, Jeffrey T. Duda, Benjamin M. Kandel, et al. 2014. “Large-Scale Evaluation of ANTs and FreeSurfer Cortical Thickness Measurements.” <em>Neuroimage</em> 99 (Oct). Penn Image Computing; Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA.: 166–79. doi:<a href="http://dx.doi.org/10.1016/j.neuroimage.2014.05.044">10.1016/j.neuroimage.2014.05.044</a>.</p>
<p>Tustison, Nicholas J., K. L. Shrinidhi, Max Wintermark, Christopher R. Durst, Benjamin M. Kandel, James C. Gee, Murray C. Grossman, and Brian B. Avants. 2014. “Optimal Symmetric Multimodal Templates and Concatenated Random Forests for Supervised Brain Tumor Segmentation (Simplified) with ANTsR.” <em>Neuroinformatics</em>, Nov. Department of Radiology; Medical Imaging, University of Virginia, Charlottesville, VA, USA, ntustison@virginia.edu. doi:<a href="http://dx.doi.org/10.1007/s12021-014-9245-2">10.1007/s12021-014-9245-2</a>.</p>
<p>Tzourio-Mazoyer, N., B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot. 2002. “Automated Anatomical Labeling of Activations in SPM Using a Macroscopic Anatomical Parcellation of the MNI MRI Single-Subject Brain.” <em>Neuroimage</em> 15 (1). Groupe d’Imagerie Neurofonctionnelle, UMR 6095 CNRS CEA, Université de Caen, Université de Paris 5, France.: 273–89. doi:<a href="http://dx.doi.org/10.1006/nimg.2001.0978">10.1006/nimg.2001.0978</a>.</p>
<p>Wang, Hongzhi, and Paul A. Yushkevich. 2013. “Multi-Atlas Segmentation with Joint Label Fusion and Corrective Learning-an Open Source Implementation.” <em>Front Neuroinform</em> 7. Department of Radiology, PICSL, Perelman School of Medicine at the University of Pennsylvania Philadelphia, PA, USA.: 27. doi:<a href="http://dx.doi.org/10.3389/fninf.2013.00027">10.3389/fninf.2013.00027</a>.</p>
</div>
</div>
</div>



<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
  (function () {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src  = "https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
    document.getElementsByTagName("head")[0].appendChild(script);
  })();
</script>

</body>
</html>
