<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">

<head>

<meta charset="utf-8" />
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="pandoc" />

<meta name="viewport" content="width=device-width, initial-scale=1">

<meta name="author" content="Jared Huling" />

<meta name="date" content="2018-09-14" />

<title>Usage of the Personalized Package</title>



<style type="text/css">code{white-space: pre;}</style>
<style type="text/css">
a.sourceLine { display: inline-block; line-height: 1.25; }
a.sourceLine { pointer-events: none; color: inherit; text-decoration: inherit; }
a.sourceLine:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode { white-space: pre; position: relative; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
code.sourceCode { white-space: pre-wrap; }
a.sourceLine { text-indent: -1em; padding-left: 1em; }
}
pre.numberSource a.sourceLine
  { position: relative; left: -4em; }
pre.numberSource a.sourceLine::before
  { content: attr(data-line-number);
    position: relative; left: -1em; text-align: right; vertical-align: baseline;
    border: none; pointer-events: all; display: inline-block;
    -webkit-touch-callout: none; -webkit-user-select: none;
    -khtml-user-select: none; -moz-user-select: none;
    -ms-user-select: none; user-select: none;
    padding: 0 4px; width: 4em;
    color: #aaaaaa;
  }
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa;  padding-left: 4px; }
div.sourceCode
  {  }
@media screen {
a.sourceLine::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; } /* Alert */
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code span.at { color: #7d9029; } /* Attribute */
code span.bn { color: #40a070; } /* BaseN */
code span.bu { } /* BuiltIn */
code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code span.ch { color: #4070a0; } /* Char */
code span.cn { color: #880000; } /* Constant */
code span.co { color: #60a0b0; font-style: italic; } /* Comment */
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code span.do { color: #ba2121; font-style: italic; } /* Documentation */
code span.dt { color: #902000; } /* DataType */
code span.dv { color: #40a070; } /* DecVal */
code span.er { color: #ff0000; font-weight: bold; } /* Error */
code span.ex { } /* Extension */
code span.fl { color: #40a070; } /* Float */
code span.fu { color: #06287e; } /* Function */
code span.im { } /* Import */
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
code span.kw { color: #007020; font-weight: bold; } /* Keyword */
code span.op { color: #666666; } /* Operator */
code span.ot { color: #007020; } /* Other */
code span.pp { color: #bc7a00; } /* Preprocessor */
code span.sc { color: #4070a0; } /* SpecialChar */
code span.ss { color: #bb6688; } /* SpecialString */
code span.st { color: #4070a0; } /* String */
code span.va { color: #19177c; } /* Variable */
code span.vs { color: #4070a0; } /* VerbatimString */
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
</style>



<style type="text/css">body {
background-color: #fff;
margin: 1em auto;
max-width: 700px;
overflow: visible;
padding-left: 2em;
padding-right: 2em;
font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 14px;
line-height: 1.35;
}
#header {
text-align: center;
}
#TOC {
clear: both;
margin: 0 0 10px 10px;
padding: 4px;
width: 400px;
border: 1px solid #CCCCCC;
border-radius: 5px;
background-color: #f6f6f6;
font-size: 13px;
line-height: 1.3;
}
#TOC .toctitle {
font-weight: bold;
font-size: 15px;
margin-left: 5px;
}
#TOC ul {
padding-left: 40px;
margin-left: -1.5em;
margin-top: 5px;
margin-bottom: 5px;
}
#TOC ul ul {
margin-left: -2em;
}
#TOC li {
line-height: 16px;
}
table {
margin: 1em auto;
border-width: 1px;
border-color: #DDDDDD;
border-style: outset;
border-collapse: collapse;
}
table th {
border-width: 2px;
padding: 5px;
border-style: inset;
}
table td {
border-width: 1px;
border-style: inset;
line-height: 18px;
padding: 5px 5px;
}
table, table th, table td {
border-left-style: none;
border-right-style: none;
}
table thead, table tr.even {
background-color: #f7f7f7;
}
p {
margin: 0.5em 0;
}
blockquote {
background-color: #f6f6f6;
padding: 0.25em 0.75em;
}
hr {
border-style: solid;
border: none;
border-top: 1px solid #777;
margin: 28px 0;
}
dl {
margin-left: 0;
}
dl dd {
margin-bottom: 13px;
margin-left: 13px;
}
dl dt {
font-weight: bold;
}
ul {
margin-top: 0;
}
ul li {
list-style: circle outside;
}
ul ul {
margin-bottom: 0;
}
pre, code {
background-color: #f7f7f7;
border-radius: 3px;
color: #333;
white-space: pre-wrap; 
}
pre {
border-radius: 3px;
margin: 5px 0px 10px 0px;
padding: 10px;
}
pre:not([class]) {
background-color: #f7f7f7;
}
code {
font-family: Consolas, Monaco, 'Courier New', monospace;
font-size: 85%;
}
p > code, li > code {
padding: 2px 0px;
}
div.figure {
text-align: center;
}
img {
background-color: #FFFFFF;
padding: 2px;
border: 1px solid #DDDDDD;
border-radius: 3px;
border: 1px solid #CCCCCC;
margin: 0 5px;
}
h1 {
margin-top: 0;
font-size: 35px;
line-height: 40px;
}
h2 {
border-bottom: 4px solid #f7f7f7;
padding-top: 10px;
padding-bottom: 2px;
font-size: 145%;
}
h3 {
border-bottom: 2px solid #f7f7f7;
padding-top: 10px;
font-size: 120%;
}
h4 {
border-bottom: 1px solid #f7f7f7;
margin-left: 8px;
font-size: 105%;
}
h5, h6 {
border-bottom: 1px solid #ccc;
font-size: 105%;
}
a {
color: #0033dd;
text-decoration: none;
}
a:hover {
color: #6666ff; }
a:visited {
color: #800080; }
a:visited:hover {
color: #BB00BB; }
a[href^="http:"] {
text-decoration: underline; }
a[href^="https:"] {
text-decoration: underline; }

code > span.kw { color: #555; font-weight: bold; } 
code > span.dt { color: #902000; } 
code > span.dv { color: #40a070; } 
code > span.bn { color: #d14; } 
code > span.fl { color: #d14; } 
code > span.ch { color: #d14; } 
code > span.st { color: #d14; } 
code > span.co { color: #888888; font-style: italic; } 
code > span.ot { color: #007020; } 
code > span.al { color: #ff0000; font-weight: bold; } 
code > span.fu { color: #900; font-weight: bold; }  code > span.er { color: #a61717; background-color: #e3d2d2; } 
</style>

</head>

<body>




<h1 class="title toc-ignore">Usage of the Personalized Package</h1>
<h4 class="author"><em>Jared Huling</em></h4>
<h4 class="date"><em>2018-09-14</em></h4>


<div id="TOC">
<ul>
<li><a href="#introduction-to-personalized"><span class="toc-section-number">1</span> Introduction to <code>personalized</code></a><ul>
<li><a href="#choice-of-m-function"><span class="toc-section-number">1.0.1</span> Choice of <span class="math inline">\(M\)</span> function</a></li>
<li><a href="#choice-of-f"><span class="toc-section-number">1.0.2</span> Choice of <span class="math inline">\(f\)</span></a></li>
<li><a href="#variable-selection"><span class="toc-section-number">1.0.3</span> Variable Selection</a></li>
<li><a href="#extension-to-multi-category-treatments"><span class="toc-section-number">1.1</span> Extension to multi-category treatments</a></li>
</ul></li>
<li><a href="#quick-usage-reference"><span class="toc-section-number">2</span> Quick Usage Reference</a><ul>
<li><a href="#creating-and-checking-propensity-score-model"><span class="toc-section-number">2.1</span> Creating and Checking Propensity Score Model</a></li>
<li><a href="#fitting-subgroup-identification-models"><span class="toc-section-number">2.2</span> Fitting Subgroup Identification Models</a></li>
<li><a href="#evaluating-treatment-effects-within-estimated-subgroups"><span class="toc-section-number">2.3</span> Evaluating Treatment Effects within Estimated Subgroups</a></li>
</ul></li>
<li><a href="#user-guide"><span class="toc-section-number">3</span> User Guide</a><ul>
<li><a href="#overview"><span class="toc-section-number">3.1</span> Overview</a></li>
<li><a href="#creating-and-checking-a-propensity-score-model"><span class="toc-section-number">3.2</span> Creating and Checking a Propensity Score Model</a><ul>
<li><a href="#observational-studies"><span class="toc-section-number">3.2.1</span> Observational Studies</a></li>
<li><a href="#randomized-controlled-trials"><span class="toc-section-number">3.2.2</span> Randomized Controlled Trials</a></li>
</ul></li>
<li><a href="#fitting-subgroup-identification-models-1"><span class="toc-section-number">3.3</span> Fitting Subgroup Identification Models</a><ul>
<li><a href="#overview-1"><span class="toc-section-number">3.3.1</span> Overview</a></li>
<li><a href="#explanation-of-major-function-arguments"><span class="toc-section-number">3.3.2</span> Explanation of Major Function Arguments</a></li>
<li><a href="#continuous-outcomes"><span class="toc-section-number">3.3.3</span> Continuous Outcomes</a></li>
<li><a href="#binary-outcomes"><span class="toc-section-number">3.3.4</span> Binary Outcomes</a></li>
<li><a href="#count-outcomes"><span class="toc-section-number">3.3.5</span> Count Outcomes</a></li>
<li><a href="#time-to-event-outcomes"><span class="toc-section-number">3.3.6</span> Time-to-event Outcomes</a></li>
<li><a href="#efficiency-augmentation"><span class="toc-section-number">3.3.7</span> Efficiency Augmentation</a></li>
<li><a href="#plotting-fitted-models"><span class="toc-section-number">3.3.8</span> Plotting Fitted Models</a></li>
<li><a href="#comparing-subgroups-from-a-fitted-model"><span class="toc-section-number">3.3.9</span> Comparing Subgroups from a Fitted Model</a></li>
</ul></li>
<li><a href="#validating-subgroup-identification-models"><span class="toc-section-number">3.4</span> Validating Subgroup Identification Models</a><ul>
<li><a href="#overview-2"><span class="toc-section-number">3.4.1</span> Overview</a></li>
<li><a href="#repeated-trainingtest-splitting"><span class="toc-section-number">3.4.2</span> Repeated Training/Test Splitting</a></li>
<li><a href="#bootstrap-bias-correction"><span class="toc-section-number">3.4.3</span> Bootstrap Bias Correction</a></li>
<li><a href="#plotting-validated-models"><span class="toc-section-number">3.4.4</span> Plotting Validated Models</a></li>
</ul></li>
</ul></li>
</ul>
</div>

<div id="introduction-to-personalized" class="section level1">
<h1><span class="header-section-number">1</span> Introduction to <code>personalized</code></h1>
<p>The <code>personalized</code> package aims to provide an entire analysis pipeline that encompasses a broad class of statistical methods for subgroup identification / personalized medicine.</p>
<p>The general analysis pipeline is as follows:</p>
<ol style="list-style-type: decimal">
<li>Construct propensity score function and check propensity score diagnostics</li>
<li>Choose and fit a subgroup identification model</li>
<li>Estimate the resulting treatment effects among estimated subgroups</li>
<li>Visualize and examine model and subgroup treatment effects</li>
</ol>
<p>The available subgroup identification models are those under the purview of the general subgroup identification framework proposed by Chen, et al. (2017). In this section we will give a brief summary of this framework and what elements of it are available in the <code>personalized</code> package.</p>
<p>In general we are interested in understanding the impact of a treatment on an outcome and, in particular, in determining whether and how different patients respond differently to a treatment in terms of their expected outcome. Assume the outcome we observe <span class="math inline">\(Y\)</span> is such that larger values are preferable. In addition to the outcome, we also observe patient covariate information <span class="math inline">\(X \in \mathbb{R}^p\)</span> and the treatment status <span class="math inline">\(T \in \{-1,1\}\)</span>, where <span class="math inline">\(T = 1\)</span> indicates that a patient received the treatment and <span class="math inline">\(T = -1\)</span> indicates that a patient received the control. For the purposes of this package, we consider an unspecified form for the expected outcome conditional on the covariate and treatment status information: <span class="math display">\[E(Y|T, X) = g(X) + T\Delta(X)/2,\]</span> where <span class="math inline">\(\Delta(X) \equiv E(Y|T=1, X) - E(Y|T=-1, X)\)</span> is of primary interest and <span class="math inline">\(g(X) \equiv \frac{1}{2}\{E(Y|T=1, X) + E(Y|T=-1, X) \}\)</span> represents covariate main effects. Here, <span class="math inline">\(\Delta(X)\)</span> represents the interaction between treatment and covariates and thus drives heterogeneity of the treatment effect. The purpose of the <code>personalized</code> package is the estimation of <span class="math inline">\(\Delta(X)\)</span>, or monotone transformations of <span class="math inline">\(\Delta(X)\)</span>, which can be used to stratify the population into subgroups (e.g. a subgroup of patients who benefit from the treatment and a subgroup who do not).</p>
<p>We call the term <span class="math inline">\(\Delta(X)\)</span> a benefit score, as it reflects how much a patient is expected to benefit from a treatment in terms of their outcome. For a patient with <span class="math inline">\(X = x\)</span>, if <span class="math inline">\(\Delta(x) &gt; 0\)</span> (assuming larger outcomes are better), the treatment is beneficial in terms of the expected outcome, and if <span class="math inline">\(\Delta(x) \leq 0\)</span>, the control is better than the treatment. Hence to identify which subgroup of patients benefits from a treatment, we seek to estimate <span class="math inline">\(\Delta(X)\)</span>.</p>
<p>In the framework of Chen, et al. (2017), there are two main methods for estimating subgroups. The first is called the weighting method. The weighting method estimates <span class="math inline">\(\Delta(X)\)</span> (or monotone transformations of it) by minimizing the following objective function with respect to <span class="math inline">\(f(X)\)</span>: <span class="math display">\[L_W(f) = \frac{1}{n}\sum_{i = 1}^n\frac{M(Y_i, T_i\times f(x_i)) }{ {T_i\pi(x_i)+(1-T_i)/2} },\]</span> where <span class="math inline">\(\pi(x) = Pr(T = 1|X = x)\)</span> is the propensity score function. Here, <span class="math inline">\(\hat{f}\)</span> is our estimated benefit score. Hence <span class="math inline">\(\hat{f} = \mbox{argmin}_f L_W(f)\)</span> is our estimate of <span class="math inline">\(\Delta(X)\)</span>. If we want a simple functional form for the estimate <span class="math inline">\(\hat{f}\)</span>, we can restrict the form of <span class="math inline">\(f\)</span> such that it is a linear combination of the covariates, i.e. <span class="math inline">\(f(X) = X^T\beta\)</span>. Hence <span class="math inline">\(\hat{f}(X) = X^T\hat{\beta}\)</span>.</p>
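<p>As a concrete illustration (a minimal sketch, not part of the package’s API), the denominator of the weighting loss can be computed directly. Here we assume <code>trt</code> is coded as -1/1 and <code>pi.x</code> holds fitted propensity scores:</p>
<pre class="r"><code># denominator of the weighting loss L_W:
# T_i * pi(x_i) + (1 - T_i) / 2, assuming trt coded as -1/1
ipw.denom &lt;- function(trt, pi.x) {
  trt * pi.x + (1 - trt) / 2
}

# for pi(x) = 0.7: a treated patient gets 0.7, a control patient gets 0.3
ipw.denom(c(1, -1), c(0.7, 0.7))</code></pre>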
<p>The second method is A-learning; its estimator is the minimizer of <span class="math display">\[L_A(f) = \frac{1}{n}\sum_{i = 1}^n M(Y_i, \{(T_i+1)/2 - \pi(x_i)\}\times f(x_i)).\]</span></p>
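<p>With the squared error loss and a linear <code>f</code>, minimizing the A-learning criterion amounts to a regression of the outcome on a “modified covariate.” The following is a minimal sketch (not the package’s internal code), assuming <code>x</code>, <code>y</code>, <code>trt</code> (coded -1/1), and fitted propensity scores <code>pi.x</code> are available:</p>
<pre class="r"><code>library(glmnet)

# A-learning with squared error loss and linear f:
# regress y on the modified covariate ((T + 1)/2 - pi(x)) * x
a.mod  &lt;- (trt + 1) / 2 - pi.x
fit.al &lt;- cv.glmnet(x = a.mod * x, y = y)</code></pre>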
<div id="choice-of-m-function" class="section level3">
<h3><span class="header-section-number">1.0.1</span> Choice of <span class="math inline">\(M\)</span> function</h3>
<p>The <code>personalized</code> package offers a flexible range of choices both for the form of <span class="math inline">\(f(X)\)</span> and for the loss function <span class="math inline">\(M(y, v)\)</span>. Most choices of <span class="math inline">\(f\)</span> and <span class="math inline">\(M\)</span> can be used with either the weighting method or the A-learning method. In this package, we limit <span class="math inline">\(M\)</span> to natural choices corresponding to the type of outcome: the squared error loss <span class="math inline">\(M(y, v) = (v - y) ^ 2\)</span> corresponds to continuous responses (but can also be used for binary outcomes), the logistic loss <span class="math inline">\(M(y, v) = y \cdot \log(1 + \exp\{-v\})\)</span> corresponds to binary outcomes, and the loss associated with the negative partial likelihood of the Cox proportional hazards model corresponds to time-to-event outcomes. The available losses are summarized in the following table.</p>
<table>
<colgroup>
<col width="31%"></col>
<col width="31%"></col>
<col width="36%"></col>
</colgroup>
<thead>
<tr class="header">
<th>Name</th>
<th>Outcomes</th>
<th align="left">Loss</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Squared Error</td>
<td>C/B/CT</td>
<td align="left"><span class="math inline">\(M(y, v) = (v - y) ^ 2\)</span></td>
</tr>
<tr class="even">
<td>OWL Logistic</td>
<td>C/B/CT</td>
<td align="left"><span class="math inline">\(M(y, v) = y\log(1 + \exp\{-v\})\)</span></td>
</tr>
<tr class="odd">
<td>OWL Logistic Flip</td>
<td>C/B/CT</td>
<td align="left"><span class="math inline">\(M(y, v) = \vert y\vert \log(1 + \exp\{-\mbox{sign}(y)v\})\)</span></td>
</tr>
<tr class="even">
<td>OWL Hinge</td>
<td>C/B/CT</td>
<td align="left"><span class="math inline">\(M(y, v) = y\max(0, 1 - v)\)</span></td>
</tr>
<tr class="odd">
<td>OWL Hinge Flip</td>
<td>C/B/CT</td>
<td align="left"><span class="math inline">\(M(y, v) = \vert y\vert\max(0, 1 - \mbox{sign}(y)v)\)</span></td>
</tr>
<tr class="even">
<td>Logistic</td>
<td>B</td>
<td align="left"><span class="math inline">\(M(y, v) = -[yv - \log(1 + \mbox{exp}\{-v\})]\)</span></td>
</tr>
<tr class="odd">
<td>Poisson</td>
<td>CT</td>
<td align="left"><span class="math inline">\(M(y, v) = -[yv - \exp(v)]\)</span></td>
</tr>
<tr class="even">
<td>Cox</td>
<td>TTE</td>
<td align="left"><span class="math inline">\(M(y, v) = -\left\{ \int_0^\tau\left( v - \log[E\{ e^vI(X \geq u) \}] \right)\mathrm{d} N(u) \right\}\)</span></td>
</tr>
</tbody>
</table>
<p>Here, “C” indicates usage for continuous outcomes, “B” for binary outcomes, “CT” for count outcomes, and “TTE” for time-to-event outcomes. For the Cox loss, <span class="math inline">\(y = (X, \delta) = \{ \widetilde{X} \wedge C, I(\widetilde{X} \leq C) \}\)</span>, where <span class="math inline">\(\widetilde{X}\)</span> is the survival time and <span class="math inline">\(C\)</span> is the censoring time, <span class="math inline">\(N(t) = I(\widetilde{X} \leq t)\delta\)</span>, and <span class="math inline">\(\tau\)</span> is a fixed time point such that <span class="math inline">\(P(X \geq \tau) &gt; 0\)</span>.</p>
</div>
<div id="choice-of-f" class="section level3">
<h3><span class="header-section-number">1.0.2</span> Choice of <span class="math inline">\(f\)</span></h3>
<p>The choices of <span class="math inline">\(f\)</span> offered in the <code>personalized</code> package are varied. A familiar, interpretable choice of <span class="math inline">\(f(X)\)</span> is <span class="math inline">\(X^T\beta\)</span>. Also offered is an additive model, i.e. <span class="math inline">\(f(X) = \sum_{j = 1}^pf_j(X_j)\)</span>; this option is accessed through use of the <code>mgcv</code> package, which provides estimation procedures for generalized additive models (GAMs). Another flexible, but less interpretable choice offered here is related to gradient boosted decision trees, which model <span class="math inline">\(f\)</span> as <span class="math inline">\(f(X) = \sum_{k = 1}^Kf_k(X)\)</span>, where each <span class="math inline">\(f_k\)</span> is a decision tree model.</p>
</div>
<div id="variable-selection" class="section level3">
<h3><span class="header-section-number">1.0.3</span> Variable Selection</h3>
<p>For subgroup identification models with <span class="math inline">\(f(X) = X^T\beta\)</span>, the <code>personalized</code> package also allows for variable selection. Instead of minimizing <span class="math inline">\(L_W(f)\)</span> or <span class="math inline">\(L_A(f)\)</span>, we instead minimize a penalized version: <span class="math inline">\(L_W(f) + \lambda||\beta||_1\)</span> or <span class="math inline">\(L_A(f) + \lambda||\beta||_1\)</span>.</p>
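<p>To make the penalized criterion concrete: for the squared error loss with a linear <code>f</code>, the penalized weighting criterion reduces to a weighted lasso regression of the outcome on the treatment-signed covariates. A rough sketch follows (in practice <code>fit.subgroup()</code> performs this internally; here <code>pi.x</code> denotes fitted propensity scores and <code>trt</code> is coded -1/1):</p>
<pre class="r"><code>library(glmnet)

# weighted lasso form of L_W(f) + lambda * ||beta||_1
# with M(y, v) = (v - y)^2 and f(X) = X'beta
w       &lt;- 1 / (trt * pi.x + (1 - trt) / 2)  # inverse-probability weights
fit.pen &lt;- cv.glmnet(x = trt * x, y = y, weights = w)</code></pre>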
</div>
<div id="extension-to-multi-category-treatments" class="section level2">
<h2><span class="header-section-number">1.1</span> Extension to multi-category treatments</h2>
<p>Often, multiple treatment options are available to patients rather than a single treatment and a control, and the researcher may wish to understand which of the available treatments is best for which patients. Extending the above methodology to multi-category treatments introduces added complications; in particular, there is no straightforward extension of the A-learning method to multiple treatment settings. In the supplementary material of the accompanying methodological work, the weighting method was extended to estimate a benefit score corresponding to each level of a treatment, subject to a sum-to-zero constraint for identifiability. <!-- We assume that the expected outcome conditional on the covariate and treatment status information can be represented by  --> <!-- \begin{equation}\label{eqn:outcome_model_mult_trt} --> <!-- E(Y|\bfX, T) = g(\bfX) + \sum_{k = 1}^{K} 1(T = k)\Delta_k(\bfX), --> <!-- \end{equation} --> <!-- where $\sum_{k = 1}^{K} \Delta_k(\bfX) = 0$.   --> In particular, we are interested in estimating (the sign of) <span class="math display">\[\begin{eqnarray}
\Delta_{kl}(x) \equiv 
\{ E(Y | T = k, { X} = { x}) - E(Y | T = l, X = { x}) \} \label{definition of Delta_kl}
\end{eqnarray}\]</span> If <span class="math inline">\(\Delta_{kl}(x) &gt; 0\)</span>, then treatment <span class="math inline">\(k\)</span> is preferable to treatment <span class="math inline">\(l\)</span> for a patient with <span class="math inline">\(X = x\)</span>. For each patient, evaluation of all pairwise comparisons of the <span class="math inline">\(\Delta_{kl}(x)\)</span> indicates which treatment leads to the largest expected outcome. The weighting estimators of the benefit scores are the minimizers of the following loss function: <span class="math display">\[\begin{equation} \label{eqn:weighting_mult}
L_W(f_1, \dots, f_{K}) = \frac{1}{n}\sum_{i = 1}^n\frac{M(Y_i, \sum_{k = 1}^{K}I(T_{i} = k) f_k(x_i) ) }{ Pr(T = T_i | X = x_i) },
\end{equation}\]</span> subject to <span class="math inline">\(\sum_{k = 1}^{K}f_k(x_i) = 0\)</span>. When <span class="math inline">\(K = 2\)</span>, this loss function is equivalent to the two-treatment weighting loss <span class="math inline">\(L_W(f)\)</span> above: with two treatments, <span class="math inline">\(Pr(T = T_i | X = x_i)\)</span> reduces to <span class="math inline">\(T_i\pi(x_i)+(1-T_i)/2\)</span>, and the constraint forces the two benefit scores to be negatives of each other, recovering <span class="math inline">\(T_i\times f(x_i)\)</span> inside <span class="math inline">\(M\)</span>.</p>
<p>Estimation of the benefit scores in this model is still challenging without added modeling assumptions, as enforcing <span class="math inline">\(\sum_{k = 1}^{K}f_k(x_i) = 0\)</span> may not always be feasible using existing estimation routines. However, if each <span class="math inline">\(\Delta_{kl}(X)\)</span> has a linear form, i.e. <span class="math inline">\(\Delta_{kl}(X) = X^\top\boldsymbol \beta_k\)</span> where <span class="math inline">\(l\)</span> represents a reference treatment group, estimation fits easily into the same computational framework as for the simpler two-treatment case by constructing an appropriate design matrix. Thus, for multiple treatments the <code>personalized</code> package is restricted to linear estimators of the benefit scores. For instructive purposes, consider a scenario with three treatment options, <span class="math inline">\(A\)</span>, <span class="math inline">\(B\)</span>, and <span class="math inline">\(C\)</span>. Let <span class="math inline">\(\boldsymbol X = ({\boldsymbol X}_A^\top, {\boldsymbol X}_B^\top, {\boldsymbol X}_C^\top )^\top\)</span> be the design matrix for all patients, where each <span class="math inline">\({\boldsymbol X}_k\)</span> is the sub-design matrix of patients who received treatment <span class="math inline">\(k\)</span>. Under <span class="math inline">\(\Delta_{kl}(X) = X^\top\boldsymbol \beta_k\)</span> with <span class="math inline">\(l\)</span> as the reference treatment, we can construct a new design matrix which can then be provided to existing estimation routines in order to minimize the multi-treatment weighting loss above. With treatment <span class="math inline">\(C\)</span> as the reference treatment, the design matrix is constructed as <span class="math display">\[
\widetilde{{\boldsymbol X}} = \mbox{diag}(\boldsymbol J)\begin{pmatrix}
{\boldsymbol X}_A &amp; \boldsymbol 0 \\
\boldsymbol 0 &amp; {\boldsymbol X}_B \\
{\boldsymbol X}_C &amp; {\boldsymbol X}_C
\end{pmatrix},
\]</span> where the <span class="math inline">\(i\)</span>th element of <span class="math inline">\(\boldsymbol J\)</span> is <span class="math inline">\(2I(T_i \neq C) - 1\)</span> and the weight vector <span class="math inline">\(\boldsymbol W\)</span> is constructed with the <span class="math inline">\(i\)</span>th element set to <span class="math inline">\(1 / Pr(T = T_i | X = {x}_i)\)</span>. Furthermore, denote <span class="math inline">\(\widetilde{\boldsymbol \beta} = (\boldsymbol \beta_A^\top, \boldsymbol \beta_B^\top)^\top\)</span>. Then <span class="math inline">\(\widetilde{{\boldsymbol X}}\widetilde{\boldsymbol \beta}\)</span> yields the linear predictor <span class="math inline">\({x}^\top\boldsymbol \beta_A\)</span> for patients receiving <span class="math inline">\(A\)</span>, <span class="math inline">\({x}^\top\boldsymbol \beta_B\)</span> for patients receiving <span class="math inline">\(B\)</span>, and <span class="math inline">\(-{x}^\top(\boldsymbol \beta_A + \boldsymbol \beta_B)\)</span> for patients receiving <span class="math inline">\(C\)</span>, and thus the sum-to-zero constraint on the benefit scores holds by construction.</p>
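<p>The construction above can be sketched directly in R (for illustration only; <code>fit.subgroup()</code> performs the equivalent construction internally). Here <code>x.A</code>, <code>x.B</code>, and <code>x.C</code> are assumed to be the covariate sub-matrices of patients receiving each treatment, with treatment <span class="math inline">\(C\)</span> as the reference:</p>
<pre class="r"><code># build the stacked design matrix with treatment C as the reference;
# rows of C patients are negated via J_i = 2 * I(T_i != C) - 1
build.design &lt;- function(x.A, x.B, x.C) {
  p &lt;- ncol(x.A)
  x.tilde &lt;- rbind(cbind(x.A, matrix(0, nrow(x.A), p)),
                   cbind(matrix(0, nrow(x.B), p), x.B),
                   cbind(x.C, x.C))
  J &lt;- c(rep(1, nrow(x.A) + nrow(x.B)), rep(-1, nrow(x.C)))
  J * x.tilde  # equivalent to diag(J) %*% x.tilde
}</code></pre>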
</div>
</div>
<div id="quick-usage-reference" class="section level1">
<h1><span class="header-section-number">2</span> Quick Usage Reference</h1>
<p>First, we simulate some data for which we know the truth. In this simulation, the treatment assignment depends on covariates, and hence we must model the propensity score <span class="math inline">\(\pi(x) = Pr(T = 1 | X = x)\)</span>. We will assume that larger values of the outcome are better.</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb1-1" data-line-number="1"><span class="kw">library</span>(personalized)</a>
<a class="sourceLine" id="cb1-2" data-line-number="2"></a>
<a class="sourceLine" id="cb1-3" data-line-number="3"><span class="kw">set.seed</span>(<span class="dv">123</span>)</a>
<a class="sourceLine" id="cb1-4" data-line-number="4">n.obs  &lt;-<span class="st"> </span><span class="dv">1000</span></a>
<a class="sourceLine" id="cb1-5" data-line-number="5">n.vars &lt;-<span class="st"> </span><span class="dv">50</span></a>
<a class="sourceLine" id="cb1-6" data-line-number="6">x &lt;-<span class="st"> </span><span class="kw">matrix</span>(<span class="kw">rnorm</span>(n.obs <span class="op">*</span><span class="st"> </span>n.vars, <span class="dt">sd =</span> <span class="dv">3</span>), n.obs, n.vars)</a>
<a class="sourceLine" id="cb1-7" data-line-number="7"></a>
<a class="sourceLine" id="cb1-8" data-line-number="8"><span class="co"># simulate non-randomized treatment</span></a>
<a class="sourceLine" id="cb1-9" data-line-number="9">xbetat   &lt;-<span class="st"> </span><span class="fl">0.5</span> <span class="op">+</span><span class="st"> </span><span class="fl">0.25</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">21</span>] <span class="op">-</span><span class="st"> </span><span class="fl">0.25</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">41</span>]</a>
<a class="sourceLine" id="cb1-10" data-line-number="10">trt.prob &lt;-<span class="st"> </span><span class="kw">exp</span>(xbetat) <span class="op">/</span><span class="st"> </span>(<span class="dv">1</span> <span class="op">+</span><span class="st"> </span><span class="kw">exp</span>(xbetat))</a>
<a class="sourceLine" id="cb1-11" data-line-number="11">trt      &lt;-<span class="st"> </span><span class="kw">rbinom</span>(n.obs, <span class="dv">1</span>, <span class="dt">prob =</span> trt.prob)</a>
<a class="sourceLine" id="cb1-12" data-line-number="12"></a>
<a class="sourceLine" id="cb1-13" data-line-number="13"><span class="co"># simulate delta</span></a>
<a class="sourceLine" id="cb1-14" data-line-number="14">delta &lt;-<span class="st"> </span>(<span class="fl">0.5</span> <span class="op">+</span><span class="st"> </span>x[,<span class="dv">2</span>] <span class="op">-</span><span class="st"> </span><span class="fl">0.5</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">3</span>] <span class="op">-</span><span class="st"> </span><span class="dv">1</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">11</span>] <span class="op">+</span><span class="st"> </span><span class="dv">1</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">1</span>] <span class="op">*</span><span class="st"> </span>x[,<span class="dv">12</span>] )</a>
<a class="sourceLine" id="cb1-15" data-line-number="15"></a>
<a class="sourceLine" id="cb1-16" data-line-number="16"><span class="co"># simulate main effects g(X)</span></a>
<a class="sourceLine" id="cb1-17" data-line-number="17">xbeta &lt;-<span class="st"> </span>x[,<span class="dv">1</span>] <span class="op">+</span><span class="st"> </span>x[,<span class="dv">11</span>] <span class="op">-</span><span class="st"> </span><span class="dv">2</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">12</span>]<span class="op">^</span><span class="dv">2</span> <span class="op">+</span><span class="st"> </span>x[,<span class="dv">13</span>] <span class="op">+</span><span class="st"> </span><span class="fl">0.5</span> <span class="op">*</span><span class="st"> </span>x[,<span class="dv">15</span>] <span class="op">^</span><span class="st"> </span><span class="dv">2</span></a>
<a class="sourceLine" id="cb1-18" data-line-number="18">xbeta &lt;-<span class="st"> </span>xbeta <span class="op">+</span><span class="st"> </span>delta <span class="op">*</span><span class="st"> </span>(<span class="dv">2</span> <span class="op">*</span><span class="st"> </span>trt <span class="op">-</span><span class="st"> </span><span class="dv">1</span>)</a>
<a class="sourceLine" id="cb1-19" data-line-number="19"></a>
<a class="sourceLine" id="cb1-20" data-line-number="20"><span class="co"># simulate continuous outcomes</span></a>
<a class="sourceLine" id="cb1-21" data-line-number="21">y &lt;-<span class="st"> </span><span class="kw">drop</span>(xbeta) <span class="op">+</span><span class="st"> </span><span class="kw">rnorm</span>(n.obs)</a></code></pre></div>
<div id="creating-and-checking-propensity-score-model" class="section level2">
<h2><span class="header-section-number">2.1</span> Creating and Checking Propensity Score Model</h2>
<p>The first step in our analysis is to construct a model for the propensity score. In the <code>personalized</code> package, we need to wrap this model in a function that takes covariate values and treatment statuses as inputs and returns a propensity score between 0 and 1 for each patient. Since there are many covariates, we use the lasso to select variables in our propensity score model:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb2-1" data-line-number="1"><span class="co"># create function for fitting propensity score model</span></a>
<a class="sourceLine" id="cb2-2" data-line-number="2">prop.func &lt;-<span class="st"> </span><span class="cf">function</span>(x, trt)</a>
<a class="sourceLine" id="cb2-3" data-line-number="3">{</a>
<a class="sourceLine" id="cb2-4" data-line-number="4"> <span class="co"># fit propensity score model</span></a>
<a class="sourceLine" id="cb2-5" data-line-number="5"> propens.model &lt;-<span class="st"> </span><span class="kw">cv.glmnet</span>(<span class="dt">y =</span> trt,</a>
<a class="sourceLine" id="cb2-6" data-line-number="6">                            <span class="dt">x =</span> x, </a>
<a class="sourceLine" id="cb2-7" data-line-number="7">                            <span class="dt">family =</span> <span class="st">&quot;binomial&quot;</span>)</a>
<a class="sourceLine" id="cb2-8" data-line-number="8"> pi.x &lt;-<span class="st"> </span><span class="kw">predict</span>(propens.model, <span class="dt">s =</span> <span class="st">&quot;lambda.min&quot;</span>,</a>
<a class="sourceLine" id="cb2-9" data-line-number="9">                 <span class="dt">newx =</span> x, <span class="dt">type =</span> <span class="st">&quot;response&quot;</span>)[,<span class="dv">1</span>]</a>
<a class="sourceLine" id="cb2-10" data-line-number="10"> pi.x</a>
<a class="sourceLine" id="cb2-11" data-line-number="11">}</a></code></pre></div>
<p>We then need to make sure the propensity scores have sufficient overlap between treatment groups. We can do this with the <code>check.overlap()</code> function, which plots densities or histograms of the propensity scores for each of the treatment groups:</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb3-1" data-line-number="1"><span class="kw">check.overlap</span>(x, trt, prop.func)</a></code></pre></div>
<p>We can see that the propensity scores mostly have common support, except for a small region near 0 where there are no propensity scores in the treatment arm.</p>
</div>
<div id="fitting-subgroup-identification-models" class="section level2">
<h2><span class="header-section-number">2.2</span> Fitting Subgroup Identification Models</h2>
<p>The next step is to choose and fit a subgroup identification model. In this example, the outcome is continuous, so we choose the squared error loss function. We also choose the model type (either the weighting or the A-learning method). The main function for fitting subgroup identification models is <code>fit.subgroup()</code>. Since there are many covariates, we choose a loss function with a lasso penalty to select variables. The underlying fitting function here is <code>cv.glmnet()</code>, so we can pass its arguments, such as <code>nfolds</code> (the number of cross-validation folds), directly to <code>fit.subgroup()</code>.</p>
<div class="sourceCode" id="cb4"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb4-1" data-line-number="1">subgrp.model &lt;-<span class="st"> </span><span class="kw">fit.subgroup</span>(<span class="dt">x =</span> x, <span class="dt">y =</span> y,</a>
<a class="sourceLine" id="cb4-2" data-line-number="2">                             <span class="dt">trt =</span> trt,</a>
<a class="sourceLine" id="cb4-3" data-line-number="3">                             <span class="dt">propensity.func =</span> prop.func,</a>
<a class="sourceLine" id="cb4-4" data-line-number="4">                             <span class="dt">loss   =</span> <span class="st">&quot;sq_loss_lasso&quot;</span>,</a>
<a class="sourceLine" id="cb4-5" data-line-number="5">                             <span class="dt">nfolds =</span> <span class="dv">10</span>)              <span class="co"># option for cv.glmnet</span></a>
<a class="sourceLine" id="cb4-6" data-line-number="6"></a>
<a class="sourceLine" id="cb4-7" data-line-number="7"><span class="kw">summary</span>(subgrp.model)</a></code></pre></div>
<pre><code>## family:    gaussian 
## loss:      sq_loss_lasso 
## method:    weighting 
## cutpoint:  0 
## propensity 
## function:  propensity.func 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Outcomes:
##                 Recommended 0      Recommended 1
## Received 0  -9.9055 (n = 190) -16.5132 (n = 220)
## Received 1 -18.5664 (n = 272)  -7.6255 (n = 318)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##                           8.6609 (n = 462) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##                           8.8877 (n = 538) 
## 
## NOTE: The above average outcomes are biased estimates of
##       the expected outcomes conditional on subgroups. 
##       Use 'validate.subgroup()' to obtain unbiased estimates.
## 
## ---------------------------------------------------
## 
## Benefit score quantiles (f(X) for 1 vs 0): 
##       0%      25%      50%      75%     100% 
## -10.9672  -1.8897   0.3569   2.2806  10.0196 
## 
## ---------------------------------------------------
## 
## Summary of individual treatment effects: 
## E[Y|T=1, X] - E[Y|T=0, X]
## 
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -21.9345  -3.7794   0.7138   0.4061   4.5611  20.0392 
## 
## ---------------------------------------------------
## 
## 8 out of 50 interactions selected in total by the lasso (cross validation criterion).
## 
## The first estimate is the treatment main effect, which is always selected. 
## Any other variables selected represent treatment-covariate interactions.
## 
##            Trt1     V1     V2     V3     V6     V11    V13    V17     V37
## Estimate 0.2231 0.0577 0.6558 -0.455 -0.107 -0.4036 0.3108 0.2491 -0.1768</code></pre>
<p>We can then plot the outcomes of patients in the different subgroups:</p>
<div class="sourceCode" id="cb6"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb6-1" data-line-number="1"><span class="kw">plot</span>(subgrp.model)</a></code></pre></div>
<p>Alternatively, we can create an interaction plot. This plot represents the average outcome within each subgroup broken down by treatment status. If the lines in the interaction plots cross, that indicates there is a subgroup treatment effect.</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb7-1" data-line-number="1"><span class="kw">plot</span>(subgrp.model, <span class="dt">type =</span> <span class="st">&quot;interaction&quot;</span>)</a></code></pre></div>
</div>
<div id="evaluating-treatment-effects-within-estimated-subgroups" class="section level2">
<h2><span class="header-section-number">2.3</span> Evaluating Treatment Effects within Estimated Subgroups</h2>
<p>Unfortunately, simply looking at the average outcome within each subgroup yields a biased estimate of the treatment effects within each subgroup, because we have already used the data to estimate the subgroups. To obtain valid estimates of the subgroup treatment effects, we can use a bootstrap approach to correct for this bias. Alternatively, we can repeatedly partition our data into training and testing samples; in each replication we fit a subgroup model using the training data and then evaluate the subgroup treatment effects on the testing data. The argument <code>B</code> specifies the number of replications, and the argument <code>train.fraction</code> specifies the proportion of samples used for training under the training and testing partitioning method.</p>
<p>Both of these approaches can be carried out using the <code>validate.subgroup()</code> function.</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb8-1" data-line-number="1">validation &lt;-<span class="st"> </span><span class="kw">validate.subgroup</span>(subgrp.model, </a>
<a class="sourceLine" id="cb8-2" data-line-number="2">                                <span class="dt">B =</span> 25L,  <span class="co"># specify the number of replications</span></a>
<a class="sourceLine" id="cb8-3" data-line-number="3">                                <span class="dt">method =</span> <span class="st">&quot;training_test_replication&quot;</span>,</a>
<a class="sourceLine" id="cb8-4" data-line-number="4">                                <span class="dt">train.fraction =</span> <span class="fl">0.75</span>)</a>
<a class="sourceLine" id="cb8-5" data-line-number="5"></a>
<a class="sourceLine" id="cb8-6" data-line-number="6">validation</a></code></pre></div>
<pre><code>## family:  gaussian 
## loss:    sq_loss_lasso 
## method:  weighting 
## 
## validation method:  training_test_replication 
## cutpoint:           0 
## replications:       25 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Test Set Outcomes:
##                                Recommended 0
## Received 0 -12.4224 (SE = 4.1278, n = 49.92)
## Received 1 -15.4275 (SE = 2.7319, n = 74.44)
##                              Recommended 1
## Received 0 -14.5556 (SE = 2.58, n = 50.44)
## Received 1 -9.3676 (SE = 3.4986, n = 75.2)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##            3.005 (SE = 4.6473, n = 124.36) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##           5.1881 (SE = 4.5859, n = 125.64) 
## 
## Est of 
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:                    
## 3.442 (SE = 3.3063)</code></pre>
<p>We can then plot the average outcomes averaged over all replications of the training and testing partition procedure:</p>
<div class="sourceCode" id="cb10"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb10-1" data-line-number="1"><span class="kw">plot</span>(validation)</a></code></pre></div>
<p>From the above plot we can evaluate the impact of the subgroups. Among patients for whom the model recommends the control, those who instead receive the treatment are worse off on average than those who receive the control. Similarly, among patients who are recommended the treatment, those who receive the treatment are better off on average than those who do not.</p>
<p>Similarly, we can create an interaction plot of either the bootstrap bias-corrected means within the different subgroups or the average test set means within subgroups. Here, lines crossing is an indicator of differential treatment effect between the subgroups.</p>
<div class="sourceCode" id="cb11"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb11-1" data-line-number="1"><span class="kw">plot</span>(validation, <span class="dt">type =</span> <span class="st">&quot;interaction&quot;</span>)</a></code></pre></div>
<p>We can also compare the validation results with the results on the observed data:</p>
<div class="sourceCode" id="cb12"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb12-1" data-line-number="1"><span class="kw">plotCompare</span>(subgrp.model, validation, <span class="dt">type =</span> <span class="st">&quot;interaction&quot;</span>)</a></code></pre></div>
<p>Note that the estimated treatment effects within subgroups are attenuated in the validated results. Estimates of treatment effects within subgroups based on the training data alone are typically overly optimistic.</p>
</div>
</div>
<div id="user-guide" class="section level1">
<h1><span class="header-section-number">3</span> User Guide</h1>
<div id="overview" class="section level2">
<h2><span class="header-section-number">3.1</span> Overview</h2>
<p>In this user guide we will provide more detailed information about the entire subgroup identification modeling process in the <code>personalized</code> package. Specifically, we will explore more thoroughly the four steps outlined in the introduction section.</p>
</div>
<div id="creating-and-checking-a-propensity-score-model" class="section level2">
<h2><span class="header-section-number">3.2</span> Creating and Checking a Propensity Score Model</h2>
<p>The propensity score, <span class="math inline">\(\pi(x) = Pr(T = 1 | X = x)\)</span> is a crucial component of the subgroup identification models in the <code>personalized</code> package, especially for the analysis of data that comes from an observational study.</p>
<div id="observational-studies" class="section level3">
<h3><span class="header-section-number">3.2.1</span> Observational Studies</h3>
<p>For data from observational studies, the user must construct a model for the propensity score. Typically this is done using a logistic regression model with</p>
<p><span class="math display">\[ \mbox{logit}(\pi(X)) = \mbox{logit} Pr(T = 1 | X) = X^T\beta.\]</span> When this model is not appropriate, users may use a more flexible model, or utilize variable selection techniques if there are a large number of covariates. More details on how this is implemented are documented within the <code>fit.subgroup()</code> documentation below.</p>
</div>
<div id="randomized-controlled-trials" class="section level3">
<h3><span class="header-section-number">3.2.2</span> Randomized Controlled Trials</h3>
<p>For data from RCTs, users can simply use a constant function for the propensity score. For example, if patients were randomized to the treatment and control groups with equal probability, we know that <span class="math inline">\(\pi(x) = 1/2\)</span>.</p>
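<p>As a sketch, such a propensity function under 1:1 randomization simply returns the constant 1/2; under a hypothetical 2:1 randomization of treatment to control it would instead return the constant 2/3 (the function names here are illustrative):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># 1:1 randomization: pi(x) = 1/2 for every patient
propensity.func.rct &lt;- function(x, trt) 0.5

# hypothetical 2:1 randomization (treatment:control): pi(x) = 2/3
propensity.func.2to1 &lt;- function(x, trt) 2/3</code></pre></div>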
</div>
</div>
<div id="fitting-subgroup-identification-models-1" class="section level2">
<h2><span class="header-section-number">3.3</span> Fitting Subgroup Identification Models</h2>
<div id="overview-1" class="section level3">
<h3><span class="header-section-number">3.3.1</span> Overview</h3>
<p>The core component of the <code>personalized</code> package is in fitting subgroup identification models with the <code>fit.subgroup()</code> function. This function provides fitting capabilities for many different outcomes, choices of loss function, choice of underlying model for <span class="math inline">\(\Delta(X)\)</span>, and model class (either the weighting method or A-learning).</p>
</div>
<div id="explanation-of-major-function-arguments" class="section level3">
<h3><span class="header-section-number">3.3.2</span> Explanation of Major Function Arguments</h3>
<div id="x" class="section level4">
<h4><span class="header-section-number">3.3.2.1</span> <code>x</code></h4>
<p>The argument <code>x</code> is for the design matrix. Each column of <code>x</code> corresponds to a variable to be used in the model for <span class="math inline">\(\Delta(X)\)</span> and each row of <code>x</code> corresponds to an observation. Every variable in <code>x</code> will be used for the subgroup identification model (however some variables may be removed if a variable selection procedure is specified for <code>loss</code>).</p>
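<p>If some covariates are stored as factors, they must first be expanded into a numeric matrix. One common way to do this, sketched here with a hypothetical data frame <code>dat</code>, is <code>model.matrix()</code> with the intercept column removed:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># hypothetical data frame with a numeric and a factor covariate
dat &lt;- data.frame(age = rnorm(100, 50, 10),
                  sex = factor(sample(c(&quot;M&quot;, &quot;F&quot;), 100, replace = TRUE)))

# expand the factor into indicator columns and drop the intercept
x.mat &lt;- model.matrix(~ . - 1, data = dat)</code></pre></div>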
</div>
<div id="y" class="section level4">
<h4><span class="header-section-number">3.3.2.2</span> <code>y</code></h4>
<p>The argument <code>y</code> is for the response vector. Each element in <code>y</code> is a patient observation. In the case of time-to-event outcomes, <code>y</code> should be specified as a <code>Surv</code> object. For example, the user should specify <code>y = Surv(time, status)</code>, where <code>time</code> is the observed time and <code>status</code> is the event indicator (equal to 1 if the observed time is the survival time and 0 if the observation is censored).</p>
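<p>For illustration, a <code>Surv</code> outcome could be constructed from simulated survival and censoring times as follows (a sketch; the variable names are hypothetical):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">library(survival)

# hypothetical survival times and independent censoring times
surv.time &lt;- rexp(n.obs)
cens.time &lt;- rexp(n.obs)

obs.time &lt;- pmin(surv.time, cens.time)     # observed time
status   &lt;- 1 * (surv.time &lt;= cens.time)   # 1 = event observed, 0 = censored

y.surv &lt;- Surv(obs.time, status)</code></pre></div>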
</div>
<div id="trt" class="section level4">
<h4><span class="header-section-number">3.3.2.3</span> <code>trt</code></h4>
<p>The argument <code>trt</code> corresponds to the vector of observed treatment statuses. Each element in <code>trt</code> should be either the integer 1 or the integer 0, where 1 in the <span class="math inline">\(i\)</span>th position means patient <span class="math inline">\(i\)</span> received the treatment and 0 in the <span class="math inline">\(i\)</span>th position indicates patient <span class="math inline">\(i\)</span> did not receive the treatment.</p>
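<p>If the treatment variable is recorded as a character or factor vector, it should first be recoded to this 0/1 integer coding, e.g. (a sketch with hypothetical labels):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># hypothetical treatment labels
trt.label &lt;- sample(c(&quot;control&quot;, &quot;treatment&quot;), n.obs, replace = TRUE)

# recode to the 0/1 integer coding expected by fit.subgroup()
trt01 &lt;- 1L * (trt.label == &quot;treatment&quot;)</code></pre></div>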
</div>
<div id="propensity.func" class="section level4">
<h4><span class="header-section-number">3.3.2.4</span> <code>propensity.func</code></h4>
<p>The argument <code>propensity.func</code> corresponds to a function which returns a propensity score. While it may seem cumbersome to specify a function instead of a vector of probabilities, this is crucial for later validation: the propensity scores must be re-estimated on each resampled or sampled dataset (this is explained further in the section below on the <code>validate.subgroup()</code> function). The user should specify a function with two arguments, <code>trt</code> and <code>x</code>, corresponding to the <code>trt</code> and <code>x</code> arguments of the <code>fit.subgroup()</code> function. The function supplied to <code>propensity.func</code> should use <code>x</code> and <code>trt</code> to fit a propensity score model and then return an estimated propensity score for each observation in <code>x</code>. A basic example which uses a logistic regression model to estimate the propensity score is the following:</p>
<div class="sourceCode" id="cb13"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb13-1" data-line-number="1">propensity.func &lt;-<span class="st"> </span><span class="cf">function</span>(x, trt)</a>
<a class="sourceLine" id="cb13-2" data-line-number="2">{</a>
<a class="sourceLine" id="cb13-3" data-line-number="3">    <span class="co"># save data in a data.frame</span></a>
<a class="sourceLine" id="cb13-4" data-line-number="4">    data.fr &lt;-<span class="st"> </span><span class="kw">data.frame</span>(<span class="dt">trt =</span> trt, x)</a>
<a class="sourceLine" id="cb13-5" data-line-number="5">    </a>
<a class="sourceLine" id="cb13-6" data-line-number="6">    <span class="co"># fit propensity score model</span></a>
<a class="sourceLine" id="cb13-7" data-line-number="7">    propensity.model &lt;-<span class="st"> </span><span class="kw">glm</span>(trt <span class="op">~</span><span class="st"> </span>., <span class="dt">family =</span> <span class="kw">binomial</span>(), <span class="dt">data =</span> data.fr)</a>
<a class="sourceLine" id="cb13-8" data-line-number="8">    </a>
<a class="sourceLine" id="cb13-9" data-line-number="9">    <span class="co"># create estimated probabilities</span></a>
<a class="sourceLine" id="cb13-10" data-line-number="10">    pi.x &lt;-<span class="st"> </span><span class="kw">predict</span>(propensity.model, <span class="dt">type =</span> <span class="st">&quot;response&quot;</span>)</a>
<a class="sourceLine" id="cb13-11" data-line-number="11">    <span class="kw">return</span>(pi.x)</a>
<a class="sourceLine" id="cb13-12" data-line-number="12">}</a>
<a class="sourceLine" id="cb13-13" data-line-number="13"></a>
<a class="sourceLine" id="cb13-14" data-line-number="14"><span class="kw">propensity.func</span>(x, trt)[<span class="dv">101</span><span class="op">:</span><span class="dv">105</span>]</a></code></pre></div>
<pre><code>##       101       102       103       104       105 
## 0.2251357 0.2786683 0.9021204 0.4400091 0.8250830</code></pre>
<div class="sourceCode" id="cb15"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb15-1" data-line-number="1">trt[<span class="dv">101</span><span class="op">:</span><span class="dv">105</span>]</a></code></pre></div>
<pre><code>## [1] 0 0 1 1 1</code></pre>
<p>For randomized controlled trials with equal probability of assignment to treatment and control, the user can simply define <code>propensity.func</code> as:</p>
<div class="sourceCode" id="cb17"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb17-1" data-line-number="1">propensity.func &lt;-<span class="st"> </span><span class="cf">function</span>(x, trt) <span class="fl">0.5</span></a></code></pre></div>
<p>which always returns the constant <span class="math inline">\(1/2\)</span>.</p>
</div>
<div id="loss" class="section level4">
<h4><span class="header-section-number">3.3.2.5</span> <code>loss</code></h4>
<p>The <code>loss</code> argument specifies the combination of <span class="math inline">\(M\)</span> function (i.e. loss function) and underlying model for <span class="math inline">\(f(X)\)</span>, the form of the estimator of <span class="math inline">\(\Delta(X)\)</span>. The name of each possible value for <code>loss</code> has two parts:</p>
<ol style="list-style-type: decimal">
<li>The first part, which corresponds to the <span class="math inline">\(M\)</span> function</li>
<li>The second part, which corresponds to the form of <span class="math inline">\(f(X)\)</span> and whether variable selection via the lasso is used</li>
</ol>
<p>An example is <code>sq_loss_lasso</code>, which corresponds to using <span class="math inline">\(M(y, v) = (y - v) ^ 2\)</span>, a linear form of <span class="math inline">\(f\)</span>, i.e. <span class="math inline">\(f(X) = X^T\beta\)</span>, and an additional penalty term <span class="math inline">\(\sum_{j = 1}^p|\beta_j|\)</span> added to the loss function for variable selection. Other forms of <span class="math inline">\(M\)</span> include <code>logistic_loss</code>, the negative log-likelihood for a logistic regression model; <code>cox_loss</code>, the negative log-likelihood for the Cox proportional hazards model; <code>abs_loss</code>, for <span class="math inline">\(M(y, v) = |y - v|\)</span>; and <code>huberized_loss</code>, a huberized hinge loss <span class="math inline">\(M(y, v) = (1 - yv) ^ 2/(2\delta)I(1 - \delta &lt; yv \leq 1) + (1 - yv - \delta/2)I(yv \leq 1 - \delta)\)</span> for binary outcomes.</p>
<p>All options containing <code>lasso</code> in the name use the <code>cv.glmnet()</code> function of the <code>glmnet</code> package for the underlying model fitting and variable selection. Please see the documentation of <code>cv.glmnet()</code> for information about other arguments which can be passed to it.</p>
<p>Any options for <code>loss</code> which end with <code>lasso_gam</code> have a two-stage model. Variables are selected using a linear or generalized linear model in the first stage and then the selected variables are used in a generalized additive model in the second stage. Univariate nonparametric smoother terms are used in the second stage for all continuous variables. Binary variables are used as linear terms in the model. All <code>loss</code> options containing <code>gam</code> in the name use the <code>gam()</code> function of the <code>R</code> package <code>mgcv</code>. Please see the documentation of <code>gam()</code> for information about other arguments which can be passed to it.</p>
<p>All options that end in <code>gbm</code> use a gradient-boosted decision tree model for <span class="math inline">\(f(X)\)</span>. These machine learning models, which are essentially sums of many decision trees, can provide more flexible estimation. However, the result is a “black box” model which may be challenging or impossible to interpret. The <code>gbm</code>-based models are fit using the <code>gbm</code> <code>R</code> package; please see the documentation for the <code>gbm</code> function of the <code>gbm</code> package for more details on the possible arguments. Tuning the hyperparameters <code>shrinkage</code>, <code>n.trees</code>, and <code>interaction.depth</code> is crucial for a successful gradient-boosting model; these arguments can be passed to the <code>fit.subgroup()</code> function. By default, when <code>gbm</code>-based models are used, a plot of the cross-validation error versus the number of trees is displayed. If the error is still decreasing significantly at the maximum number of trees, it is recommended to increase the number of trees (<code>n.trees</code>), increase the maximum tree depth (<code>interaction.depth</code>), or increase the step size of the algorithm (<code>shrinkage</code>).</p>
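<p>As a sketch, these <code>gbm</code> hyperparameters can be passed directly to <code>fit.subgroup()</code>; the values below are purely illustrative and should be tuned for the data at hand:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">subgrp.gbm &lt;- fit.subgroup(x = x, y = y,
                           trt = trt,
                           propensity.func = prop.func,
                           loss = &quot;sq_loss_gbm&quot;,
                           # gbm hyperparameters (illustrative values only)
                           shrinkage         = 0.01,
                           n.trees           = 2000,
                           interaction.depth = 3)</code></pre></div>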
</div>
<div id="method" class="section level4">
<h4><span class="header-section-number">3.3.2.6</span> <code>method</code></h4>
<p>The <code>method</code> argument is used to specify whether the weighting or A-learning model is used. Specify <code>'weighting'</code> for the weighting method and specify <code>'a_learning'</code> for the A-learning method.</p>
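<p>For example, to fit the same lasso model as before using the A-learning method instead of the default weighting method:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">subgrp.alearn &lt;- fit.subgroup(x = x, y = y,
                              trt = trt,
                              propensity.func = prop.func,
                              loss   = &quot;sq_loss_lasso&quot;,
                              method = &quot;a_learning&quot;)</code></pre></div>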
</div>
<div id="larger.outcome.better" class="section level4">
<h4><span class="header-section-number">3.3.2.7</span> <code>larger.outcome.better</code></h4>
<p>The argument <code>larger.outcome.better</code> is a boolean variable indicating whether larger values of the outcome are better or preferred. If <code>larger.outcome.better = TRUE</code>, then <code>fit.subgroup()</code> will seek to estimate subgroups in a way that maximizes the population average outcome and if <code>larger.outcome.better = FALSE</code>, <code>fit.subgroup()</code> will seek to minimize the population average outcome.</p>
</div>
<div id="cutpoint" class="section level4">
<h4><span class="header-section-number">3.3.2.8</span> <code>cutpoint</code></h4>
<p>The cutpoint is the value of the benefit score (i.e. <span class="math inline">\(f(X)\)</span>) above which patients will be recommended the treatment. In other words, for outcomes where larger values are better and a cutpoint with value <span class="math inline">\(c\)</span>, a patient with covariate values <span class="math inline">\(X = x\)</span> will be recommended the treatment instead of the control if <span class="math inline">\(f(x) &gt; c\)</span>. If lower values of the outcome are better, <span class="math inline">\(c\)</span> is instead the value below which patients will be recommended the treatment (i.e. a patient will be recommended the treatment if <span class="math inline">\(f(x) &lt; c\)</span>). By default, the cutpoint is the population-average optimal value of 0; however, users may wish to increase this value if there are limited resources for treatment allocation.</p>
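<p>For instance, to recommend the treatment only to patients with a large estimated benefit, a higher cutpoint can be supplied (the value 1 here is purely illustrative):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">subgrp.cut &lt;- fit.subgroup(x = x, y = y,
                           trt = trt,
                           propensity.func = prop.func,
                           loss     = &quot;sq_loss_lasso&quot;,
                           cutpoint = 1)   # illustrative value</code></pre></div>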
</div>
<div id="retcall" class="section level4">
<h4><span class="header-section-number">3.3.2.9</span> <code>retcall</code></h4>
<p>The argument <code>retcall</code> is a boolean variable which indicates whether to return the arguments passed to <code>fit.subgroup()</code>. It must be set to <code>TRUE</code> if the user wishes to later validate the fitted model object using the <code>validate.subgroup()</code> function. This is necessary because the design matrix <code>x</code>, response <code>y</code>, and treatment vector <code>trt</code> must be resampled in either the bootstrap procedure or the training and testing resampling procedure of <code>validate.subgroup()</code>. The only time <code>retcall</code> should be set to <code>FALSE</code> is when the design matrix is too large to be stored in the fitted model object.</p>
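<p>A sketch of a fit with <code>retcall = FALSE</code>, advisable only when <code>x</code> is too large to store (the resulting object cannot later be passed to <code>validate.subgroup()</code>):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">subgrp.nocall &lt;- fit.subgroup(x = x, y = y,
                              trt = trt,
                              propensity.func = prop.func,
                              loss    = &quot;sq_loss_lasso&quot;,
                              retcall = FALSE)</code></pre></div>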
</div>
<div id="section" class="section level4">
<h4><span class="header-section-number">3.3.2.10</span> <code>...</code></h4>
<p>The argument <code>...</code> is used to pass arguments to the underlying modeling functions. For example, if the lasso is specified in the <code>loss</code> argument, <code>...</code> passes arguments to the <code>cv.glmnet()</code> function from the <code>glmnet</code> <code>R</code> package. If <code>gam</code> is present in the name of the <code>loss</code> argument, the underlying model is fit using the <code>gam()</code> function of <code>mgcv</code>, so arguments to <code>gam()</code> can be passed using <code>...</code>. The one subtlety involves <code>gam()</code>: it also has an argument named <code>method</code>, which conflicts with the <code>method</code> argument of <code>fit.subgroup()</code>. To change the <code>method</code> argument of <code>gam()</code>, the user should instead supply <code>method.gam</code>, which will be passed to <code>gam()</code> as its <code>method</code> argument.</p>
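<p>For example, assuming one wishes <code>gam()</code> to use REML smoothness selection, the value can be supplied via <code>method.gam</code> (a sketch):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">subgrp.gam &lt;- fit.subgroup(x = x, y = y,
                           trt = trt,
                           propensity.func = prop.func,
                           loss       = &quot;sq_loss_gam&quot;,
                           method.gam = &quot;REML&quot;)   # forwarded to gam() as 'method'</code></pre></div>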
</div>
</div>
<div id="continuous-outcomes" class="section level3">
<h3><span class="header-section-number">3.3.3</span> Continuous Outcomes</h3>
<p>The <code>loss</code> argument options that are available for continuous outcomes are:</p>
<ul>
<li><code>'sq_loss_lasso'</code></li>
<li><code>'owl_logistic_loss_lasso'</code></li>
<li><code>'owl_logistic_flip_loss_lasso'</code></li>
<li><code>'owl_hinge_loss'</code></li>
<li><code>'owl_hinge_flip_loss'</code></li>
<li><code>'sq_loss_lasso_gam'</code></li>
<li><code>'owl_logistic_loss_lasso_gam'</code></li>
<li><code>'sq_loss_gam'</code></li>
<li><code>'owl_logistic_loss_gam'</code></li>
<li><code>'sq_loss_gbm'</code></li>
<li><code>'abs_loss_gbm'</code></li>
</ul>
<div class="sourceCode" id="cb18"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb18-1" data-line-number="1">subgrp.model2 &lt;-<span class="st"> </span><span class="kw">fit.subgroup</span>(<span class="dt">x =</span> x, <span class="dt">y =</span> y,</a>
<a class="sourceLine" id="cb18-2" data-line-number="2">                             <span class="dt">trt =</span> trt,</a>
<a class="sourceLine" id="cb18-3" data-line-number="3">                             <span class="dt">propensity.func =</span> prop.func,</a>
<a class="sourceLine" id="cb18-4" data-line-number="4">                             <span class="dt">loss   =</span> <span class="st">&quot;sq_loss_lasso_gam&quot;</span>,</a>
<a class="sourceLine" id="cb18-5" data-line-number="5">                             <span class="dt">nfolds =</span> <span class="dv">10</span>)              <span class="co"># option for cv.glmnet</span></a>
<a class="sourceLine" id="cb18-6" data-line-number="6"></a>
<a class="sourceLine" id="cb18-7" data-line-number="7"><span class="kw">summary</span>(subgrp.model2)</a></code></pre></div>
<pre><code>## family:    gaussian 
## loss:      sq_loss_lasso_gam 
## method:    weighting 
## cutpoint:  0 
## propensity 
## function:  propensity.func 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Outcomes:
##                 Recommended 0      Recommended 1
## Received 0 -12.0824 (n = 201) -14.7591 (n = 209)
## Received 1  -20.517 (n = 288)  -5.5316 (n = 302)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##                           8.4346 (n = 489) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##                           9.2275 (n = 511) 
## 
## NOTE: The above average outcomes are biased estimates of
##       the expected outcomes conditional on subgroups. 
##       Use 'validate.subgroup()' to obtain unbiased estimates.
## 
## ---------------------------------------------------
## 
## Benefit score quantiles (f(X) for 1 vs 0): 
##       0%      25%      50%      75%     100% 
## -21.2251  -4.1385   0.1479   4.4585  22.4945 
## 
## ---------------------------------------------------
## 
## Summary of individual treatment effects: 
## E[Y|T=1, X] - E[Y|T=0, X]
## 
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -42.4502  -8.2769   0.2958   0.3248   8.9169  44.9889 
## 
## ---------------------------------------------------
## The following summary pertains to estimated treatment-covariate interactions:</code></pre>
<pre><code>## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## y ~ -1 + Trt1 + s(V1) + s(V2) + s(V3) + s(V11) + s(V13)
## 
## Parametric coefficients:
##      Estimate Std. Error t value Pr(&gt;|t|)
## Trt1  0.08914    0.89547     0.1    0.921
## 
## Approximate significance of smooth terms:
##          edf Ref.df     F  p-value    
## s(V1)  2.894  3.680 2.363 0.078723 .  
## s(V2)  2.934  3.743 4.947 0.000785 ***
## s(V3)  1.000  1.000 8.232 0.004201 ** 
## s(V11) 6.402  7.584 1.925 0.048012 *  
## s(V13) 1.000  1.000 5.418 0.020126 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =  0.051   Deviance explained =  6.3%
## GCV =   1520  Scale est. = 1496.9    n = 1000</code></pre>
</div>
<div id="binary-outcomes" class="section level3">
<h3><span class="header-section-number">3.3.4</span> Binary Outcomes</h3>
<p>The <code>loss</code> argument options that are available for binary outcomes are all of the losses for continuous outcomes plus:</p>
<ul>
<li><code>'logistic_loss_lasso'</code></li>
<li><code>'logistic_loss_lasso_gam'</code></li>
<li><code>'logistic_loss_gam'</code></li>
<li><code>'logistic_loss_gbm'</code></li>
</ul>
<p>Note that all options available for continuous outcomes can also potentially be used for binary outcomes.</p>
<div class="sourceCode" id="cb21"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb21-1" data-line-number="1"><span class="co"># create binary outcomes</span></a>
<a class="sourceLine" id="cb21-2" data-line-number="2">y.binary &lt;-<span class="st"> </span><span class="dv">1</span> <span class="op">*</span><span class="st"> </span>(xbeta <span class="op">+</span><span class="st"> </span><span class="kw">rnorm</span>(n.obs, <span class="dt">sd =</span> <span class="dv">2</span>) <span class="op">&gt;</span><span class="st"> </span><span class="dv">0</span> )</a></code></pre></div>
<div class="sourceCode" id="cb22"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb22-1" data-line-number="1">subgrp.bin &lt;-<span class="st"> </span><span class="kw">fit.subgroup</span>(<span class="dt">x =</span> x, <span class="dt">y =</span> y.binary,</a>
<a class="sourceLine" id="cb22-2" data-line-number="2">                           <span class="dt">trt =</span> trt,</a>
<a class="sourceLine" id="cb22-3" data-line-number="3">                           <span class="dt">propensity.func =</span> prop.func,</a>
<a class="sourceLine" id="cb22-4" data-line-number="4">                           <span class="dt">loss   =</span> <span class="st">&quot;logistic_loss_lasso&quot;</span>,</a>
<a class="sourceLine" id="cb22-5" data-line-number="5">                           <span class="dt">nfolds =</span> <span class="dv">10</span>)      <span class="co"># option for cv.glmnet</span></a></code></pre></div>
<p>When gradient-boosted decision trees are used for <span class="math inline">\(f(X)\)</span> via the package <code>gbm</code>, care must be taken to choose the hyperparameters effectively. Specifically, <code>shrinkage</code> (similar to the step size in gradient descent), <code>n.trees</code> (the number of trees to fit), and <code>interaction.depth</code> (the maximum depth of each tree) should be tuned according to the data at hand. By default for gradient-boosting models, <code>fit.subgroup</code> plots the cross-validation error versus the number of trees to give the user a sense of whether their choice of tuning parameters is effective.</p>
<div class="sourceCode" id="cb23"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb23-1" data-line-number="1">subgrp.bin2 &lt;-<span class="st"> </span><span class="kw">fit.subgroup</span>(<span class="dt">x =</span> x, <span class="dt">y =</span> y.binary,</a>
<a class="sourceLine" id="cb23-2" data-line-number="2">                            <span class="dt">trt =</span> trt,</a>
<a class="sourceLine" id="cb23-3" data-line-number="3">                            <span class="dt">propensity.func =</span> prop.func,</a>
<a class="sourceLine" id="cb23-4" data-line-number="4">                            <span class="dt">loss =</span> <span class="st">&quot;logistic_loss_gbm&quot;</span>,</a>
<a class="sourceLine" id="cb23-5" data-line-number="5">                            <span class="dt">shrinkage =</span> <span class="fl">0.025</span>,  <span class="co"># options for gbm</span></a>
<a class="sourceLine" id="cb23-6" data-line-number="6">                            <span class="dt">n.trees =</span> <span class="dv">1500</span>,</a>
<a class="sourceLine" id="cb23-7" data-line-number="7">                            <span class="dt">interaction.depth =</span> <span class="dv">3</span>,</a>
<a class="sourceLine" id="cb23-8" data-line-number="8">                            <span class="dt">cv.folds =</span> <span class="dv">5</span>)</a></code></pre></div>
<p>We can see that at least on the training data, the performance of the gradient-boosting model is better.</p>
<div class="sourceCode" id="cb24"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb24-1" data-line-number="1">subgrp.bin</a></code></pre></div>
<pre><code>## family:    binomial 
## loss:      logistic_loss_lasso 
## method:    weighting 
## cutpoint:  0 
## propensity 
## function:  propensity.func 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Outcomes:
##               Recommended 0    Recommended 1
## Received 0 0.4377 (n = 181) 0.2059 (n = 229)
## Received 1 0.2293 (n = 270) 0.4367 (n = 320)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##                           0.2084 (n = 451) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##                           0.2308 (n = 549) 
## 
## NOTE: The above average outcomes are biased estimates of
##       the expected outcomes conditional on subgroups. 
##       Use 'validate.subgroup()' to obtain unbiased estimates.
## 
## ---------------------------------------------------
## 
## Benefit score quantiles (f(X) for 1 vs 0): 
##       0%      25%      50%      75%     100% 
## -1.31720 -0.24199  0.06729  0.38384  1.48203 
## 
## ---------------------------------------------------
## 
## Summary of individual treatment effects: 
## E[Y|T=1, X] - E[Y|T=0, X]
## 
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -0.57743 -0.12041  0.03363  0.02991  0.18960  0.62976</code></pre>
</div>
<div id="count-outcomes" class="section level3">
<h3><span class="header-section-number">3.3.5</span> Count Outcomes</h3>
<p>The <code>loss</code> argument options that are available for count outcomes are all of the losses for continuous outcomes plus:</p>
<ul>
<li><code>'poisson_loss_lasso'</code></li>
<li><code>'poisson_loss_lasso_gam'</code></li>
<li><code>'poisson_loss_gam'</code></li>
<li><code>'poisson_loss_gbm'</code></li>
</ul>
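<p>No count-outcome example appears in this vignette, but usage mirrors the continuous and binary cases. The following is a minimal sketch reusing the <code>x</code>, <code>trt</code>, <code>xbeta</code>, <code>n.obs</code>, and <code>prop.func</code> objects defined earlier; the simulated count outcome <code>y.count</code> is purely illustrative and not part of the vignette's data setup:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># create count outcomes (illustrative only)
y.count &lt;- rpois(n.obs, lambda = exp(xbeta / 10))

subgrp.count &lt;- fit.subgroup(x = x, y = y.count,
                             trt = trt,
                             propensity.func = prop.func,
                             loss   = &quot;poisson_loss_lasso&quot;,
                             nfolds = 10)      # option for cv.glmnet</code></pre></div>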
</div>
<div id="time-to-event-outcomes" class="section level3">
<h3><span class="header-section-number">3.3.6</span> Time-to-event Outcomes</h3>
<p>The <code>loss</code> argument options that are available for time-to-event outcomes are:</p>
<ul>
<li><code>'cox_loss_lasso'</code></li>
<li><code>'cox_loss_gbm'</code></li>
</ul>
<p>First we will generate time-to-event outcomes to illustrate usage of <code>fit.subgroup()</code> for survival data.</p>
<div class="sourceCode" id="cb26"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb26-1" data-line-number="1"><span class="co"># create time-to-event outcomes</span></a>
<a class="sourceLine" id="cb26-2" data-line-number="2">surv.time &lt;-<span class="st"> </span><span class="kw">exp</span>(<span class="op">-</span><span class="dv">20</span> <span class="op">-</span><span class="st"> </span>xbeta <span class="op">+</span><span class="st"> </span><span class="kw">rnorm</span>(n.obs, <span class="dt">sd =</span> <span class="dv">1</span>))</a>
<a class="sourceLine" id="cb26-3" data-line-number="3">cens.time &lt;-<span class="st"> </span><span class="kw">exp</span>(<span class="kw">rnorm</span>(n.obs, <span class="dt">sd =</span> <span class="dv">3</span>))</a>
<a class="sourceLine" id="cb26-4" data-line-number="4">y.time.to.event  &lt;-<span class="st"> </span><span class="kw">pmin</span>(surv.time, cens.time)</a>
<a class="sourceLine" id="cb26-5" data-line-number="5">status           &lt;-<span class="st"> </span><span class="dv">1</span> <span class="op">*</span><span class="st"> </span>(surv.time <span class="op">&lt;=</span><span class="st"> </span>cens.time)</a></code></pre></div>
<p>For subgroup identification models for time-to-event outcomes, the user should provide <code>fit.subgroup()</code> with a <code>Surv</code> object for <code>y</code>. This can be done like the following:</p>
<div class="sourceCode" id="cb27"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb27-1" data-line-number="1"><span class="kw">library</span>(survival)</a>
<a class="sourceLine" id="cb27-2" data-line-number="2"><span class="kw">set.seed</span>(<span class="dv">123</span>)</a>
<a class="sourceLine" id="cb27-3" data-line-number="3">subgrp.cox &lt;-<span class="st"> </span><span class="kw">fit.subgroup</span>(<span class="dt">x =</span> x, <span class="dt">y =</span> <span class="kw">Surv</span>(y.time.to.event, status),</a>
<a class="sourceLine" id="cb27-4" data-line-number="4">                           <span class="dt">trt =</span> trt,</a>
<a class="sourceLine" id="cb27-5" data-line-number="5">                           <span class="dt">propensity.func =</span> prop.func,</a>
<a class="sourceLine" id="cb27-6" data-line-number="6">                           <span class="dt">method =</span> <span class="st">&quot;weighting&quot;</span>,</a>
<a class="sourceLine" id="cb27-7" data-line-number="7">                           <span class="dt">loss   =</span> <span class="st">&quot;cox_loss_lasso&quot;</span>,</a>
<a class="sourceLine" id="cb27-8" data-line-number="8">                           <span class="dt">nfolds =</span> <span class="dv">10</span>)      <span class="co"># option for cv.glmnet</span></a></code></pre></div>
<p>The subgroup treatment effects are estimated using the restricted mean statistic and can be displayed with <code>summary.subgroup_fitted()</code> or <code>print.subgroup_fitted()</code> like the following:</p>
<div class="sourceCode" id="cb28"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb28-1" data-line-number="1"><span class="kw">summary</span>(subgrp.cox)</a></code></pre></div>
<pre><code>## family:    cox 
## loss:      cox_loss_lasso 
## method:    weighting 
## cutpoint:  0 
## propensity 
## function:  propensity.func 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Outcomes:
##                 Recommended 0      Recommended 1
## Received 0 511.0578 (n = 260)  48.5139 (n = 150)
## Received 1 136.5828 (n = 369) 182.7359 (n = 221)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##                          374.475 (n = 629) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##                          134.222 (n = 371) 
## 
## NOTE: The above average outcomes are biased estimates of
##       the expected outcomes conditional on subgroups. 
##       Use 'validate.subgroup()' to obtain unbiased estimates.
## 
## ---------------------------------------------------
## 
## Benefit score quantiles (f(X) for 1 vs 0): 
##       0%      25%      50%      75%     100% 
## -0.56536 -0.17097 -0.05576  0.06300  0.63843 
## 
## ---------------------------------------------------
## 
## Summary of individual treatment effects: 
## E[Y|T=1, X] / E[Y|T=0, X]
## 
## Note: for survival outcomes, the above ratio is 
## E[g(Y)|T=1, X] / E[g(Y)|T=0, X], 
## where g() is a monotone increasing function of Y, 
## the survival time
## 
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.5281  0.9389  1.0573  1.0701  1.1865  1.7601 
## 
## ---------------------------------------------------
## 
## 7 out of 49 interactions selected in total by the lasso (cross validation criterion).
## 
## The first estimate is the treatment main effect, which is always selected. 
## Any other variables selected represent treatment-covariate interactions.
## 
##           Trt1     V2     V3      V8     V11    V13    V17     V47    V50
## Estimate 0.049 0.0496 -0.007 -0.0034 -0.0228 0.0075 0.0014 -0.0015 0.0113</code></pre>
</div>
<div id="efficiency-augmentation" class="section level3">
<h3><span class="header-section-number">3.3.7</span> Efficiency Augmentation</h3>
<p>The <code>personalized</code> package also allows for efficiency augmentation of the subgroup identification models for continuous outcomes. The basic idea of efficiency augmentation is to construct a model for the main effects of the covariates on the outcome and to shift the outcome based on these estimated main effects. The resulting estimator based on the shifted outcome can be more efficient than one based on the original outcome.</p>
<p>In the <code>personalized</code> package, this involves providing <code>fit.subgroup()</code> a function which inputs the covariate information <code>x</code> and the outcomes <code>y</code> and outputs a prediction for <code>y</code> based on <code>x</code>. The following is an example of such a function:</p>
<div class="sourceCode" id="cb30"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb30-1" data-line-number="1">adjustment.func &lt;-<span class="st"> </span><span class="cf">function</span>(x, y)</a>
<a class="sourceLine" id="cb30-2" data-line-number="2">{</a>
<a class="sourceLine" id="cb30-3" data-line-number="3">    df.x  &lt;-<span class="st"> </span><span class="kw">data.frame</span>(x)</a>
<a class="sourceLine" id="cb30-4" data-line-number="4">    </a>
<a class="sourceLine" id="cb30-5" data-line-number="5">    <span class="co"># add all squared terms to model</span></a>
<a class="sourceLine" id="cb30-6" data-line-number="6">    form  &lt;-<span class="st"> </span><span class="kw">eval</span>(<span class="kw">paste</span>(<span class="st">&quot; ~ -1 + &quot;</span>, </a>
<a class="sourceLine" id="cb30-7" data-line-number="7">                <span class="kw">paste</span>(<span class="kw">paste</span>(<span class="st">'poly('</span>, <span class="kw">colnames</span>(df.x), <span class="st">', 2)'</span>, <span class="dt">sep=</span><span class="st">''</span>), </a>
<a class="sourceLine" id="cb30-8" data-line-number="8">                      <span class="dt">collapse=</span><span class="st">&quot; + &quot;</span>)))</a>
<a class="sourceLine" id="cb30-9" data-line-number="9">    mm    &lt;-<span class="st"> </span><span class="kw">model.matrix</span>(<span class="kw">as.formula</span>(form), <span class="dt">data =</span> df.x)</a>
<a class="sourceLine" id="cb30-10" data-line-number="10">    cvmod &lt;-<span class="st"> </span><span class="kw">cv.glmnet</span>(<span class="dt">y =</span> y, <span class="dt">x =</span> mm, <span class="dt">nfolds =</span> <span class="dv">10</span>)</a>
<a class="sourceLine" id="cb30-11" data-line-number="11">    predictions &lt;-<span class="st"> </span><span class="kw">predict</span>(cvmod, <span class="dt">newx =</span> mm, <span class="dt">s =</span> <span class="st">&quot;lambda.min&quot;</span>)</a>
<a class="sourceLine" id="cb30-12" data-line-number="12">    predictions</a>
<a class="sourceLine" id="cb30-13" data-line-number="13">}</a></code></pre></div>
<p>Then this can be used in <code>fit.subgroup()</code> by passing the function to the argument <code>augment.func</code> like the following:</p>
<div class="sourceCode" id="cb31"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb31-1" data-line-number="1">subgrp.model.eff &lt;-<span class="st"> </span><span class="kw">fit.subgroup</span>(<span class="dt">x =</span> x, <span class="dt">y =</span> y,</a>
<a class="sourceLine" id="cb31-2" data-line-number="2">                             <span class="dt">trt =</span> trt,</a>
<a class="sourceLine" id="cb31-3" data-line-number="3">                             <span class="dt">propensity.func =</span> prop.func,</a>
<a class="sourceLine" id="cb31-4" data-line-number="4">                             <span class="dt">loss   =</span> <span class="st">&quot;sq_loss_lasso&quot;</span>,</a>
<a class="sourceLine" id="cb31-5" data-line-number="5">                             <span class="dt">augment.func =</span> adjustment.func,</a>
<a class="sourceLine" id="cb31-6" data-line-number="6">                             <span class="dt">nfolds =</span> <span class="dv">10</span>)              <span class="co"># option for cv.glmnet</span></a>
<a class="sourceLine" id="cb31-7" data-line-number="7"></a>
<a class="sourceLine" id="cb31-8" data-line-number="8"><span class="kw">summary</span>(subgrp.model.eff)</a></code></pre></div>
<pre><code>## family:    gaussian 
## loss:      sq_loss_lasso 
## method:    weighting 
## cutpoint:  0 
## augmentation 
## function: augment.func 
## propensity 
## function:  propensity.func 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Outcomes:
##                 Recommended 0      Recommended 1
## Received 0  -7.8842 (n = 202) -19.1073 (n = 208)
## Received 1 -15.8012 (n = 292)  -9.7736 (n = 298)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##                            7.917 (n = 494) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##                           9.3337 (n = 506) 
## 
## NOTE: The above average outcomes are biased estimates of
##       the expected outcomes conditional on subgroups. 
##       Use 'validate.subgroup()' to obtain unbiased estimates.
## 
## ---------------------------------------------------
## 
## Benefit score quantiles (f(X) for 1 vs 0): 
##        0%       25%       50%       75%      100% 
## -11.70152  -2.57836   0.04442   2.53750  10.44856 
## 
## ---------------------------------------------------
## 
## Summary of individual treatment effects: 
## E[Y|T=1, X] - E[Y|T=0, X]
## 
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -23.40304  -5.15672   0.08885  -0.04936   5.07500  20.89711 
## 
## ---------------------------------------------------
## 
## 6 out of 50 interactions selected in total by the lasso (cross validation criterion).
## 
## The first estimate is the treatment main effect, which is always selected. 
## Any other variables selected represent treatment-covariate interactions.
## 
##             Trt1     V2      V3     V11     V20     V30     V47
## Estimate -0.0815 0.8629 -0.5513 -0.7276 -0.0104 -0.0367 -0.1321</code></pre>
</div>
<div id="plotting-fitted-models" class="section level3">
<h3><span class="header-section-number">3.3.8</span> Plotting Fitted Models</h3>
<p>The outcomes (or average outcomes) of patients within different subgroups can be plotted using the <code>plot()</code> function. In particular, this function plots patient outcomes by treatment group within each subgroup of patients (those recommended the treatment by the model and those recommended the control by the model). In addition to boxplots of the outcomes, density plots and an interaction plot of the average outcomes within each of these groups are available. They can all be generated like the following:</p>
<div class="sourceCode" id="cb33"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb33-1" data-line-number="1"><span class="kw">plot</span>(subgrp.model)</a></code></pre></div>
<p><img src="" /><!-- --></p>
<div class="sourceCode" id="cb34"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb34-1" data-line-number="1"><span class="kw">plot</span>(subgrp.model, <span class="dt">type =</span> <span class="st">&quot;density&quot;</span>)</a></code></pre></div>
<p><img src="" /><!-- --></p>
<div class="sourceCode" id="cb35"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb35-1" data-line-number="1"><span class="kw">plot</span>(subgrp.model, <span class="dt">type =</span> <span class="st">&quot;interaction&quot;</span>)</a></code></pre></div>
<p><img src="" /><!-- --></p>
<p>Multiple models can be visually compared using the <code>plotCompare()</code> function, which offers the same plotting options as the <code>plot.subgroup_fitted()</code> function.</p>
<div class="sourceCode" id="cb36"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb36-1" data-line-number="1"><span class="kw">plotCompare</span>(subgrp.model, subgrp.model.eff)</a></code></pre></div>
<p><img src="" /><!-- --></p>
</div>
<div id="comparing-subgroups-from-a-fitted-model" class="section level3">
<h3><span class="header-section-number">3.3.9</span> Comparing Subgroups from a Fitted Model</h3>
<p>The <code>summarize.subgroups()</code> function compares the means of the covariates between the estimated subgroups. P-values for the differences between subgroups are also computed: for continuous variables the p-value comes from a t-test, and for binary variables it comes from a chi-squared test.</p>
<div class="sourceCode" id="cb37"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb37-1" data-line-number="1">comp &lt;-<span class="st"> </span><span class="kw">summarize.subgroups</span>(subgrp.model)</a></code></pre></div>
<p>The user can optionally print only the covariates which have significant differences between subgroups with a p-value below a given threshold like the following:</p>
<div class="sourceCode" id="cb38"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb38-1" data-line-number="1"><span class="kw">print</span>(comp, <span class="dt">p.value =</span> <span class="fl">0.01</span>)</a></code></pre></div>
<pre><code>##     Avg (recom 0) Avg (recom 1)   0 - 1 pval 0 - 1 SE (recom 0)
## V1        -0.3561        0.3957 -0.7518  6.564e-05       0.1386
## V2        -1.5303        1.5509 -3.0812  2.128e-66       0.1211
## V3         1.0502       -1.0140  2.0642  1.650e-30       0.1246
## V6         0.4025       -0.1534  0.5559  3.241e-03       0.1405
## V11        1.2063       -0.8904  2.0967  1.095e-29       0.1300
## V13       -0.9325        0.4492 -1.3818  2.738e-13       0.1358
## V14       -0.1832        0.3555 -0.5387  4.690e-03       0.1399
## V17       -0.8114        0.4091 -1.2206  1.618e-10       0.1328
## V19        0.1809       -0.3086  0.4895  8.898e-03       0.1361
## V37        0.5381       -0.5264  1.0645  2.390e-08       0.1376
##     SE (recom 1)
## V1        0.1263
## V2        0.1129
## V3        0.1212
## V6        0.1254
## V11       0.1234
## V13       0.1278
## V14       0.1287
## V17       0.1343
## V19       0.1279
## V37       0.1298</code></pre>
<p>The covariate values and estimated subgroups can be directly used by the <code>summarize.subgroups()</code> function:</p>
<div class="sourceCode" id="cb40"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb40-1" data-line-number="1">comp2 &lt;-<span class="st"> </span><span class="kw">summarize.subgroups</span>(x, <span class="dt">subgroup =</span> subgrp.model<span class="op">$</span>benefit.scores <span class="op">&gt;</span><span class="st"> </span><span class="dv">0</span>)</a></code></pre></div>
</div>
</div>
<div id="validating-subgroup-identification-models" class="section level2">
<h2><span class="header-section-number">3.4</span> Validating Subgroup Identification Models</h2>
<div id="overview-2" class="section level3">
<h3><span class="header-section-number">3.4.1</span> Overview</h3>
<p>An important aspect of estimating the impact of estimated subgroups is obtaining estimates of the treatment effect within each of them. Ideally, the treatment should have a positive impact within the subgroup of patients who are recommended the treatment, and the control should have a positive impact within the subgroup of patients who are not recommended the treatment.</p>
<p>Since our estimated subgroups are constructed using the observed outcomes of the patients, taking the average outcomes by treatment status within each subgroup to estimate the treatment effects within subgroups will yield biased and typically overly-optimistic estimates. Instead, we need to use resampling-based procedures to estimate these effects reliably. There are two such methods for subgroup treatment effect estimation, both available through the <code>validate.subgroup()</code> function.</p>
</div>
<div id="repeated-trainingtest-splitting" class="section level3">
<h3><span class="header-section-number">3.4.2</span> Repeated Training/Test Splitting</h3>
<p>The first method is prediction-based. In each replication of this procedure, the data are randomly partitioned into a training portion and a testing portion. The subgroup identification model is estimated using the training data, and the subgroup treatment effects are estimated using the test data. This method requires two arguments to be passed to <code>validate.subgroup()</code>: <code>B</code>, the number of replications, and <code>train.fraction</code>, the proportion of all samples used for training (hence <code>1 - train.fraction</code> is the proportion used for testing).</p>
<p>The main object which needs to be passed to <code>validate.subgroup()</code> is a fitted object returned by <code>fit.subgroup()</code>. Note that in order to validate a fitted object from <code>fit.subgroup()</code>, the model must be fit with the <code>fit.subgroup()</code> argument <code>retcall</code> set to <code>TRUE</code>.</p>
<div class="sourceCode" id="cb41"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb41-1" data-line-number="1"><span class="co"># check that the object is an object returned by fit.subgroup()</span></a>
<a class="sourceLine" id="cb41-2" data-line-number="2"><span class="kw">class</span>(subgrp.model.eff)</a></code></pre></div>
<pre><code>## [1] &quot;subgroup_fitted&quot;</code></pre>
<div class="sourceCode" id="cb43"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb43-1" data-line-number="1">validation.eff &lt;-<span class="st"> </span><span class="kw">validate.subgroup</span>(subgrp.model.eff, </a>
<a class="sourceLine" id="cb43-2" data-line-number="2">                                 <span class="dt">B =</span> 25L,  <span class="co"># specify the number of replications</span></a>
<a class="sourceLine" id="cb43-3" data-line-number="3">                                 <span class="dt">method =</span> <span class="st">&quot;training_test_replication&quot;</span>,</a>
<a class="sourceLine" id="cb43-4" data-line-number="4">                                 <span class="dt">train.fraction =</span> <span class="fl">0.75</span>)</a>
<a class="sourceLine" id="cb43-5" data-line-number="5"></a>
<a class="sourceLine" id="cb43-6" data-line-number="6">validation.eff</a></code></pre></div>
<pre><code>## family:  gaussian 
## loss:    sq_loss_lasso 
## method:  weighting 
## 
## validation method:  training_test_replication 
## cutpoint:           0 
## replications:       25 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Test Set Outcomes:
##                               Recommended 0
## Received 0     -8.7056 (SE = 3.853, n = 50)
## Received 1 -14.8779 (SE = 2.5959, n = 73.6)
##                               Recommended 1
## Received 0 -18.409 (SE = 2.3969, n = 52.72)
## Received 1 -9.4263 (SE = 3.2005, n = 73.68)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##            6.1723 (SE = 5.5794, n = 123.6) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##            8.9827 (SE = 4.0441, n = 126.4) 
## 
## Est of 
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:                     
## 7.4119 (SE = 3.2998)</code></pre>
</div>
<div id="bootstrap-bias-correction" class="section level3">
<h3><span class="header-section-number">3.4.3</span> Bootstrap Bias Correction</h3>
<p>The second method is a bootstrap-based procedure which seeks to estimate the bias in the estimates of the subgroup treatment effects and then correct for this bias (Harrell et al. 1996).</p>
<ul>
<li><p>For a statistic <span class="math inline">\(d\)</span>, let <span class="math inline">\(d_{train}(X)\)</span> be the statistic estimated with the training data and evaluated on data <span class="math inline">\(X\)</span>, and let <span class="math inline">\(d_{b}(X)\)</span> be the statistic estimated using a bootstrap sample <span class="math inline">\(X_b\)</span> (sampled with replacement from <span class="math inline">\(X\)</span>) and evaluated on <span class="math inline">\(X\)</span>.</p></li>
<li><p>The bootstrap estimate of the amount of bias with regards to the statistic <span class="math inline">\(d\)</span> is <span class="math display">\[
{bias}(X) = \frac{1}{B}\sum_{b = 1}^B [d_b(X_b) - d_b(X) ]
\]</span></p></li>
<li><p>Then a bias-corrected estimate of the statistic <span class="math inline">\(d\)</span> is <span class="math display">\[d_{train}(X) - {bias}(X)\]</span></p></li>
</ul>
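<p>The recipe above can be illustrated with a small, self-contained sketch applied to a deliberately optimistic statistic: the mean outcome of whichever of two identically distributed groups looks better in the data used to choose it. This is only an illustration of the general bias-correction idea, not the internal implementation of <code>validate.subgroup()</code>:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">set.seed(1)
dat &lt;- data.frame(y = rnorm(200), g = rep(1:2, each = 100))

# d(fit, new): pick the group with the larger mean in 'fit',
# then report that group's mean outcome in 'new'
d &lt;- function(fit, new) {
    best &lt;- names(which.max(tapply(fit$y, fit$g, mean)))
    mean(new$y[new$g == as.integer(best)])
}

B    &lt;- 100
bias &lt;- mean(replicate(B, {
    boot &lt;- dat[sample(nrow(dat), replace = TRUE), ]
    d(boot, boot) - d(boot, dat)   # d_b(X_b) - d_b(X)
}))

d(dat, dat) - bias   # bias-corrected estimate</code></pre></div>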
<div class="sourceCode" id="cb45"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb45-1" data-line-number="1">validation3 &lt;-<span class="st"> </span><span class="kw">validate.subgroup</span>(subgrp.model, </a>
<a class="sourceLine" id="cb45-2" data-line-number="2">                                 <span class="dt">B =</span> 25L,  <span class="co"># specify the number of replications</span></a>
<a class="sourceLine" id="cb45-3" data-line-number="3">                                 <span class="dt">method =</span> <span class="st">&quot;boot_bias_correction&quot;</span>)</a>
<a class="sourceLine" id="cb45-4" data-line-number="4"></a>
<a class="sourceLine" id="cb45-5" data-line-number="5">validation3</a></code></pre></div>
<pre><code>## family:  gaussian 
## loss:    sq_loss_lasso 
## method:  weighting 
## 
## validation method:  boot_bias_correction 
## cutpoint:           0 
## replications:       25 
## 
## benefit score: f(x), 
## Trt recom = 1*I(f(x)&gt;c)+0*I(f(x)&lt;=c) where c is 'cutpoint'
## 
## Average Bootstrap Bias-Corrected Outcomes:
##                                 Recommended 0
## Received 0 -12.0976 (SE = 1.7793, n = 193.04)
## Received 1 -16.6118 (SE = 1.3674, n = 267.76)
##                                Recommended 1
## Received 0 -15.0433 (SE = 1.5727, n = 212.6)
## Received 1  -9.2469 (SE = 1.6187, n = 326.6)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom] 
##            4.5142 (SE = 2.2994, n = 460.8) 
## Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom] 
##            5.7964 (SE = 2.3648, n = 539.2) 
## 
## Est of 
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:                     
## 4.9823 (SE = 1.5705)</code></pre>
</div>
<div id="plotting-validated-models" class="section level3">
<h3><span class="header-section-number">3.4.4</span> Plotting Validated Models</h3>
<p>The results from each iteration of either the bootstrap or the training/testing partitioning procedure can be plotted using the <code>plot()</code> function, just as for fitted objects from <code>fit.subgroup()</code>. Boxplots, density plots, and interaction plots are all available through the <code>type</code> argument:</p>
<div class="sourceCode" id="cb47"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb47-1" data-line-number="1"><span class="kw">plot</span>(validation)</a></code></pre></div>
<p><img src="" /><!-- --></p>
<div class="sourceCode" id="cb48"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb48-1" data-line-number="1"><span class="kw">plot</span>(validation, <span class="dt">type =</span> <span class="st">&quot;density&quot;</span>)</a></code></pre></div>
<p><img src="" /><!-- --></p>
<p>Multiple validated models can be visually compared using the <code>plotCompare()</code> function, which offers the same plotting options as the <code>plot.subgroup_validated()</code> function. Here we compare the model fitted using <code>sq_loss_lasso</code> to the one fitted using <code>sq_loss_lasso</code> and efficiency augmentation:</p>
<div class="sourceCode" id="cb49"><pre class="sourceCode r"><code class="sourceCode r"><a class="sourceLine" id="cb49-1" data-line-number="1"><span class="kw">plotCompare</span>(validation, validation.eff)</a></code></pre></div>
<p>We can see above that the model with efficiency augmentation finds subgroups with more impactful treatment effects.</p>
</div>
</div>
</div>



<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
  (function () {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src  = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
    document.getElementsByTagName("head")[0].appendChild(script);
  })();
</script>

</body>
</html>
