---

title: XCM (An Explainable Convolutional Neural Network for Multivariate Time Series Classification)


keywords: fastai
sidebar: home_sidebar

summary: "This is an unofficial PyTorch implementation created by Ignacio Oguiza (oguiza@gmail.com), based on Fauvel, K., et al. XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification."
description: "This is an unofficial PyTorch implementation created by Ignacio Oguiza (oguiza@gmail.com), based on Fauvel, K., et al. XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification."
nb_path: "nbs/114b_models.XCMPlus.ipynb"
---
<!--

#################################################
### THIS FILE WAS AUTOGENERATED! DO NOT EDIT! ###
#################################################
# file to edit: nbs/114b_models.XCMPlus.ipynb
# command to build the docs after a change: nbdev_build_docs

-->

<div class="container" id="notebook-container">
        
    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

<div class="output_wrapper">
<div class="output">

<div class="output_area">


<div class="output_markdown rendered_html output_subarea ">
<h2 id="XCMPlus" class="doc_header"><code>class</code> <code>XCMPlus</code><a href="https://github.com/timeseriesAI/tsai/tree/main/tsai/models/XCMPlus.py#L19" class="source_link" style="float:right">[source]</a></h2><blockquote><p><code>XCMPlus</code>(<strong><code>c_in</code></strong>:<code>int</code>, <strong><code>c_out</code></strong>:<code>int</code>, <strong><code>seq_len</code></strong>:<code>Optional</code>[<code>int</code>]=<em><code>None</code></em>, <strong><code>nf</code></strong>:<code>int</code>=<em><code>128</code></em>, <strong><code>window_perc</code></strong>:<code>float</code>=<em><code>1.0</code></em>, <strong><code>flatten</code></strong>:<code>bool</code>=<em><code>False</code></em>, <strong><code>custom_head</code></strong>:<code>callable</code>=<em><code>None</code></em>, <strong><code>concat_pool</code></strong>:<code>bool</code>=<em><code>False</code></em>, <strong><code>fc_dropout</code></strong>:<code>float</code>=<em><code>0.0</code></em>, <strong><code>bn</code></strong>:<code>bool</code>=<em><code>False</code></em>, <strong><code>y_range</code></strong>:<code>tuple</code>=<em><code>None</code></em>, <strong>**<code>kwargs</code></strong>) :: <a href="/models.TabFusionTransformer.html#Sequential"><code>Sequential</code></a></p>
</blockquote>
<p>A sequential container.
Modules will be added to it in the order they are passed in the
constructor. Alternatively, an <code>OrderedDict</code> of modules can be
passed in. The <code>forward()</code> method of <code>Sequential</code> accepts any
input and forwards it to the first module it contains. It then
"chains" outputs to inputs sequentially for each subsequent module,
finally returning the output of the last module.</p>
<p>The value a <code>Sequential</code> provides over manually calling a sequence
of modules is that it allows treating the whole container as a
single module, such that performing a transformation on the
<code>Sequential</code> applies to each of the modules it stores (which are
each a registered submodule of the <code>Sequential</code>).</p>
<p>What's the difference between a <code>Sequential</code> and a
<code>torch.nn.ModuleList</code>? A <code>ModuleList</code> is exactly what it
sounds like: a list for storing <code>Module</code>s! On the other hand,
the layers in a <code>Sequential</code> are connected in a cascading way.</p>
<p>Example::</p>

<pre><code># Using Sequential to create a small model. When `model` is run,
# input will first be passed to `Conv2d(1,20,5)`. The output of
# `Conv2d(1,20,5)` will be used as the input to the first
# `ReLU`; the output of the first `ReLU` will become the input
# for `Conv2d(20,64,5)`. Finally, the output of
# `Conv2d(20,64,5)` will be used as input to the second `ReLU`
model = nn.Sequential(
          nn.Conv2d(1,20,5),
          nn.ReLU(),
          nn.Conv2d(20,64,5),
          nn.ReLU()
        )

# Using Sequential with OrderedDict. This is functionally the
# same as the above code
model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1,20,5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20,64,5)),
          ('relu2', nn.ReLU())
        ]))</code></pre>

</div>

</div>

</div>
</div>

</div>
    {% endraw %}
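The `Sequential` vs `ModuleList` distinction described in the docstring above can be illustrated with a minimal plain-PyTorch sketch (illustrative only, not part of `tsai`):

```python
import torch
import torch.nn as nn

x = torch.randn(3, 4)

# Sequential chains modules: each module's output feeds the next one.
seq = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
print(seq(x).shape)  # torch.Size([3, 2])

# ModuleList only registers modules; the forward pass must chain them manually.
mods = nn.ModuleList([nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)])
out = x
for m in mods:
    out = m(out)
print(out.shape)  # torch.Size([3, 2])
```

Because `XCMPlus` subclasses `Sequential`, its `backbone` and `head` are registered submodules and the whole model can be called as a single module.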

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="kn">from</span> <span class="nn">tsai.data.all</span> <span class="kn">import</span> <span class="o">*</span>
<span class="kn">from</span> <span class="nn">tsai.models.XCM</span> <span class="kn">import</span> <span class="o">*</span>

<span class="n">dsid</span> <span class="o">=</span> <span class="s1">&#39;NATOPS&#39;</span>
<span class="n">X</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">splits</span> <span class="o">=</span> <span class="n">get_UCR_data</span><span class="p">(</span><span class="n">dsid</span><span class="p">,</span> <span class="n">split_data</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="n">tfms</span> <span class="o">=</span> <span class="p">[</span><span class="kc">None</span><span class="p">,</span> <span class="n">Categorize</span><span class="p">()]</span>
<span class="n">dls</span> <span class="o">=</span> <span class="n">get_ts_dls</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">splits</span><span class="o">=</span><span class="n">splits</span><span class="p">,</span> <span class="n">tfms</span><span class="o">=</span><span class="n">tfms</span><span class="p">)</span>
<span class="n">model</span> <span class="o">=</span>  <span class="n">XCMPlus</span><span class="p">(</span><span class="n">dls</span><span class="o">.</span><span class="n">vars</span><span class="p">,</span> <span class="n">dls</span><span class="o">.</span><span class="n">c</span><span class="p">,</span> <span class="n">dls</span><span class="o">.</span><span class="n">len</span><span class="p">)</span>
<span class="n">learn</span> <span class="o">=</span> <span class="n">Learner</span><span class="p">(</span><span class="n">dls</span><span class="p">,</span> <span class="n">model</span><span class="p">,</span> <span class="n">metrics</span><span class="o">=</span><span class="n">accuracy</span><span class="p">)</span>
<span class="n">xb</span><span class="p">,</span> <span class="n">yb</span> <span class="o">=</span> <span class="n">dls</span><span class="o">.</span><span class="n">one_batch</span><span class="p">()</span>

<span class="n">bs</span><span class="p">,</span> <span class="n">c_in</span><span class="p">,</span> <span class="n">seq_len</span> <span class="o">=</span> <span class="n">xb</span><span class="o">.</span><span class="n">shape</span>
<span class="n">c_out</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">unique</span><span class="p">(</span><span class="n">yb</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span><span class="o">.</span><span class="n">numpy</span><span class="p">()))</span>

<span class="n">model</span> <span class="o">=</span> <span class="n">XCMPlus</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">,</span> <span class="n">fc_dropout</span><span class="o">=</span><span class="mf">.5</span><span class="p">)</span>
<span class="n">test_eq</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_out</span><span class="p">))</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">XCMPlus</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">,</span> <span class="n">concat_pool</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">test_eq</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_out</span><span class="p">))</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">XCMPlus</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)</span>
<span class="n">test_eq</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_out</span><span class="p">))</span>
<span class="n">test_eq</span><span class="p">(</span><span class="n">count_parameters</span><span class="p">(</span><span class="n">XCMPlus</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)),</span> <span class="n">count_parameters</span><span class="p">(</span><span class="n">XCM</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)))</span>
<span class="n">model</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">



<div class="output_text output_subarea output_execute_result">
<pre>XCMPlus(
  (backbone): _XCMPlus_Backbone(
    (conv2dblock): Sequential(
      (0): Unsqueeze(dim=1)
      (1): Conv2dSame(
        (conv2d_same): Conv2d(1, 128, kernel_size=(1, 51), stride=(1, 1))
      )
      (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (3): ReLU()
    )
    (conv2d1x1block): Sequential(
      (0): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
      (1): ReLU()
      (2): Squeeze(dim=1)
    )
    (conv1dblock): Sequential(
      (0): Conv1d(24, 128, kernel_size=(51,), stride=(1,), padding=(25,))
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
    (conv1d1x1block): Sequential(
      (0): Conv1d(128, 1, kernel_size=(1,), stride=(1,))
      (1): ReLU()
    )
    (concat): Concat(dim=1)
    (conv1d): Sequential(
      (0): Conv1d(25, 128, kernel_size=(51,), stride=(1,), padding=(25,))
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
  )
  (head): Sequential(
    (0): GAP1d(
      (gap): AdaptiveAvgPool1d(output_size=1)
      (flatten): Flatten(full=False)
    )
    (1): LinBnDrop(
      (0): Linear(in_features=128, out_features=6, bias=True)
    )
  )
)</pre>
</div>

</div>

</div>
</div>

</div>
    {% endraw %}
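The backbone printed above combines a 2D branch (temporal convolutions applied per variable) with a 1D branch (convolutions across all variables), each collapsed to a single channel by a 1x1 convolution before concatenation. The following is a simplified sketch of that dual-branch design in plain PyTorch (assuming `window_perc=1.0`, so the kernel spans the full sequence; this is an illustration, not the actual `tsai` implementation):

```python
import torch
import torch.nn as nn

bs, c_in, seq_len, nf = 8, 24, 51, 128
x = torch.randn(bs, c_in, seq_len)

# 2D branch: treat (variables, time) as an image and convolve along time
# independently per variable, then collapse nf feature maps with a 1x1 conv.
branch_2d = nn.Sequential(
    nn.Unflatten(1, (1, c_in)),                       # (bs, 1, c_in, seq_len)
    nn.Conv2d(1, nf, (1, seq_len), padding=(0, 25)),  # (bs, nf, c_in, seq_len)
    nn.BatchNorm2d(nf), nn.ReLU(),
    nn.Conv2d(nf, 1, 1), nn.ReLU(),                   # (bs, 1, c_in, seq_len)
    nn.Flatten(1, 2),                                 # (bs, c_in, seq_len)
)

# 1D branch: convolve across all variables jointly, collapse to one channel.
branch_1d = nn.Sequential(
    nn.Conv1d(c_in, nf, seq_len, padding=25),         # (bs, nf, seq_len)
    nn.BatchNorm1d(nf), nn.ReLU(),
    nn.Conv1d(nf, 1, 1), nn.ReLU(),                   # (bs, 1, seq_len)
)

# Concatenate along the channel dim: (bs, c_in + 1, seq_len), i.e. (8, 25, 51).
feats = torch.cat([branch_2d(x), branch_1d(x)], dim=1)
print(feats.shape)
```

This matches the `Conv1d(25, 128, ...)` layer above, whose 25 input channels are the 24 variables from the 2D branch plus the single channel from the 1D branch.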

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">show_gradcam</span><span class="p">(</span><span class="n">xb</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">yb</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">

<div class="output_subarea output_stream output_stderr output_text">
<pre>/Users/nacho/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py:974: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
  warnings.warn(&#34;Using a non-full backward hook when the forward contains multiple autograd Nodes &#34;
</pre>
</div>
</div>

<div class="output_area">

<div class="output_subarea output_text">
<pre>[Output: Grad-CAM variable and time attribution maps; images not available in this autogenerated page]</pre>
</div>

</div>

</div>
</div>

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">bs</span> <span class="o">=</span> <span class="mi">16</span>
<span class="n">n_vars</span> <span class="o">=</span> <span class="mi">3</span>
<span class="n">seq_len</span> <span class="o">=</span> <span class="mi">12</span>
<span class="n">c_out</span> <span class="o">=</span> <span class="mi">10</span>
<span class="n">xb</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">n_vars</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)</span>
<span class="n">new_head</span> <span class="o">=</span> <span class="n">partial</span><span class="p">(</span><span class="n">conv_lin_3d_head</span><span class="p">,</span> <span class="n">d</span><span class="o">=</span><span class="p">(</span><span class="mi">5</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
<span class="n">net</span> <span class="o">=</span> <span class="n">XCMPlus</span><span class="p">(</span><span class="n">n_vars</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">,</span> <span class="n">custom_head</span><span class="o">=</span><span class="n">new_head</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">net</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">net</span><span class="o">.</span><span class="n">head</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">

<div class="output_subarea output_stream output_stdout output_text">
<pre>torch.Size([16, 5, 2])
</pre>
</div>
</div>

<div class="output_area">



<div class="output_text output_subarea output_execute_result">
<pre>create_conv_lin_3d_head(
  (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (1): Conv1d(128, 5, kernel_size=(1,), stride=(1,), bias=False)
  (2): Transpose(-1, -2)
  (3): BatchNorm1d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (4): Transpose(-1, -2)
  (5): Linear(in_features=12, out_features=2, bias=False)
)</pre>
</div>

</div>

</div>
</div>

</div>
    {% endraw %}
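The 3d head printed above maps the backbone's `(bs, nf, seq_len)` output to a `(bs, 5, 2)` target: a 1x1 `Conv1d` reduces the feature dimension and a final `Linear` reduces the time dimension. A simplified plain-PyTorch sketch of this shape transformation (omitting the intermediate `Transpose`/`BatchNorm1d` pair for clarity; not the actual `create_conv_lin_3d_head`):

```python
import torch
import torch.nn as nn

nf, seq_len, d1, d2 = 128, 12, 5, 2

head = nn.Sequential(
    nn.BatchNorm1d(nf),
    nn.Conv1d(nf, d1, kernel_size=1, bias=False),  # (bs, nf, seq_len) -> (bs, d1, seq_len)
    nn.Linear(seq_len, d2, bias=False),            # applied to the last dim -> (bs, d1, d2)
)

x = torch.randn(16, nf, seq_len)
print(head(x).shape)  # torch.Size([16, 5, 2])
```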

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">bs</span> <span class="o">=</span> <span class="mi">16</span>
<span class="n">n_vars</span> <span class="o">=</span> <span class="mi">3</span>
<span class="n">seq_len</span> <span class="o">=</span> <span class="mi">12</span>
<span class="n">c_out</span> <span class="o">=</span> <span class="mi">2</span>
<span class="n">xb</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">n_vars</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)</span>
<span class="n">net</span> <span class="o">=</span> <span class="n">XCMPlus</span><span class="p">(</span><span class="n">n_vars</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)</span>
<span class="n">change_model_head</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="n">create_pool_plus_head</span><span class="p">,</span> <span class="n">concat_pool</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">net</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">net</span><span class="o">.</span><span class="n">head</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">

<div class="output_subarea output_stream output_stdout output_text">
<pre>torch.Size([16, 2])
</pre>
</div>
</div>

<div class="output_area">



<div class="output_text output_subarea output_execute_result">
<pre>Sequential(
  (0): AdaptiveAvgPool1d(output_size=1)
  (1): Flatten(full=False)
  (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (3): Linear(in_features=128, out_features=512, bias=False)
  (4): ReLU(inplace=True)
  (5): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (6): Linear(in_features=512, out_features=2, bias=False)
)</pre>
</div>

</div>

</div>
</div>

</div>
    {% endraw %}
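The replacement head printed above follows a standard pool-flatten-MLP pattern. It can be reproduced with plain PyTorch layers (a sketch mirroring the printed module, not the `create_pool_plus_head` source):

```python
import torch
import torch.nn as nn

nf, c_out = 128, 2

pool_head = nn.Sequential(
    nn.AdaptiveAvgPool1d(1),           # (bs, nf, seq_len) -> (bs, nf, 1)
    nn.Flatten(),                      # -> (bs, nf)
    nn.BatchNorm1d(nf),
    nn.Linear(nf, 512, bias=False),
    nn.ReLU(inplace=True),
    nn.BatchNorm1d(512),
    nn.Linear(512, c_out, bias=False),
)

x = torch.randn(16, nf, 12)
print(pool_head(x).shape)  # torch.Size([16, 2])
```

Because the pooling layer removes the time dimension, this head works for any `seq_len`, which is why `change_model_head` can swap it in without re-specifying the input length.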

</div>
 

