---

title: TST (Time Series Transformer)


keywords: fastai
sidebar: home_sidebar

summary: "This is an unofficial PyTorch implementation by Ignacio Oguiza (oguiza@gmail.com), based on: Zerveas, G., Jayaraman, S., Patel, D., Bhamidipaty, A., & Eickhoff, C. (2020). **A Transformer-based Framework for Multivariate Time Series Representation Learning**. arXiv preprint arXiv:2010.02803v2."
description: "This is an unofficial PyTorch implementation by Ignacio Oguiza (oguiza@gmail.com), based on: Zerveas, G., Jayaraman, S., Patel, D., Bhamidipaty, A., & Eickhoff, C. (2020). **A Transformer-based Framework for Multivariate Time Series Representation Learning**. arXiv preprint arXiv:2010.02803v2."
nb_path: "nbs/108b_models.TST.ipynb"
---
<!--

#################################################
### THIS FILE WAS AUTOGENERATED! DO NOT EDIT! ###
#################################################
# file to edit: nbs/108b_models.TST.ipynb
# command to build the docs after a change: nbdev_build_docs

-->

<div class="container" id="notebook-container">
        
    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

</div>
    {% endraw %}

<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>This is an unofficial PyTorch implementation by Ignacio Oguiza (oguiza@gmail.com), based on:</p>
<ul>
<li>George Zerveas et al. A Transformer-based Framework for Multivariate Time Series Representation Learning, in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), August 14--18, 2021. ArXiV version: <a href="https://arxiv.org/abs/2010.02803">https://arxiv.org/abs/2010.02803</a></li>
<li>Official implementation: <a href="https://github.com/gzerveas/mvts_transformer">https://github.com/gzerveas/mvts_transformer</a></li>
</ul>
<div class="highlight"><pre><span></span>@inproceedings<span class="o">{</span><span class="m">10</span>.1145/3447548.3467401,
<span class="nv">author</span> <span class="o">=</span> <span class="o">{</span>Zerveas, George and Jayaraman, Srideepika and Patel, Dhaval and Bhamidipaty, Anuradha and Eickhoff, Carsten<span class="o">}</span>,
<span class="nv">title</span> <span class="o">=</span> <span class="o">{</span>A Transformer-Based Framework <span class="k">for</span> Multivariate Time Series Representation Learning<span class="o">}</span>,
<span class="nv">year</span> <span class="o">=</span> <span class="o">{</span><span class="m">2021</span><span class="o">}</span>,
<span class="nv">isbn</span> <span class="o">=</span> <span class="o">{</span><span class="m">9781450383325</span><span class="o">}</span>,
<span class="nv">publisher</span> <span class="o">=</span> <span class="o">{</span>Association <span class="k">for</span> Computing Machinery<span class="o">}</span>,
<span class="nv">address</span> <span class="o">=</span> <span class="o">{</span>New York, NY, USA<span class="o">}</span>,
<span class="nv">url</span> <span class="o">=</span> <span class="o">{</span>https://doi.org/10.1145/3447548.3467401<span class="o">}</span>,
<span class="nv">doi</span> <span class="o">=</span> <span class="o">{</span><span class="m">10</span>.1145/3447548.3467401<span class="o">}</span>,
<span class="nv">booktitle</span> <span class="o">=</span> <span class="o">{</span>Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery <span class="p">&amp;</span>amp<span class="p">;</span> Data Mining<span class="o">}</span>,
<span class="nv">pages</span> <span class="o">=</span> <span class="o">{</span><span class="m">2114</span>–2124<span class="o">}</span>,
<span class="nv">numpages</span> <span class="o">=</span> <span class="o">{</span><span class="m">11</span><span class="o">}</span>,
<span class="nv">keywords</span> <span class="o">=</span> <span class="o">{</span>regression, framework, multivariate <span class="nb">time</span> series, classification, transformer, deep learning, self-supervised learning, unsupervised learning, imputation<span class="o">}</span>,
<span class="nv">location</span> <span class="o">=</span> <span class="o">{</span>Virtual Event, Singapore<span class="o">}</span>,
<span class="nv">series</span> <span class="o">=</span> <span class="o">{</span>KDD <span class="err">&#39;</span><span class="m">21</span><span class="o">}</span>
<span class="o">}</span>
</pre></div>
<p>The paper uses 'Attention is all you need' as a major reference:</p>
<ul>
<li>Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... &amp; Polosukhin, I. (2017). <strong>Attention is all you need</strong>. In Advances in neural information processing systems (pp. 5998-6008).</li>
</ul>
<p>This implementation is adapted to work with the rest of the <code>tsai</code> library, and contains some hyperparameters that are not available in the original implementation. They have been included to make experimentation easier.</p>

</div>
</div>
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<h2 id="TST-arguments">TST arguments<a class="anchor-link" href="#TST-arguments"> </a></h2>
</div>
</div>
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Usual values are the ones that appear in the "Attention is all you need" and "A Transformer-based Framework for Multivariate Time Series Representation Learning" papers.</p>
<p>The default values are the ones selected as the default configuration in the latter.</p>
<ul>
<li>c_in: the number of features (aka variables, dimensions, channels) in the time series dataset (dls.var).</li>
<li>c_out: the number of target classes (dls.c).</li>
<li>seq_len: the number of time steps in the time series (dls.len).</li>
<li>max_seq_len: useful to control the temporal resolution in long time series to avoid memory issues. Default: None.</li>
<li>d_model: total dimension of the model (number of features created by the model). Usual values: 128-1024. Default: 128.</li>
<li>n_heads:  parallel attention heads. Usual values: 8-16. Default: 16.</li>
<li>d_k: size of the learned linear projection of queries and keys in the MHA. Usual values: 16-512. Default: None -&gt; (d_model/n_heads) = 8.</li>
<li>d_v: size of the learned linear projection of values in the MHA. Usual values: 16-512. Default: None -&gt; (d_model/n_heads) = 8.</li>
<li>d_ff: the dimension of the feedforward network model. Usual values: 256-4096. Default: 256.</li>
<li>dropout: amount of residual dropout applied in the encoder. Usual values: 0.-0.3. Default: 0.1.</li>
<li>activation: the activation function of the intermediate layer: 'relu' or 'gelu'. Default: 'gelu'.</li>
<li>n_layers: the number of sub-encoder-layers in the encoder. Usual values: 2-8. Default: 3.</li>
<li>fc_dropout: dropout applied to the final fully connected layer. Usual values: 0.-0.8. Default: 0.</li>
<li>y_range: range of possible y values (used in regression tasks). Default: None</li>
<li>kwargs: nn.Conv1d kwargs. If not {}, a nn.Conv1d with those kwargs will be applied to the original time series.</li>
</ul>

</div>
</div>
</div>
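As a quick illustration of the argument defaults listed above, the following sketch shows how the <code>d_k</code> and <code>d_v</code> defaults can be resolved from <code>d_model</code> and <code>n_heads</code>. Note that <code>resolve_head_dims</code> is a hypothetical helper written for this page, not part of the <code>tsai</code> library:

```python
# Hypothetical helper (not part of tsai) illustrating how the d_k / d_v
# defaults described above are resolved when they are left as None.
def resolve_head_dims(d_model=128, n_heads=16, d_k=None, d_v=None):
    # When d_k / d_v are None, each head gets an equal share of d_model.
    d_k = d_k if d_k is not None else d_model // n_heads
    d_v = d_v if d_v is not None else d_model // n_heads
    return d_k, d_v

print(resolve_head_dims())           # default config: 128 // 16 = 8 per head
print(resolve_head_dims(n_heads=4))  # fewer heads -> larger per-head size
```

With the default configuration (d_model=128, n_heads=16), each head projects queries, keys and values down to 8 dimensions.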
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<h2 id="Imports">Imports<a class="anchor-link" href="#Imports"> </a></h2>
</div>
</div>
</div>
    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

</div>
    {% endraw %}

<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<h2 id="TST">TST<a class="anchor-link" href="#TST"> </a></h2>
</div>
</div>
</div>
    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">t</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">16</span><span class="p">,</span> <span class="mi">50</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>
<span class="n">output</span><span class="p">,</span> <span class="n">attn</span> <span class="o">=</span> <span class="n">_MultiHeadAttention</span><span class="p">(</span><span class="n">d_model</span><span class="o">=</span><span class="mi">128</span><span class="p">,</span> <span class="n">n_heads</span><span class="o">=</span><span class="mi">3</span><span class="p">,</span> <span class="n">d_k</span><span class="o">=</span><span class="mi">8</span><span class="p">,</span> <span class="n">d_v</span><span class="o">=</span><span class="mi">6</span><span class="p">)(</span><span class="n">t</span><span class="p">,</span> <span class="n">t</span><span class="p">,</span> <span class="n">t</span><span class="p">)</span>
<span class="n">output</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="n">attn</span><span class="o">.</span><span class="n">shape</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">



<div class="output_text output_subarea output_execute_result">
<pre>(torch.Size([16, 50, 128]), torch.Size([16, 3, 50, 50]))</pre>
</div>

</div>

</div>
</div>

</div>
    {% endraw %}
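The output shapes above follow from standard scaled dot-product attention: the attention matrix has one q_len x q_len map per head, and the per-head contexts are concatenated and projected back to d_model. A minimal NumPy sketch (an assumption about the internals; the actual <code>_MultiHeadAttention</code> module is defined in the tsai source and may differ in details such as biases) reproduces the shapes:

```python
import numpy as np

bs, q_len, d_model = 16, 50, 128
n_heads, d_k, d_v = 3, 8, 6
rng = np.random.default_rng(0)

x = rng.standard_normal((bs, q_len, d_model))
# Per-head linear projections of queries, keys and values (no biases here)
W_q = rng.standard_normal((n_heads, d_model, d_k))
W_k = rng.standard_normal((n_heads, d_model, d_k))
W_v = rng.standard_normal((n_heads, d_model, d_v))
W_o = rng.standard_normal((n_heads * d_v, d_model))  # output projection

q = np.einsum('bld,hdk->bhlk', x, W_q)  # [bs, n_heads, q_len, d_k]
k = np.einsum('bld,hdk->bhlk', x, W_k)
v = np.einsum('bld,hdv->bhlv', x, W_v)  # [bs, n_heads, q_len, d_v]

# Scaled dot-product attention, softmax over the key axis
scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(d_k)  # [bs, n_heads, q_len, q_len]
attn = np.exp(scores - scores.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)

context = attn @ v                                   # [bs, n_heads, q_len, d_v]
output = context.transpose(0, 2, 1, 3).reshape(bs, q_len, -1) @ W_o

print(output.shape, attn.shape)  # (16, 50, 128) (16, 3, 50, 50)
```

These match the <code>(torch.Size([16, 50, 128]), torch.Size([16, 3, 50, 50]))</code> result shown above.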

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">t</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">16</span><span class="p">,</span> <span class="mi">50</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">_TSTEncoderLayer</span><span class="p">(</span><span class="n">q_len</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">d_model</span><span class="o">=</span><span class="mi">128</span><span class="p">,</span> <span class="n">n_heads</span><span class="o">=</span><span class="mi">3</span><span class="p">,</span> <span class="n">d_k</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">d_v</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">d_ff</span><span class="o">=</span><span class="mi">512</span><span class="p">,</span> <span class="n">dropout</span><span class="o">=</span><span class="mf">0.1</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">&#39;gelu&#39;</span><span class="p">)(</span><span class="n">t</span><span class="p">)</span>
<span class="n">output</span><span class="o">.</span><span class="n">shape</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">



<div class="output_text output_subarea output_execute_result">
<pre>torch.Size([16, 50, 128])</pre>
</div>

</div>

</div>
</div>

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

<div class="output_wrapper">
<div class="output">

<div class="output_area">


<div class="output_markdown rendered_html output_subarea ">
<h2 id="TST" class="doc_header"><code>class</code> <code>TST</code><a href="https://github.com/timeseriesAI/tsai/tree/main/tsai/models/TST.py#L131" class="source_link" style="float:right">[source]</a></h2><blockquote><p><code>TST</code>(<strong><code>c_in</code></strong>:<code>int</code>, <strong><code>c_out</code></strong>:<code>int</code>, <strong><code>seq_len</code></strong>:<code>int</code>, <strong><code>max_seq_len</code></strong>:<code>Optional</code>[<code>int</code>]=<em><code>None</code></em>, <strong><code>n_layers</code></strong>:<code>int</code>=<em><code>3</code></em>, <strong><code>d_model</code></strong>:<code>int</code>=<em><code>128</code></em>, <strong><code>n_heads</code></strong>:<code>int</code>=<em><code>16</code></em>, <strong><code>d_k</code></strong>:<code>Optional</code>[<code>int</code>]=<em><code>None</code></em>, <strong><code>d_v</code></strong>:<code>Optional</code>[<code>int</code>]=<em><code>None</code></em>, <strong><code>d_ff</code></strong>:<code>int</code>=<em><code>256</code></em>, <strong><code>dropout</code></strong>:<code>float</code>=<em><code>0.1</code></em>, <strong><code>act</code></strong>:<code>str</code>=<em><code>'gelu'</code></em>, <strong><code>fc_dropout</code></strong>:<code>float</code>=<em><code>0.0</code></em>, <strong><code>y_range</code></strong>:<code>Optional</code>[<code>tuple</code>]=<em><code>None</code></em>, <strong><code>verbose</code></strong>:<code>bool</code>=<em><code>False</code></em>, <strong>**<code>kwargs</code></strong>) :: <code>Module</code></p>
</blockquote>
<p>Same as <code>nn.Module</code>, but no need for subclasses to call <code>super().__init__</code></p>

</div>

</div>

</div>
</div>

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">

</div>
    {% endraw %}

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">bs</span> <span class="o">=</span> <span class="mi">32</span>
<span class="n">c_in</span> <span class="o">=</span> <span class="mi">9</span>  <span class="c1"># aka channels, features, variables, dimensions</span>
<span class="n">c_out</span> <span class="o">=</span> <span class="mi">2</span>
<span class="n">seq_len</span> <span class="o">=</span> <span class="mi">5000</span>

<span class="n">xb</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_in</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)</span>

<span class="c1"># standardize by channel by_var based on the training set</span>
<span class="n">xb</span> <span class="o">=</span> <span class="p">(</span><span class="n">xb</span> <span class="o">-</span> <span class="n">xb</span><span class="o">.</span><span class="n">mean</span><span class="p">((</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">keepdim</span><span class="o">=</span><span class="kc">True</span><span class="p">))</span> <span class="o">/</span> <span class="n">xb</span><span class="o">.</span><span class="n">std</span><span class="p">((</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">keepdim</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>

<span class="c1"># Settings</span>
<span class="n">max_seq_len</span> <span class="o">=</span> <span class="mi">256</span>
<span class="n">d_model</span> <span class="o">=</span> <span class="mi">128</span>
<span class="n">n_heads</span> <span class="o">=</span> <span class="mi">16</span>
<span class="n">d_k</span> <span class="o">=</span> <span class="n">d_v</span> <span class="o">=</span> <span class="kc">None</span> <span class="c1"># if None --&gt; d_model // n_heads</span>
<span class="n">d_ff</span> <span class="o">=</span> <span class="mi">256</span>
<span class="n">dropout</span> <span class="o">=</span> <span class="mf">0.1</span>
<span class="n">activation</span> <span class="o">=</span> <span class="s2">&quot;gelu&quot;</span>
<span class="n">n_layers</span> <span class="o">=</span> <span class="mi">3</span>
<span class="n">fc_dropout</span> <span class="o">=</span> <span class="mf">0.1</span>
<span class="n">kwargs</span> <span class="o">=</span> <span class="p">{}</span>

<span class="n">model</span> <span class="o">=</span> <span class="n">TST</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">,</span> <span class="n">max_seq_len</span><span class="o">=</span><span class="n">max_seq_len</span><span class="p">,</span> <span class="n">d_model</span><span class="o">=</span><span class="n">d_model</span><span class="p">,</span> <span class="n">n_heads</span><span class="o">=</span><span class="n">n_heads</span><span class="p">,</span>
            <span class="n">d_k</span><span class="o">=</span><span class="n">d_k</span><span class="p">,</span> <span class="n">d_v</span><span class="o">=</span><span class="n">d_v</span><span class="p">,</span> <span class="n">d_ff</span><span class="o">=</span><span class="n">d_ff</span><span class="p">,</span> <span class="n">dropout</span><span class="o">=</span><span class="n">dropout</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="n">activation</span><span class="p">,</span> <span class="n">n_layers</span><span class="o">=</span><span class="n">n_layers</span><span class="p">,</span>
            <span class="n">fc_dropout</span><span class="o">=</span><span class="n">fc_dropout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="n">test_eq</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="p">[</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_out</span><span class="p">])</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">&#39;model parameters: </span><span class="si">{</span><span class="n">count_parameters</span><span class="p">(</span><span class="n">model</span><span class="p">)</span><span class="si">}</span><span class="s1">&#39;</span><span class="p">)</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">

<div class="output_subarea output_stream output_stdout output_text">
<pre>model parameters: 517378
</pre>
</div>
</div>

</div>
</div>

</div>
    {% endraw %}
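The standardization step in the cell above computes one mean/std pair per channel, with statistics taken over the batch and time axes. The same operation in NumPy (a sketch mirroring the torch code; in a real workflow the statistics should come from the training set only):

```python
import numpy as np

bs, c_in, seq_len = 32, 9, 5000
rng = np.random.default_rng(0)
xb = rng.normal(loc=3.0, scale=2.0, size=(bs, c_in, seq_len))

# Standardize per channel: one mean/std per variable, computed over the
# batch (axis 0) and time (axis 2) dimensions, keepdims for broadcasting.
mean = xb.mean(axis=(0, 2), keepdims=True)  # shape (1, c_in, 1)
std = xb.std(axis=(0, 2), keepdims=True)
xb = (xb - mean) / std
```

After this step each of the c_in channels has zero mean and unit variance across the batch.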

    {% raw %}
    
<div class="cell border-box-sizing code_cell rendered">
<div class="input">

<div class="inner_cell">
    <div class="input_area">
<div class=" highlight hl-ipython3"><pre><span></span><span class="n">bs</span> <span class="o">=</span> <span class="mi">32</span>
<span class="n">c_in</span> <span class="o">=</span> <span class="mi">9</span>  <span class="c1"># aka channels, features, variables, dimensions</span>
<span class="n">c_out</span> <span class="o">=</span> <span class="mi">2</span>
<span class="n">seq_len</span> <span class="o">=</span> <span class="mi">60</span>

<span class="n">xb</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_in</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">)</span>

<span class="c1"># standardize by channel by_var based on the training set</span>
<span class="n">xb</span> <span class="o">=</span> <span class="p">(</span><span class="n">xb</span> <span class="o">-</span> <span class="n">xb</span><span class="o">.</span><span class="n">mean</span><span class="p">((</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">keepdim</span><span class="o">=</span><span class="kc">True</span><span class="p">))</span> <span class="o">/</span> <span class="n">xb</span><span class="o">.</span><span class="n">std</span><span class="p">((</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">keepdim</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>

<span class="c1"># Settings</span>
<span class="n">max_seq_len</span> <span class="o">=</span> <span class="mi">120</span>
<span class="n">d_model</span> <span class="o">=</span> <span class="mi">128</span>
<span class="n">n_heads</span> <span class="o">=</span> <span class="mi">16</span>
<span class="n">d_k</span> <span class="o">=</span> <span class="n">d_v</span> <span class="o">=</span> <span class="kc">None</span> <span class="c1"># if None --&gt; d_model // n_heads</span>
<span class="n">d_ff</span> <span class="o">=</span> <span class="mi">256</span>
<span class="n">dropout</span> <span class="o">=</span> <span class="mf">0.1</span>
<span class="n">act</span> <span class="o">=</span> <span class="s2">&quot;gelu&quot;</span>
<span class="n">n_layers</span> <span class="o">=</span> <span class="mi">3</span>
<span class="n">fc_dropout</span> <span class="o">=</span> <span class="mf">0.1</span>
<span class="n">kwargs</span> <span class="o">=</span> <span class="p">{}</span>
<span class="c1"># kwargs = dict(kernel_size=5, padding=2)</span>

<span class="n">model</span> <span class="o">=</span> <span class="n">TST</span><span class="p">(</span><span class="n">c_in</span><span class="p">,</span> <span class="n">c_out</span><span class="p">,</span> <span class="n">seq_len</span><span class="p">,</span> <span class="n">max_seq_len</span><span class="o">=</span><span class="n">max_seq_len</span><span class="p">,</span> <span class="n">d_model</span><span class="o">=</span><span class="n">d_model</span><span class="p">,</span> <span class="n">n_heads</span><span class="o">=</span><span class="n">n_heads</span><span class="p">,</span>
            <span class="n">d_k</span><span class="o">=</span><span class="n">d_k</span><span class="p">,</span> <span class="n">d_v</span><span class="o">=</span><span class="n">d_v</span><span class="p">,</span> <span class="n">d_ff</span><span class="o">=</span><span class="n">d_ff</span><span class="p">,</span> <span class="n">dropout</span><span class="o">=</span><span class="n">dropout</span><span class="p">,</span> <span class="n">act</span><span class="o">=</span><span class="n">act</span><span class="p">,</span> <span class="n">n_layers</span><span class="o">=</span><span class="n">n_layers</span><span class="p">,</span>
            <span class="n">fc_dropout</span><span class="o">=</span><span class="n">fc_dropout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="n">test_eq</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xb</span><span class="o">.</span><span class="n">device</span><span class="p">)(</span><span class="n">xb</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="p">[</span><span class="n">bs</span><span class="p">,</span> <span class="n">c_out</span><span class="p">])</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">&#39;model parameters: </span><span class="si">{</span><span class="n">count_parameters</span><span class="p">(</span><span class="n">model</span><span class="p">)</span><span class="si">}</span><span class="s1">&#39;</span><span class="p">)</span>
</pre></div>

    </div>
</div>
</div>

<div class="output_wrapper">
<div class="output">

<div class="output_area">

<div class="output_subarea output_stream output_stdout output_text">
<pre>model parameters: 420226
</pre>
</div>
</div>

</div>
</div>

</div>
    {% endraw %}

</div>
 

