---
layout: post
title: Tensor train
date: '2015-05-11T15:43:00.001-07:00'
author: Alex Rogozhnikov
tags:
- Machine Learning
- Graphical Models
modified_time: '2015-05-11T15:48:30.063-07:00'
blogger_id: tag:blogger.com,1999:blog-307916792578626510.post-7279170098441406116
blogger_orig_url: http://brilliantlywrong.blogspot.com/2015/05/tensor-train.html
---

<img src="https://mdnip.files.wordpress.com/2012/09/tensortrain.jpg?w=500&amp;crop=1" width="600" style="margin: 0px 100px;"/>

<p>
    One possible approach to building graphical models with categorical variables is <a href="http://bayesgroup.ru/wp-content/uploads/2014/05/icml2014_NROV-1.pdf">tensor decomposition</a>.
</p>
<p>
    Notably, both the tensor decomposition itself (the tensor train format) and the method of applying it to graphical models were developed at my faculty, though by different people in different departments.
</p>
<p>
    Another interesting question is the interpretation of the hidden variables that emerge in the middle.
</p>
<p>
    At the moment I'm considering the possibility of building this into a GB-train, since the trained model provides a sequence of binary visible variables for each event. In principle, this can be written quite simply, provided that $x_i$ is the boolean variable corresponding to the $i$-th cut in the tree (in the train, to be more precise).
</p>
<p>
    For instance, one can write the partition function as $$ Z = A_1[x_1, y] A_2[x_2, y] \dots A_n[x_n, y] $$ or as $$ Z = A_1[x_1] B_1[y] A_2[x_2] B_2[y] \dots A_n[x_n] B_n[y] $$
</p>
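<p>
    The first form can be sketched as a chain of matrix products over TT cores. This is only an illustration with random cores and hypothetical shapes: $A_i[x_i, y]$ is taken to be a rank-by-rank matrix, with $1 \times r$ and $r \times 1$ cores at the boundary so the product collapses to a scalar.
</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_targets, rank = 5, 3, 4  # toy sizes, chosen arbitrarily

# cores[i][x_i, y] is a matrix; boundary cores have outer dimension 1.
cores = [rng.random((2, n_targets,
                     1 if i == 0 else rank,
                     rank if i < n - 1 else 1))
         for i in range(n)]

def weight(x, y):
    """Contract A_1[x_1, y] A_2[x_2, y] ... A_n[x_n, y] into a scalar."""
    result = np.eye(1)
    for i, core in enumerate(cores):
        result = result @ core[x[i], y]
    return result[0, 0]

x = [1, 0, 1, 1, 0]
print([weight(x, y) for y in range(n_targets)])
```

<p>
    Each evaluation costs $O(n \cdot r^2)$ matrix-vector work, so scanning all targets stays cheap.
</p>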
<p>
    In both cases it is quite simple to estimate the posterior probability, since there is only a limited set of options (targets)
    to check. But which parameterisation should be preferred, and why, is still an open question for me.
</p>
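<p>
    For the second parameterisation, posterior estimation over the limited set of targets amounts to evaluating the chain for each candidate $y$ and normalising. Again a sketch with random cores and hypothetical shapes, not the actual trained model:
</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_targets, rank = 5, 3, 4  # toy sizes, chosen arbitrarily

# Separate cores for the visible x_i and the target y; boundary
# cores are 1 x rank and rank x 1 so the chain collapses to a scalar.
A = [rng.random((2, 1 if i == 0 else rank, rank)) for i in range(n)]
B = [rng.random((n_targets, rank, rank if i < n - 1 else 1))
     for i in range(n)]

def weight(x, y):
    """Contract A_1[x_1] B_1[y] ... A_n[x_n] B_n[y] into a scalar."""
    result = np.eye(1)
    for i in range(n):
        result = result @ A[i][x[i]] @ B[i][y]
    return result[0, 0]

def posterior(x):
    """Posterior over targets: evaluate each candidate y and normalise."""
    w = np.array([weight(x, y) for y in range(n_targets)])
    return w / w.sum()

x = [1, 0, 1, 1, 0]
p = posterior(x)
print(p)
```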