{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# Evolutionary Hierarchical Dirichlet Processes for Multiple Correlated Time-Varying Corpora"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----------------\n",
    "\n",
    "The notebook below implements the EvoHDP algorithm from the paper by J. Zhang, Y. Song et al., and tests it: <br/>\n",
    "- on the synthetic data described in the paper\n",
    "- on short documents, each with a specific theme\n",
    "- on episode summaries of the series Game Of Thrones <br/>\n",
    "\n",
    "The paper is available at the following link:\n",
    "<br/>\n",
    "http://www.shixialiu.com/publications/evohdp/paper.pdf\n",
    "<br/> <br/>\n",
    "Mathematical details and reminders are given as the code is written.\n",
    "\n",
    "Pointers such as \"see Table x\" or \"see (xx)\" refer to the paper.\n",
    "\n",
    "Most cells end with a commented-out test line for the function implemented just above; it can be un-commented to understand the model step by step.\n",
    "\n",
    "-----------------"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy.stats import multinomial\n",
    "from scipy.special import gammaln\n",
    "import copy\n",
    "import math\n",
    "import mpmath\n",
    "import os \n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "import time\n",
    "from tqdm import tqdm\n",
    "import random\n",
    "import pandas as pd \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# the data are organized as follows: data=[T][J][[doc_t_j_1],[doc_t_j_2],...]"
   ]
  },
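  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tiny hand-made example of this nested layout (our own toy numbers, not from the paper): T=2 epochs, J=1 corpus per epoch, 2 documents of W=3 word counts each."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative layout only: data_example[t][j][i] is the word-count vector of document i\n",
    "data_example = [\n",
    "    [[[1, 0, 2], [0, 3, 1]]],   # t = 0\n",
    "    [[[2, 2, 0], [1, 1, 1]]],   # t = 1\n",
    "]\n",
    "print(len(data_example), len(data_example[0]), len(data_example[0][0][0]))  # 2 1 3"
   ]
  },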
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Experiments on small documents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "stop_words=['l','d','s','de','un', 'une','alors','au','aucuns','aussi','autre','avant','avec','avoir','bon','car','ce','cela','ces','ceux','chaque','ci','comme','comment','dans','des','du','dedans','dehors'\n",
    "    ,'depuis','devrait','doit','donc','dos','début','elle','elles','en','encore','essai','est','et','eu','fait','faites','fois','font','hors','ici','il','ils','je','juste'\n",
    "    ,'la','le','les','leur','là','ma','maintenant','mais','mes','mine','moins','mon','mot','même','ni','nommés','notre','nous','ou','où','par','parce','pas','peut','peu','plupart','pour','pourquoi','quand','que','quel','quelle','quelles','quels','qui','sa','sans','ses','seulement'\n",
    "    ,'si','sien','son','sont','sous','soyez','sujet','sur','ta','tandis','tellement','tels','tes','ton','tous','tout','trop','très','tu','voient','vont','votre','vous','vu','ça','étaient','état','étions','été','être',\"a\",\"abord\",\"absolument\",\"afin\",\"ah\",\"ai\",\"aie\",\"aient\",\"aies\"\n",
    "    ,\"ailleurs\",\"ainsi\",\"ait\",\"allaient\",\"allo\",\"allons\",\"allô\",\"alors\",\"anterieur\",\"anterieure\",\"anterieures\",\"apres\",\"après\",\"as\",\"assez\",\"attendu\",\"au\",\"aucun\",\"aucune\",\"aucuns\",\"aujourd\",\"aujourd'hui\",\"aupres\",\"auquel\",\"aura\",\"aurai\",\"auraient\",\"aurais\",\"aurait\",\"auras\"\n",
    "    ,\"aurez\",\"auriez\",\"aurions\",\"aurons\",\"auront\",\"aussi\",\"autre\",\"autrefois\",\"autrement\",\"autres\",\"autrui\",\"aux\",\"auxquelles\",\"auxquels\",\"avaient\",\"avais\",\"avait\",\"avant\",\"avec\",\"avez\",\"aviez\",\"avions\",\"avoir\",\"avons\",\"ayant\",\"ayez\",\"ayons\",\"b\",\"bah\",\"bas\",\"basee\",\"bat\"\n",
    "    ,\"beau\",\"beaucoup\",\"bien\",\"bigre\",\"bon\",\"boum\",\"bravo\",\"brrr\",\"c\",\"car\",\"ce\",\"ceci\",\"cela\",\"celle\",\"celle-ci\",\"celle-là\",\"celles\",\"celles-ci\",\"celles-là\",\"celui\",\"celui-ci\",\"celui-là\",\"celà\",\"cent\",\"cependant\",\"certain\",\"certaine\",\"certaines\",\"certains\",\"certes\",\"ces\",\"cet\"\n",
    "    ,\"cette\",\"ceux\",\"ceux-ci\",\"ceux-là\",\"chacun\",\"chacune\",\"chaque\",\"cher\",\"chers\",\"chez\",\"chiche\",\"chut\",\"chère\",\"chères\",\"ci\",\"cinq\",\"cinquantaine\",\"cinquante\",\"cinquantième\",\"cinquième\",\"clac\",\"clic\",\"combien\",\"comme\",\"comment\",\"comparable\",\"comparables\",\"compris\",\"concernant\"\n",
    "    ,\"contre\",\"couic\",\"crac\",\"d\",\"da\",\"dans\",\"de\",\"debout\",\"dedans\",\"dehors\",\"deja\",\"delà\",\"depuis\",\"dernier\",\"derniere\",\"derriere\",\"derrière\",\"des\",\"desormais\",\"desquelles\",\"desquels\",\"dessous\",\"dessus\",\"deux\",\"deuxième\",\"deuxièmement\",\"devant\",\"devers\",\"devra\",\"devrait\",\"different\"\n",
    "    ,\"differentes\",\"differents\",\"différent\",\"différente\",\"différentes\",\"différents\",\"dire\",\"directe\",\"directement\",\"dit\",\"dite\",\"dits\",\"divers\",\"diverse\",\"diverses\",\"dix\",\"dix-huit\",\"dix-neuf\",\"dix-sept\",\"dixième\",\"doit\",\"doivent\",\"donc\",\"dont\",\"dos\",\"douze\",\"douzième\",\"dring\",\"droite\"\n",
    "    ,\"du\",\"duquel\",\"durant\",\"dès\",\"début\",\"désormais\",\"e\",\"effet\",\"egale\",\"egalement\",\"egales\",\"eh\",\"elle\",\"elle-même\",\"elles\",\"elles-mêmes\",\"en\",\"encore\",\"enfin\",\"entre\",\"envers\",\"environ\",\"es\",\"essai\",\"est\",\"et\",\"etant\",\"etc\",\"etre\",\"eu\",\"eue\",\"eues\",\"euh\",\"eurent\",\"eus\",\"eusse\"\n",
    "    ,\"eussent\",\"eusses\",\"eussiez\",\"eussions\",\"eut\",\"eux\",\"eux-mêmes\",\"exactement\",\"excepté\",\"extenso\",\"exterieur\",\"eûmes\",\"eût\",\"eûtes\",\"f\",\"fais\",\"faisaient\",\"faisant\",\"fait\",\"faites\",\"façon\",\"feront\",\"fi\",\"flac\",\"floc\",\"fois\",\"font\",\"force\",\"furent\",\"fus\",\"fusse\",\"fussent\",\"fusses\",\"fussiez\"\n",
    "    ,\"fussions\",\"fut\",\"fûmes\",\"fût\",\"fûtes\",\"g\",\"gens\",\"h\",\"ha\",\"haut\",\"hein\",\"hem\",\"hep\",\"hi\",\"ho\",\"holà\",\"hop\",\"hormis\",\"hors\",\"hou\",\"houp\",\"hue\",\"hui\",\"huit\",\"huitième\",\"hum\",\"hurrah\",\"hé\",\"hélas\"\n",
    "    ,\"i\",\"ici\",\"il\",\"ils\",\"importe\",\"j\",\"je\",\"jusqu\",\"jusque\",\"juste\",\"k\",\"l\",\"la\",\"laisser\",\"laquelle\",\"las\",\"le\",\"lequel\",\"les\",\"lesquelles\",\"lesquels\",\"leur\",\"leurs\",\"longtemps\",\"lors\",\"lorsque\",\"lui\",\"lui-meme\",\"lui-même\",\"là\",\"lès\",\"m\",\"ma\",\"maint\",\"maintenant\",\"mais\",\"malgre\",\"malgré\",\"maximale\",\"me\",\"meme\",\"memes\",\"merci\",\"mes\",\"mien\",\"mienne\",\"miennes\",\"miens\",\"mille\",\"mince\",\"mine\",\"minimale\",\"moi\",\"moi-meme\",\"moi-même\",\"moindres\",\"moins\",\"mon\",\"mot\",\"moyennant\",\"multiple\",\"multiples\",\"même\",\"mêmes\",\"n\",\"na\",\"naturel\",\"naturelle\",\"naturelles\",\"ne\",\"neanmoins\",\"necessaire\",\"necessairement\",\"neuf\",\"neuvième\",\"ni\",\"nombreuses\",\"nombreux\",\"nommés\",\"nos\",\"notamment\",\"notre\",\"nous\",\"nous-mêmes\",\"nouveau\",\"nouveaux\",\"nul\",\"néanmoins\",\"nôtre\",\"nôtres\",\"o\",\"oh\",\"ohé\",\"ollé\",\"olé\",\"on\",\"ont\",\"onze\",\"onzième\",\"ore\",\"ou\",\"ouf\",\"ouias\",\"oust\",\"ouste\",\"outre\",\"ouvert\",\"ouverte\",\"ouverts\",\"o|\",\"où\",\"p\",\"paf\",\"pan\",\"par\",\"parce\",\"parfois\",\"parle\",\"parlent\",\"parler\",\"parmi\",\"parole\",\"parseme\",\"partant\",\"particulier\",\"particulière\",\"particulièrement\",\"pas\",\"passé\",\"pendant\",\"pense\",\"permet\",\"personne\",\"personnes\",\"peu\",\"peut\",\"peuvent\",\"peux\",\"pff\",\"pfft\",\"pfut\",\"pif\",\"pire\",\"pièce\",\"plein\",\"plouf\",\"plupart\",\"plus\",\"plusieurs\",\"plutôt\",\"possessif\",\"possessifs\",\"possible\",\"possibles\",\"pouah\",\"pour\",\"pourquoi\",\"pourrais\",\"pourrait\",\"pouvait\",\"prealable\",\"precisement\",\"premier\",\"première\",\"premièrement\",\"pres\",\"probable\",\"probante\",\"procedant\",\"proche\",\"près\",\"psitt\",\"pu\",\"puis\",\"puisque\",\"pur\",\"pure\",\"q\",\"qu\",\"quand\",\"quant\",\"quant-à-soi\",\"quanta\",\"quarante\",\"quatorze\",\"quatre\",\"quatre-vingt\",\"quatrième\",\"quatrièmement\",\"que\",\"quel\",\"quelconque\",\"quelle\",\"quelles\",\"quelqu'un\",\"quelque\",\"quelques\",\"quels\",\"qui\",\"quiconque\",\"quinze\",\"quoi\",\"quoique\",\"r\",\"rare\",\"rarement\",\"rares\",\"relative\",\"relativement\",\"remarquable\",\"rend\",\"rendre\",\"restant\",\"reste\",\"restent\",\"restrictif\",\"retour\",\"revoici\",\"revoilà\",\"rien\",\"s\",\"sa\",\"sacrebleu\",\"sait\",\"sans\",\"sapristi\",\"sauf\",\"se\",\"sein\",\"seize\",\"selon\",\"semblable\",\"semblaient\",\"semble\",\"semblent\",\"sent\",\"sept\",\"septième\",\"sera\",\"serai\",\"seraient\",\"serais\",\"serait\",\"seras\",\"serez\",\"seriez\",\"serions\",\"serons\",\"seront\",\"ses\",\"seul\",\"seule\",\"seulement\",\"si\",\"sien\",\"sienne\",\"siennes\",\"siens\",\"sinon\",\"six\",\"sixième\",\"soi\",\"soi-même\",\"soient\",\"sois\",\"soit\",\"soixante\",\"sommes\",\"son\",\"sont\",\"sous\",\"souvent\",\"soyez\",\"soyons\",\"specifique\",\"specifiques\",\"speculatif\",\"stop\",\"strictement\",\"subtiles\",\"suffisant\",\"suffisante\",\"suffit\",\"suis\",\"suit\",\"suivant\",\"suivante\",\"suivantes\",\"suivants\",\"suivre\",\"sujet\",\"superpose\",\"sur\",\"surtout\",\"t\",\"ta\",\"tac\",\"tandis\",\"tant\",\"tardive\",\"te\",\"tel\",\"telle\",\"tellement\",\"telles\",\"tels\",\"tenant\",\"tend\",\"tenir\",\"tente\",\"tes\",\"tic\",\"tien\",\"tienne\",\"tiennes\",\"tiens\",\"toc\",\"toi\",\"toi-même\",\"ton\",\"touchant\"\n",
    "    ,\"toujours\",\"tous\",\"tout\",\"toute\",\"toutefois\",\"toutes\",\"treize\",\"trente\",\"tres\",\"trois\",\"troisième\",\"troisièmement\",\"trop\",\"très\",\"tsoin\",\"tsouin\",\"tu\",\"té\",\"u\",\"un\",\"une\",\"unes\",\"uniformement\",\"unique\",\"uniques\",\"uns\",\"v\",\"va\",\"vais\",\"valeur\",\"vas\",\"vers\",\"via\",\"vif\",\"vifs\",\"vingt\",\"vivat\",\"vive\",\"vives\",\"vlan\",\"voici\",\"voie\",\"voient\",\"voilà\",\"vont\",\"vos\",\"votre\",\"vous\",\"vous-mêmes\",\"vu\",\"vé\",\"vôtre\",\"vôtres\",\"w\",\"x\",\"y\",\"z\",\"zut\",\"à\",\"â\",\"ça\",\"ès\",\"étaient\",\"étais\",\"était\",\"étant\",\"état\",\"étiez\",\"étions\",\"été\",\"étée\",\"étées\",\"étés\",\"êtes\",\"être\",\"ô\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def generate_easy_data(stop_words):\n",
    "    path_doc='Test_musique_sport/ex_simple'\n",
    "    doc_file='Test_musique_sport/name_basket'\n",
    "    filename=open(doc_file,'r').readlines()\n",
    "    filename = [os.path.join(path_doc,filename[i].replace('\\n','')) for i in range(len(filename))]\n",
    "   \n",
    "    vectorizer=CountVectorizer(input='filename',max_df=0.9,stop_words=stop_words)\n",
    "    tf=vectorizer.fit_transform(filename).todense() #tf for documents\n",
    "    name_word=vectorizer.get_feature_names() # use get_feature_names_out() on recent scikit-learn\n",
    "    d11=[tf[4,:].tolist()[0],tf[5,:].tolist()[0]]\n",
    "    d12=[tf[6,:].tolist()[0],tf[7,:].tolist()[0]]\n",
    "    d21=[tf[10,:].tolist()[0],tf[11,:].tolist()[0]]\n",
    "    d22=[tf[8,:].tolist()[0],tf[9,:].tolist()[0]]\n",
    "    d31=[tf[0,:].tolist()[0],tf[1,:].tolist()[0]]\n",
    "    d32=[tf[2,:].tolist()[0],tf[3,:].tolist()[0]]\n",
    "    data=[[d11,d12],[d21,d22],[d31,d32]]\n",
    "    T=len(data)\n",
    "    J=len(data[0])\n",
    "    W=len(data[0][0][0])\n",
    "    return(data,T,J,W,name_word)\n",
    "\n",
    "#data,T,J,W,name_word=generate_easy_data(stop_words)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Experiments on GOT documents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def generate_GOT_data(stop_words):\n",
    "    path_doc='Test_GOT/txt_GOT'\n",
    "    doc_file='Test_GOT/name_GOT_txt'\n",
    "    filename=open(doc_file,'r').readlines()\n",
    "    filename = [os.path.join(path_doc,filename[i].replace('\\n','')) for i in range(len(filename))]\n",
    "    \n",
    "    vectorizer=CountVectorizer(input='filename',max_df=0.9,stop_words=stop_words)\n",
    "    tf=vectorizer.fit_transform(filename).todense() #tf for documents\n",
    "    name_word=vectorizer.get_feature_names() # use get_feature_names_out() on recent scikit-learn\n",
    "    data=[]\n",
    "    name_doc=[]\n",
    "    for i in range(6):\n",
    "        docs_tempsi=[]\n",
    "        name_tempsi=[]\n",
    "        for j in range(10):\n",
    "            docs_tempsi.append(tf[i*10+j,:].tolist()[0])\n",
    "            name_tempsi.append(filename[i*10 + j])\n",
    "        data.append([docs_tempsi])\n",
    "        name_doc.append(name_tempsi)\n",
    "    docs_tempsi=[]\n",
    "    name_tempsi=[]\n",
    "    for j in range(7):\n",
    "        docs_tempsi.append(tf[6*10+j,:].tolist()[0])\n",
    "        name_tempsi.append(filename[6*10 + j])\n",
    "    data.append([docs_tempsi])\n",
    "    name_doc.append(name_tempsi)\n",
    "    T=len(data)\n",
    "    J=len(data[0])\n",
    "    W=len(data[0][0][0])\n",
    "    return(data,name_doc,T,J,W,name_word)\n",
    "\n",
    "#data,name_doc,T,J,W,name_word=generate_GOT_data(stop_words)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Experiments on synthetic data "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The synthetic data are a mixture of multinomials, with parameters $\\phi_k$ given in Table 1 and reproduced below.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "true_phi=np.zeros((8,2))\n",
    "true_phi[0]=[0.1,0.9]\n",
    "true_phi[1]=[0.2,0.8]\n",
    "true_phi[2]=[0.3,0.7]\n",
    "true_phi[3]=[0.4,0.6]\n",
    "true_phi[4]=[0.5,0.5]\n",
    "true_phi[5]=[0.6,0.4]\n",
    "true_phi[6]=[0.7,0.3]\n",
    "true_phi[7]=[0.8,0.2]\n",
    "T=4\n",
    "J=3\n",
    "W=2\n",
    "K=40"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# We build a list info_data=[T][J][local_components,size_corpora]\n",
    "corpora_sizes=[[500,300,400],[510,320,430],[520,320,430],[530,340,450]]\n",
    "def local_components_and_corpora_sizes(T,J,corpora_sizes):\n",
    "    info_data=[]\n",
    "    for t in range(T):\n",
    "        info_data_t=[]\n",
    "        for j in range(J):\n",
    "            info_data_j=[]\n",
    "            for k in range(3):\n",
    "                info_data_j.append(j+k+t)\n",
    "            info_data_j.append(corpora_sizes[t][j])\n",
    "            info_data_t.append(info_data_j)\n",
    "        info_data.append(info_data_t)   \n",
    "    return(info_data)\n",
    "#info_data=local_components_and_corpora_sizes(T,J,corpora_sizes)\n",
    "#info_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#data=[T][J][[doc_t_j_1],[doc_t_j_2],...]\n",
    "def mixture_of_three_multinomial(liste_of_phi_indices,true_phi,corpora_size,z):\n",
    "    \n",
    "    mult1=np.random.multinomial(200,true_phi[liste_of_phi_indices[0]],size=z[0]).tolist()\n",
    "    mult2=np.random.multinomial(200,true_phi[liste_of_phi_indices[1]],size=z[1]).tolist()\n",
    "    mult3=np.random.multinomial(200,true_phi[liste_of_phi_indices[2]],size=z[2]).tolist()\n",
    "    mixt_mult_float=np.concatenate((np.concatenate((mult1,mult2),axis=0),mult3),axis=0)\n",
    "    #print(mixt_mult_float)\n",
    "    mixt_mult_int=[[mixt_mult_float[t][j].tolist() for j in range(len(mixt_mult_float[t]))] for t in range(len(mixt_mult_float))]\n",
    "    return(mixt_mult_int)\n",
    "\n",
    "def generate_data_from_mixture_of_multinomials(T,J,info_data,true_phi):\n",
    "    data=[]\n",
    "    for t in range(T):\n",
    "        data_t=[]\n",
    "        for j in range(J):\n",
    "            z=np.random.multinomial(info_data[t][j][3],[1/3,1/3,1/3])           \n",
    "            doc_t_j=mixture_of_three_multinomial(info_data[t][j],true_phi,info_data[t][j][3],z)\n",
    "            data_t.append(doc_t_j)\n",
    "        #data_t.append(data_j)\n",
    "        data.append(data_t)\n",
    "    return(data)\n",
    "#data=generate_data_from_mixture_of_multinomials(T,J,info_data,true_phi)\n",
    "#data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The goal of this experiment is to recover the \"true_phi\" through the EvoHDP algorithm. <br/>These \"true_phi\" were used to generate our data. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Initialize hyper parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For initialization, the model is a three-level HDP:\n",
    "\n",
    "$$ H \\sim Dir ( 1/W) $$ \n",
    "$$ G \\sim DP(\\xi , H) $$\n",
    "\n",
    "For each time step:\n",
    "\n",
    "$$ \\forall t \\in T $$\n",
    "$$ G_{0}^t \\sim DP(\\gamma^t , G) $$\n",
    "\n",
    "For each corpus:\n",
    "\n",
    "$$ \\forall j \\in J $$\n",
    "$$ G_{j}^t \\sim DP(\\alpha_{0}^t , G_{0}^t) $$\n",
    "\n",
    "\n",
    "To initialize the parameters, we must sample:\n",
    "$$ \\xi \\sim Gamma(10,1) $$ \n",
    "For each time step:\n",
    "$$ \\forall t \\in T $$\n",
    "$$ \\gamma^t \\sim Gamma(10,1) $$\n",
    "$$ \\alpha_{0}^t \\sim Gamma(10,1) $$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Initialize parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$ \\xi \\sim Gamma(10,1) $$ "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "a_xi=10\n",
    "b_xi=1\n",
    "xi=np.random.gamma(a_xi,b_xi)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For each time step:\n",
    "$$ \\forall t \\in T $$\n",
    "$$ \\gamma^t \\sim Gamma(10,1) $$\n",
    "$$ \\alpha_{0}^t \\sim Gamma(10,1) $$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "T=4\n",
    "a_gamma=10\n",
    "b_gamma=1\n",
    "a_alpha=10\n",
    "b_alpha=1\n",
    "\n",
    "gamma=[np.random.gamma(a_gamma,b_gamma) for i in range(T)]\n",
    "alpha=[np.random.gamma(a_alpha,b_alpha) for i in range(T)]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We set the time dependencies $v^t=w^t=a$ with $a \\in \\{0.1,0.3,0.5,0.7,0.9\\}$ and will study the impact of this variable. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# here a=0.8 (note: T*[...] repeats the same list object; fine since v is never mutated)\n",
    "v=T*[K*[0.8]]\n",
    "w=T*[0.8]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generate from stick-breaking for initialization of measures"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$ H \\sim Dir ( 1/W) $$ \n",
    "$$ G \\sim DP(\\xi , H) $$\n",
    "\n",
    "$$ G = \\sum_{k=1}^{\\infty} \\nu_k \\delta_{\\phi_k} $$\n",
    "where:\n",
    "$$ \\nu \\sim GEM(\\xi) $$ and: $$ \\phi_k \\sim H $$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def stick_breaking(alpha, k, size_W):\n",
    "    # draw a length-k truncated GEM(alpha) weight vector (size_W is unused)\n",
    "    if(alpha <= 0): raise ValueError(\"alpha must be positive\")\n",
    "    betas = np.random.beta(1, alpha, k)\n",
    "    produit_1_beta = np.append(1, np.cumprod(1 - betas[:-1]))\n",
    "    p = betas * produit_1_beta\n",
    "    return(p/p.sum())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#nu=stick_breaking(xi,K,W)\n",
    "#nu"
   ]
  },
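  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cell above only draws the weights $\\nu$; the atoms $\\phi_k \\sim H$ are not sampled here. A minimal sketch of that step (our addition, with hypothetical sizes `W_demo`=5 and `K_demo`=10, assuming the symmetric base measure $H = Dir(1/W)$):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "W_demo, K_demo = 5, 10  # hypothetical sizes, for illustration only\n",
    "# each row is one topic phi_k ~ Dir(1/W,...,1/W), a distribution over the W words\n",
    "phi_demo = np.random.dirichlet([1.0/W_demo]*W_demo, size=K_demo)\n",
    "print(phi_demo.shape)  # (10, 5)"
   ]
  },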
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$ G = \\sum_{k=1}^{\\infty} \\nu_k \\delta_{\\phi_k} $$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once G is sampled, we know that:\n",
    "$$ G_{0}^t \\sim DP( \\gamma^{t} , G) $$\n",
    "Following the stick-breaking approach:\n",
    "$$ G_{0}^t = \\sum_{k=1}^\\infty \\beta_{k}^t\\delta_{\\phi_k} $$\n",
    "where\n",
    "$$ \\beta^t \\sim DP(\\gamma^t,\\hat{\\beta}^t)$$$$\\hat{\\beta}^t=w^t\\beta^{t-1}+(1-w^t)\\nu$$$$  \\nu\\sim GEM(\\xi) $$ \n",
    "To sample $ G_{0}^t$, we rely on the two following properties: <br/> <br/>\n",
    "I. By the **normalization property** of the Dirichlet distribution: <br/><br/>\n",
    "**If** $$ (X_1,...,X_d)\\sim Dirichlet(\\alpha_1,...,\\alpha_d) $$\n",
    "**then, for k $\\leq$ d**\n",
    "$$ \\dfrac{(X_1,...,X_k)}{\\sum_{i\\leq k}X_i} \\sim Dirichlet(\\alpha_1,...,\\alpha_k) $$\n",
    "<br/>\n",
    "II. Link between **the Dirichlet distribution and the Beta distribution**:<br/><br/>\n",
    "**If** $$(X_1,...,X_d)\\sim Dir(\\alpha_1,...,\\alpha_d)$$ \n",
    "**then** $$\\forall i \\in [1,d],\n",
    "X_i \\sim Beta(\\alpha_i,\\alpha-\\alpha_i),\\alpha=\\sum_{j=1}^d\\alpha_j$$ <br/><br/>\n",
    "\n",
    "Hence: <br/>\n",
    "$$ \\frac{\\beta_k^t}{1-\\sum_{i<k}\\beta_i^t} \\sim Beta(\\gamma^t\\hat{\\beta}_k,\\gamma^t(1-\\sum_{i\\leq k}\\hat{\\beta}_i))$$<br/>\n",
    "Stick-breaking gives:\n",
    "$$ \\tilde{\\beta}_k^t \\mid \\hat{\\beta}_1,...,\\hat{\\beta}_k  \\sim Beta(\\gamma^t\\hat{\\beta}_k,\\gamma^t(1-\\sum_{i\\leq k}\\hat{\\beta}_i)) $$ \n",
    "$$ \\beta_k^t=\\tilde{\\beta}_k^t\\prod_{i<k}(1-\\tilde{\\beta}_i^t)$$<br/><br/>\n",
    "\n"
   ]
  },
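  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Property II can be checked empirically; the quick sanity cell below (our addition, with arbitrary parameters $\\alpha=(2,3,5)$) verifies that the empirical mean of $X_1$ is close to $\\alpha_1/\\sum_j\\alpha_j = 0.2$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "# Property II: the marginal X_1 of Dir(2,3,5) is Beta(2,8), so E[X_1] = 2/10\n",
    "samples = np.random.dirichlet([2.0, 3.0, 5.0], size=20000)\n",
    "print(samples[:, 0].mean())"
   ]
  },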
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def dirichlet_generate_random(params_dirich):\n",
    "    if(type(params_dirich)==list):\n",
    "        params_dirich=np.array(params_dirich)\n",
    "    liste_indice_non_zero=np.nonzero(params_dirich)\n",
    "    param_non_zero=params_dirich[params_dirich>0]\n",
    "    rand_dir=np.random.dirichlet(param_non_zero)\n",
    "    random_finale=np.zeros((len(params_dirich)))\n",
    "    random_finale[liste_indice_non_zero]=rand_dir\n",
    "    return(random_finale)\n",
    "\n",
    "def beta_generate_random(params_beta):\n",
    "    if(type(params_beta)==list):\n",
    "        params_beta=np.array(params_beta)\n",
    "    if(len(params_beta)!=2):\n",
    "        print(\"ERROR: the Beta distribution expects exactly 2 parameters\")\n",
    "    if(params_beta[0]<=0):\n",
    "        return(1e-10)\n",
    "    if(params_beta[1]<=0):\n",
    "        return(1-1e-10)\n",
    "    random_final=np.random.beta(params_beta[0],params_beta[1])\n",
    "    if(random_final<=0):\n",
    "        random_final=1e-10\n",
    "    elif(random_final>=1) :\n",
    "        random_final=1-1e-10\n",
    "    return(random_final)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# see (8)\n",
    "def initialize_G_0_t (gamma,nu,T,K,w):\n",
    "    G_0_t=[]\n",
    "    for t in range(T):\n",
    "        # base weights: nu at t=0, otherwise w^t * beta^{t-1} + (1-w^t) * nu\n",
    "        if(t==0):\n",
    "            beta_hat=np.array(nu)\n",
    "        else:\n",
    "            beta_hat=w[t]*np.array(G_0_t[t-1])+(1-w[t])*np.array(nu)\n",
    "        beta_t=[]\n",
    "        beta_tilde_t=[]\n",
    "        for k in range(K):\n",
    "            params_beta=[gamma[t]*beta_hat[k],gamma[t]*(1-np.sum(beta_hat[:k+1]))]\n",
    "            beta_tilde_k_t=beta_generate_random(params_beta)\n",
    "            beta_tilde_t.append(beta_tilde_k_t)\n",
    "            beta_k_t=beta_tilde_k_t*np.prod(1-np.array(beta_tilde_t[:k]))\n",
    "            beta_t.append(beta_k_t)\n",
    "        G_0_t.append((np.array(beta_t)/np.sum(beta_t)).tolist())\n",
    "    return(G_0_t)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#beta=initialize_G_0_t (gamma,nu,T,K,w)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we sample $G_j^t$ $$ \\forall t \\in T, \\forall j \\in J$$  \n",
    "$$ G_j^t=\\sum_{k=1}^{\\infty}\\pi_{jk}^t\\delta_{\\phi_k}$$ $$\\pi_j^t\\sim DP(\\alpha_0^t,\\hat{\\pi}^t_j)$$\n",
    "\n",
    "$$\\hat{\\pi}^t_j=v_j^t\\pi_j^{t-1}+(1-v_j^t)\\beta^t$$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# see (9)\n",
    "def initialize_G_j_t (G_0_t,alpha,J,T,K,v):\n",
    "    G_j_T=[]\n",
    "    for t in range(T):\n",
    "        G_j_t=[]\n",
    "        for j in range(J):\n",
    "            # base weights: beta^t at t=0, otherwise v_j^t * pi_j^{t-1} + (1-v_j^t) * beta^t\n",
    "            if(t==0):\n",
    "                alpha_hat=np.array(G_0_t[t])\n",
    "            else:\n",
    "                alpha_hat=v[t][j]*np.array(G_j_T[t-1][j])+(1-v[t][j])*np.array(G_0_t[t])\n",
    "            alpha_j_t=[]\n",
    "            alpha_tilde_j_t=[]\n",
    "            for k in range(K):\n",
    "                params_beta=[alpha[t]*alpha_hat[k],alpha[t]*(1-np.sum(alpha_hat[:k+1]))]\n",
    "                alpha_tilde_k_t=beta_generate_random(params_beta)\n",
    "                alpha_tilde_j_t.append(alpha_tilde_k_t)\n",
    "                alpha_k_t=alpha_tilde_k_t*np.prod(1-np.array(alpha_tilde_j_t[:k]))\n",
    "                alpha_j_t.append(alpha_k_t)\n",
    "            G_j_t.append((np.array(alpha_j_t)/np.sum(alpha_j_t)).tolist())\n",
    "        G_j_T.append(G_j_t)\n",
    "    return(G_j_T)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#pi=initialize_G_j_t(beta,alpha,J,T,K,v)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now obtain $n_{jk}^t$, the number of documents of corpus j at time t assigned to topic k (i.e. # $z_{ij}^t$ : $z_{ij}^t=k$) <br/> \n",
    "The function \"compute_n_t_j\" computes $n_{jk}^t, \\forall k \\in K$ and returns a list of length K"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def compute_n_t_j(K,liste_des_Z_temps_t_corpus_j):\n",
    "    n_t_j=[]\n",
    "    for k in range(K):\n",
    "        n_t_j.append(liste_des_Z_temps_t_corpus_j.count(k))\n",
    "    return(n_t_j)"
   ]
  },
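  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A one-line check of the counting logic above (a self-contained restatement on toy topic assignments, our addition):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# same counting logic as compute_n_t_j, on toy assignments with K=4 topics\n",
    "Z_t_j_demo = [0, 1, 1, 3, 3, 3]\n",
    "n_demo = [Z_t_j_demo.count(k) for k in range(4)]\n",
    "print(n_demo)  # [1, 2, 0, 3]"
   ]
  },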
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the $\\pi_{jk}^t$ are initialized,\n",
    "we initialize the Z randomly.\n",
    "\n",
    "The function \"compute_Z_j_t\" computes the normalized probabilities of a document at time t, for corpus j.<br/>\n",
    "The function \"compute_proba_z_i_j_t_is_k\" extends this computation to all times and corpora.<br/>\n",
    "The function \"log_proba_mult\" is not used but can help avoid rounding issues.<br/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br/> For initialization, Z is drawn without any posterior information"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def randomly_assign_Z_initialisation(T,J,K,data):\n",
    "    Z=[]\n",
    "    n=[]\n",
    "    for t in range(T):\n",
    "        Z_t=[]\n",
    "        n_t=[]\n",
    "        for j in range(J):\n",
    "            Z_t_j=list(np.nonzero(np.random.multinomial(1,[1/K]*K,len(data[t][j])))[1])\n",
    "            Z_t.append(Z_t_j)\n",
    "            n_t.append(compute_n_t_j(K,Z_t_j))\n",
    "        Z.append(Z_t)\n",
    "        n.append(n_t)\n",
    "    return(Z,n)\n",
    "#Z,N=randomly_assign_Z_initialisation(T,J,K,data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The function \"compute_T_jk_t_tplus1_et_T_jk_0_tplus1_multinomiale\" computes:   <br/> <br/> $$(T_{jk}^{t \\Rightarrow t+1},T_{jk}^{0 \\Rightarrow t+1}) \\sim Multinomial (T_{jk}^{t+1},[p,1-p]),(22)$$  <br/> with $$p=\\frac{v_j^{t+1}\\pi_{jk}^t}{(1-v_j^{t+1})\\beta_k^{t+1} + v_j^{t+1}\\pi_{jk}^t} $$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def compute_T_jk_t_tplus1_et_T_jk_0_tplus1_multinomiale(T_jk_Tplus1,v_j_Tplus1,pi_jk_t,beta_k_Tplus1):\n",
    "    if(((1-v_j_Tplus1)*beta_k_Tplus1+v_j_Tplus1*pi_jk_t)!=0):\n",
    "        p=(v_j_Tplus1*pi_jk_t)/((1-v_j_Tplus1)*beta_k_Tplus1+v_j_Tplus1*pi_jk_t)\n",
    "    else:\n",
    "        p=0\n",
    "    T_jk_t_tplus1,T_jk_0_tplus1=np.random.multinomial(T_jk_Tplus1, [p,1-p])\n",
    "    return(T_jk_t_tplus1,T_jk_0_tplus1)\n",
    "\n",
    "def compute_M_jk_t_tplus1_et_M_jk_0_tplus1_multinomiale(M_k_Tplus1,w_Tplus1,beta_k_t,nu_k):\n",
    "    if(((1-w_Tplus1)*nu_k+w_Tplus1*beta_k_t)!=0):\n",
    "        q=(w_Tplus1*beta_k_t)/(((1-w_Tplus1)*nu_k)+(w_Tplus1*beta_k_t))\n",
    "    else:\n",
    "        q=0\n",
    "    M_k_t_tplus1,Mk_0_tplus1=np.random.multinomial(M_k_Tplus1, [q,1-q])\n",
    "    return(M_k_t_tplus1,Mk_0_tplus1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$ N_{jk}^t=n_{jk}^t+T_{jk}^{t \\Rightarrow t+1}$$\n",
    "$n_{jk}^t$ is the number of documents of corpus j assigned to topic k at time t <br/>\n",
    "$T_{jk}^{t \\Rightarrow t+1}$ is the number of tables created with the menus from time t. <br/><br/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$ T_{jk}^t \\mid \\beta_k^t,\\pi_{jk}^{t-1},N_{jk}^t\\sim   CRP   (\\alpha_0^t v_j^t\\pi_{jk}^{t-1} + \\alpha_0^t(1- v_j^t)\\beta_k^t,N_{jk}^t)$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Generate table assignments for `num_customers` customers, according to\n",
    "# a Chinese Restaurant Process with dispersion parameter `alpha`.\n",
    "def chinese_restaurant_process(num_customers, alpha):\n",
    "    if (num_customers <= 0 or alpha<0) :\n",
    "        return(0)\n",
    "    elif(alpha==0):\n",
    "        #print(\"alpha == 0\")\n",
    "        return(0)\n",
    "    else :\n",
    "        T_jk_t=0\n",
    "        for i in range(num_customers):        \n",
    "            if(np.random.rand()<alpha/(alpha+i)):\n",
    "                T_jk_t+=1\n",
    "    return(T_jk_t)\n",
    "T_jk_t=chinese_restaurant_process(100,15)   \n",
    "\n",
    "#num_customers=248\n",
    "#alpha_test=alpha[0]\n",
    "#T_jk_t=chinese_restaurant_process(num_customers,alpha_test)\n",
    "#T_jk_t"
   ]
  },
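To sanity-check the sequential CRP above, here is a hedged, vectorized sketch (names are ours) drawing the same table-count distribution and comparing its empirical mean to the exact expectation $\sum_{i=0}^{n-1} \alpha/(\alpha+i)$:

```python
import numpy as np

def crp_table_count(num_customers, alpha, rng):
    # customer i opens a new table with probability alpha / (alpha + i)
    if num_customers <= 0 or alpha <= 0:
        return 0
    opens = rng.random(num_customers) < alpha / (alpha + np.arange(num_customers))
    return int(opens.sum())

rng = np.random.default_rng(0)
draws = [crp_table_count(1000, 5.0, rng) for _ in range(200)]
exact_mean = (5.0 / (5.0 + np.arange(1000))).sum()  # ~ alpha * log(1 + n/alpha)
print(np.mean(draws), exact_mean)
```

The empirical mean of the draws should sit close to the exact value, which grows only logarithmically in the number of customers.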
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "La fonction suivante retourne une liste (dim K) de listes (dim 3) contenant  : $$T_{jk}^{t\\Rightarrow t+1},T_{jk}^{0\\Rightarrow t+1},T_{jk}^t$$\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def compute_T_tP1_T_0t_Tjkt(temps0,tempsT,T_jk_ttp1,n_jk_t,v_j_t,pi_jk_T_moins1,beta_k_T,alpha_t):\n",
    "    T_3=[]\n",
    "    Nu=[]\n",
    "    if(temps0):\n",
    "        for k in range(len(n_jk_t)):\n",
    "            Nu_jk_t=n_jk_t[k]+T_jk_ttp1[k]\n",
    "            Nu.append(Nu_jk_t)\n",
    "            param_CRP=(alpha_t*beta_k_T[k])\n",
    "            T_0_jk=chinese_restaurant_process(Nu_jk_t,param_CRP)\n",
    "            T_3.append([0,T_0_jk,T_0_jk])\n",
    "    elif(tempsT):\n",
    "        for k in range(len(n_jk_t)):\n",
    "            Nu_jk_t=n_jk_t[k]\n",
    "            Nu.append(Nu_jk_t)\n",
    "            param_CRP=(alpha_t*v_j_t[k]*pi_jk_T_moins1[k])+(alpha_t*(1-v_j_t[k])*beta_k_T[k])\n",
    "            T_0_jk=chinese_restaurant_process(Nu_jk_t,param_CRP)\n",
    "            T_jk_t_tplus1,T_jk_0_tplus1=compute_T_jk_t_tplus1_et_T_jk_0_tplus1_multinomiale(T_0_jk,v_j_t[k],pi_jk_T_moins1[k],beta_k_T[k])\n",
    "            T_3.append([T_jk_t_tplus1,T_jk_0_tplus1,T_0_jk])\n",
    "    else:\n",
    "        for k in range(len(n_jk_t)):\n",
    "            Nu_jk_t=n_jk_t[k]+T_jk_ttp1[k]\n",
    "            Nu.append(Nu_jk_t)\n",
    "            param_CRP=(alpha_t*v_j_t[k]*pi_jk_T_moins1[k])+(alpha_t*(1-v_j_t[k])*beta_k_T[k])\n",
    "            T_0_jk=chinese_restaurant_process(Nu_jk_t,param_CRP)\n",
    "            T_jk_t_tplus1,T_jk_0_tplus1=compute_T_jk_t_tplus1_et_T_jk_0_tplus1_multinomiale(T_0_jk,v_j_t[k],pi_jk_T_moins1[k],beta_k_T[k])\n",
    "            T_3.append([T_jk_t_tplus1,T_jk_0_tplus1,T_0_jk])\n",
    "    return(T_3,Nu)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Les fonctions suivantes calculent les Métatables :\n",
    "<br/> <br/> La dimension de Métatable est une liste : T*K*3\n",
    "\n",
    "La fonction \"compute_T_jk_t_tplus1_et_T_jk_0_tplus1_multinomiale\" calule :   <br\\> <br\\> $$(M_{k}^{t \\Rightarrow t+1},M_{k}^{0 \\Rightarrow t+1}) \\sim Multinomiale (M_{k}^{t+1},[q,1-q]),(25)$$  <br\\> avec $$q=\\frac{w^{t+1}\\beta_{k}^t}{(1-w^{t+1})\\nu_k + w^{t+1}\\beta{k}^t} $$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def compute_M_tP1_T_0t_Tjkt(t,temps0,tempsT,M_k_ttp1,Tables,w_t,beta_tmoins1_k,gamma_t,nu,K):\n",
    "    M_3=[]\n",
    "    Tau=[]\n",
    "    if(temps0):\n",
    "        for k in range(K):\n",
    "            Tau_t_k=np.sum(np.array(Tables)[t,:,k,1])+M_k_ttp1[k]\n",
    "            Tau.append(Tau_t_k)\n",
    "            param_CRP=(gamma_t*nu[k])\n",
    "            M_tk=chinese_restaurant_process(Tau_t_k,param_CRP)\n",
    "            M_3.append([0,M_tk,M_tk])\n",
    "    elif(tempsT):\n",
    "        for k in range(K):\n",
    "            Tau_t_k=np.sum(np.array(Tables)[t,:,k,1])\n",
    "            Tau.append(Tau_t_k)\n",
    "            param_CRP=(gamma_t*w_t*beta_tmoins1_k[k])+(gamma_t*(1-w_t)*nu[k])\n",
    "            M_tk=chinese_restaurant_process(Tau_t_k,param_CRP)\n",
    "            M_jk_t_tplus1,M_jk_0_tplus1=compute_M_jk_t_tplus1_et_M_jk_0_tplus1_multinomiale(M_tk,w_t,beta_tmoins1_k[k],nu[k])\n",
    "            M_3.append([M_jk_t_tplus1,M_jk_0_tplus1,M_tk])\n",
    "    else:\n",
    "        for k in range(K):\n",
    "            Tau_t_k=np.sum(np.array(Tables)[t,:,k,1])+M_k_ttp1[k]\n",
    "            Tau.append(Tau_t_k)\n",
    "            param_CRP=(gamma_t*w_t*beta_tmoins1_k[k])+(gamma_t*(1-w_t)*nu[k])\n",
    "            M_tk=chinese_restaurant_process(Tau_t_k,param_CRP)\n",
    "            M_jk_t_tplus1,M_jk_0_tplus1=compute_M_jk_t_tplus1_et_M_jk_0_tplus1_multinomiale(M_tk,w_t,beta_tmoins1_k[k],nu[k])\n",
    "            M_3.append([M_jk_t_tplus1,M_jk_0_tplus1,M_tk])\n",
    "    return(M_3,Tau)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def compute_Tables_Metatables(T,J,v,w,pi,beta,alpha,gamma,nu,n,K):\n",
    "    Table=[]\n",
    "    MetaTable=[]\n",
    "    Nu=[]\n",
    "    for t in range(T-1,-1,-1):\n",
    "        T_t=[]\n",
    "        Nu_t=[]\n",
    "        temps_info=t\n",
    "        if (temps_info==T-1): \n",
    "            for j in range(J):\n",
    "                T_tj,Nu_t_j=compute_T_tP1_T_0t_Tjkt(0,1,0,n[t][j],v[t],pi[t-1][j],beta[t],alpha[t])\n",
    "                T_t.append(T_tj)\n",
    "                Nu_t.append(Nu_t_j)\n",
    "        elif(temps_info==0):\n",
    "            for j in range(J):\n",
    "                T_tj,Nu_t_j=compute_T_tP1_T_0t_Tjkt(1,0,np.array(Table)[T-t-2,j,:,0],n[t][j],v[t],0,beta[t],alpha[t])\n",
    "                T_t.append(T_tj)\n",
    "                Nu_t.append(Nu_t_j)\n",
    "        else:\n",
    "            for j in range(J):\n",
    "                T_tj,Nu_t_j=compute_T_tP1_T_0t_Tjkt(0,0,np.array(Table)[T-t-2,j,:,0],n[t][j],v[t],pi[t-1][j],beta[t],alpha[t])\n",
    "                Nu_t.append(Nu_t_j)\n",
    "                T_t.append(T_tj)   \n",
    "        Table.append(T_t)\n",
    "        Nu.append(Nu_t)\n",
    "    Nu=Nu[::-1]\n",
    "    Table=Table[::-1]\n",
    "    Tau=[]\n",
    "    for t in range(T-1,-1,-1):\n",
    "        temps_info=t\n",
    "        if (temps_info==T-1):\n",
    "            M_t,Tau_t=compute_M_tP1_T_0t_Tjkt(t,0,1,0,Table,w[t],beta[t-1],gamma[t],nu,K)\n",
    "            MetaTable.append(M_t)\n",
    "            Tau.append(Tau_t)\n",
    "        elif(temps_info==0):\n",
    "            M_t,Tau_t=compute_M_tP1_T_0t_Tjkt(t,1,0,np.array(MetaTable)[T-2-t,:,0],Table,w[t],0,gamma[t],nu,K)\n",
    "            MetaTable.append(M_t)\n",
    "            Tau.append(Tau_t)\n",
    "        else:\n",
    "            M_t,Tau_t=compute_M_tP1_T_0t_Tjkt(t,0,0,np.array(MetaTable)[T-2-t,:,0],Table,w[t],beta[t-1],gamma[t],nu,K)\n",
    "            MetaTable.append(M_t)\n",
    "            Tau.append(Tau_t)\n",
    "    Tau=Tau[::-1]\n",
    "    MetaTable=MetaTable[::-1]\n",
    "    return(Table,MetaTable,Tau,Nu)\n",
    "            \n",
    "#Tables,MetaTable,Tau,Nu=compute_Tables_Metatables(T,J,v,w,pi,beta,alpha,gamma,nu,N,len(beta[1])) \n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Sampling $\\nu$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Une fois les Tables et Metatables calculés, on va réapproximer les poids. <br/> <br/> $$M_k = \\sum_t M_{k}^t$$\n",
    "<br/> $$M = \\sum_k M_{k}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$G/\\xi,H,( M_k )_{k=1}^{K} \\sim DP(\\xi+M,\\frac{H+\\sum_{k=1}^K M_k \\delta_{\\phi_k}}{\\xi + M})$$\n",
    "où K est le nombre de plats distincts sur toutes les métatables. On peut représenter G de la façon suivante :\n",
    "$$ G = \\sum_{k=1}^K \\nu_k \\delta_{\\phi_k} + \\nu_u G_u$$$$ G_u\\sim DP(\\xi,H) $$$$ \\nu=(\\nu_1,...,\\nu_K,\\nu_u)\\sim Dirichlet(M_1,...,M_K,\\xi)$$ <br\\>On simule donc $\\nu$ selon une loi de dirichlet de paramètres $M_1,...,M_k,M_u$ "
   ]
  },
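A minimal sketch of this $\nu$ update, with made-up meta-table counts $M_k$ and concentration $\xi$ (the values are illustrative, not taken from a real run):

```python
import numpy as np

rng = np.random.default_rng(0)
M_k = np.array([12, 7, 3])   # dishes counted over all meta-tables (illustrative)
xi = 1.5                     # mass reserved for unseen topics (illustrative)
# nu = (nu_1, ..., nu_K, nu_u) ~ Dirichlet(M_1, ..., M_K, xi)
nu = rng.dirichlet(np.concatenate([M_k, [xi]]))
print(nu)  # K+1 nonnegative weights; the last entry is nu_u
```

The draw always has K+1 components summing to one; topics with larger $M_k$ receive proportionally more weight on average.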
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Calcul de $\\beta$ ** <br\\>\n",
    "Une fois qu'on a réduit les dimensions de nos objets et conservé seulement les topics intéressants, on sample $\\beta^t$ selon 14 <br\\> \n",
    "$$(\\beta_u^t,\\beta_1^t,...,\\beta_K^t)\\sim Dirichlet(\\tilde{\\gamma^t}.(\\tilde{\\beta_u^t},\\tilde{\\beta_1^t},...,\\tilde{\\beta_K^t}))$$ avec \n",
    "\n",
    "$$\\tilde{\\gamma^t}=\\gamma^t+  TAU^t_.$$ et \n",
    "$$ \\tilde{\\beta_k^t} = \\frac{1}{\\tilde{\\gamma^t}}(\\gamma^t w^t \\beta_k^{t-1} + \\gamma^t(1 - w^t)\\nu_k+ TAU^t_k)$$\n",
    "\n",
    "<br\\>\n",
    "et\n",
    "<br\\>\n",
    "\n",
    "$$ \\tilde{\\beta_u^t} = \\frac{1}{\\tilde{\\gamma^t}}(\\gamma^t w^t \\beta_u^{t-1} + \\gamma^t(1 - w^t)\\nu_u)$$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tau et Nu sont calculés plus haut, on s'en sert pour calculé les tildes <br\\> Les fonctions suivantes calculent respectivement, $\\tilde{\\gamma}$ , $\\tilde{\\beta}$ et $\\beta$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def compute_gamma_tilde(gamma,Tau):\n",
    "    gamma_tilde=gamma+np.sum(np.array(Tau),axis=1)\n",
    "    return(gamma_tilde)\n",
    "#gamma_tilde=compute_gamma_tilde(gamma,Tau)\n",
    "\n",
    "\n",
    "def compute_beta_t_tilde(t,gamma_t,gamma_tilde_t,w_t,beta_tmoins1,nu,tau_t):\n",
    "    beta_t_tilde=[]\n",
    "    if(t!=0):\n",
    "        for k in range(len(nu)-1):\n",
    "            if(gamma_tilde_t>0):\n",
    "                beta_t_k_tilde=(1/gamma_tilde_t)*((gamma_t*w_t*beta_tmoins1[k])+(gamma_t*(1-w_t)*nu[k]+tau_t[k]))\n",
    "                beta_t_tilde.append(beta_t_k_tilde)\n",
    "            else:\n",
    "                beta_t_tilde.append(0) \n",
    "        if(gamma_tilde_t>0):\n",
    "            beta_t_u_tilde=(1/gamma_tilde_t)*((gamma_t*w_t*beta_tmoins1[len(nu)-1])+(gamma_t*(1-w_t)*nu[len(nu)-1]))\n",
    "            beta_t_tilde.append(beta_t_u_tilde)\n",
    "        else: \n",
    "            beta_t_tilde.append(0) \n",
    "    else:\n",
    "        for k in range(len(nu)-1):\n",
    "            if(gamma_tilde_t):\n",
    "                beta_t_k_tilde=(1/gamma_tilde_t)*(gamma_t*nu[k]+tau_t[k])\n",
    "                beta_t_tilde.append(beta_t_k_tilde)\n",
    "            else:\n",
    "                beta_t_tilde.append(0)  \n",
    "        if(gamma_tilde_t):        \n",
    "            beta_t_u_tilde=(1/gamma_tilde_t)*(gamma_t*nu[len(nu)-1])\n",
    "            beta_t_tilde.append(beta_t_u_tilde)\n",
    "        else:\n",
    "            beta_t_tilde.append(0)  \n",
    "\n",
    "    return(beta_t_tilde)\n",
    "\n",
    "def compute_new_beta(gamma,w,nu,tau):\n",
    "    beta_new=[]\n",
    "    gamma_tilde=compute_gamma_tilde(gamma,tau)\n",
    "    for t in range(len(gamma_tilde)):\n",
    "        if(t==0):\n",
    "            beta_t_tilde=compute_beta_t_tilde(t,gamma[t],gamma_tilde[t],w[t],None,nu,tau[t])\n",
    "        else:\n",
    "            beta_t_tilde=compute_beta_t_tilde(t,gamma[t],gamma_tilde[t],w[t],beta_new[t-1],nu,tau[t])\n",
    "        params_dirich=gamma_tilde[t]*np.array(beta_t_tilde)\n",
    "        beta_t=dirichlet_generate_random(params_dirich)\n",
    "        beta_new.append(beta_t.tolist())\n",
    "    return(beta_new)\n",
    "#new_beta=compute_new_beta(gamma,w,nu,Tau)"
   ]
  },
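The update can be checked on toy numbers: since $\beta^{t-1}$ and $\nu$ each sum to one, by construction $\tilde{\beta}^t$ sums to one as well. Below is a hedged sketch of the $t \neq 0$ case; the function and variable names are ours, and the "unused" mass component is kept last, as in the code above:

```python
import numpy as np

def beta_tilde(gamma_t, w_t, beta_prev, nu, tau_t):
    """Posterior Dirichlet parameters for beta^t ((14)-style update).
    `beta_prev` and `nu` both end with the 'unused' mass component;
    `tau_t` has one entry per active topic. Illustrative sketch."""
    gamma_tilde = gamma_t + tau_t.sum()
    K = len(tau_t)
    tilde = np.empty(K + 1)
    # active topics: prior mixture plus the observed meta-table counts
    tilde[:K] = (gamma_t * w_t * beta_prev[:K]
                 + gamma_t * (1 - w_t) * nu[:K] + tau_t) / gamma_tilde
    # unused mass: prior mixture only (no counts)
    tilde[K] = (gamma_t * w_t * beta_prev[K]
                + gamma_t * (1 - w_t) * nu[K]) / gamma_tilde
    return gamma_tilde, tilde

g_tilde, b_tilde = beta_tilde(2.0, 0.5,
                              np.array([0.4, 0.4, 0.2]),
                              np.array([0.3, 0.3, 0.4]),
                              np.array([5.0, 3.0]))
print(g_tilde, b_tilde)  # the tilde weights sum to 1 by construction
```

Multiplying `b_tilde` by `g_tilde` then gives the Dirichlet parameters from which the new $\beta^t$ is drawn.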
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Calcul de $\\pi$ ** <br\\>\n",
    "De même que pour $\\beta$, on calcule $\\pi$ de ma façon suivante : <br\\>\n",
    "\n",
    "$$(\\pi_{ju}^t,\\pi_{j1}^t,...,\\pi_{jK}^t)\\sim Dirichlet(\\tilde{\\alpha_{0j}^t}.(\\tilde{\\pi_{ju}^t},\\tilde{\\pi_{j1}^t},...,\\tilde{\\pi_{jK}^t}))$$ avec \n",
    "\n",
    "$$\\tilde{\\alpha}_{0j}^t=\\alpha_0^t+  N^t_{j.}$$ et \n",
    "$$ \\tilde{\\pi_{jk}^t} = \\frac{1}{\\tilde{\\alpha_0^t}}(\\alpha_0^t v^t \\pi_{jk}^{t-1} + \\alpha_0^t(1 - v^t)\\beta_k^t+ N^t_{jk})$$\n",
    "\n",
    "<br\\>\n",
    "et\n",
    "<br\\>\n",
    "\n",
    "$$ \\tilde{\\pi_{ju}^t} = \\frac{1}{\\tilde{\\alpha_0^t}}(\\alpha_0^t v^t \\pi_{jk}^{t-1} + \\alpha_0^t(1 - v^t)\\beta_k^t$$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "  \n",
    "def compute_alpha_tilde(alpha,Nu):\n",
    "    alpha_tilde=[]\n",
    "    for j in range(len(Nu[0])):\n",
    "        alpha_tilde.append((alpha+np.sum(np.array(Nu)[:,j,:],axis=1)).tolist())\n",
    "    alpha_tilde=np.transpose(np.array(alpha_tilde))\n",
    "    return(alpha_tilde.tolist())\n",
    "#alpha_tilde=compute_alpha_tilde(alpha,Nu)\n",
    "\n",
    "def compute_pi_t_j_tilde(t,j,alpha_0_t,alpha_0_t_tilde,v_t,pi_tmoins1_j,beta_t,Nu_t_j):\n",
    "    pi_t_j_tilde=[]\n",
    "    if(t!=0):\n",
    "        for k in range(len(beta_t)-1):\n",
    "            if(alpha_0_t_tilde>0):\n",
    "                pi_t_j_tilde_k=(1/alpha_0_t_tilde)*(alpha_0_t*v_t[k]*pi_tmoins1_j[k]+alpha_0_t*(1-v_t[k])*beta_t[k]+Nu_t_j[k])\n",
    "                pi_t_j_tilde.append(pi_t_j_tilde_k)\n",
    "            else:\n",
    "                pi_t_j_tilde.append(0)\n",
    "    else:\n",
    "        for k in range(len(beta_t)-1):\n",
    "            if(alpha_0_t_tilde>0):\n",
    "                pi_t_j_tilde_k=(1/alpha_0_t_tilde)*(alpha_0_t*beta_t[k]+Nu_t_j[k])\n",
    "                pi_t_j_tilde.append(pi_t_j_tilde_k)\n",
    "            else:\n",
    "                pi_t_j_tilde.append(0)\n",
    "    if(alpha_0_t_tilde>0):\n",
    "        pi_t_j_tilde_u=(1/alpha_0_t_tilde)*(alpha_0_t*beta_t[len(beta_t)-1])\n",
    "    else:pi_t_j_tilde_u=0\n",
    "    pi_t_j_tilde.append(pi_t_j_tilde_u)\n",
    "    return(pi_t_j_tilde)\n",
    "\n",
    "def compute_new_pi(alpha_0,v,beta,gamma,w,Nu):\n",
    "    pi_new=[]\n",
    "    alpha_tilde=compute_alpha_tilde(alpha_0,Nu)\n",
    "    for t in range(len(alpha_0)):\n",
    "        pi_new_t=[]\n",
    "        if(t==0): \n",
    "            for j in range(len(Nu[0])):\n",
    "                    pi_new_t_j_tilde=compute_pi_t_j_tilde(t,j,alpha_0[t],alpha_tilde[t][j],v[t],None,beta[t],Nu[t][j])\n",
    "                    params_dirich=alpha_tilde[t][j]*np.array(pi_new_t_j_tilde)\n",
    "                    pi_new_t_j=list(dirichlet_generate_random(params_dirich))\n",
    "                    pi_new_t.append(pi_new_t_j)\n",
    "        else:            \n",
    "            for j in range(len(Nu[0])):\n",
    "                    pi_new_t_j_tilde=compute_pi_t_j_tilde(t,j,alpha_0[t],alpha_tilde[t][j],v[t],pi_new[t-1][j],beta[t],Nu[t][j])\n",
    "                    params_dirich=alpha_tilde[t][j]*np.array(pi_new_t_j_tilde)\n",
    "                    pi_new_t_j=list(dirichlet_generate_random(params_dirich))\n",
    "                    pi_new_t.append(pi_new_t_j)\n",
    "        pi_new.append(pi_new_t)\n",
    "    return(pi_new)\n",
    "                    \n",
    "#new_pi=compute_new_pi(alpha,v,beta,gamma,w,Nu)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Si on a plusieurs topics présents par corpus, cette fonction en extrait les plus fréquents. <br\\> Cette fonction permet de vérifier nos résultats"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def get_best_topic_from_pi(K,N,average_phi,name_word,nom_test,boucle_number,param_w,k):\n",
    "    bestAll=[]\n",
    "    file = open(\"K_{}_w_{}_test_{}.txt\".format(k,param_w,nom_test),\"a\") \n",
    "    file.write(\"\\n\\n\\n___________Boucle{}___________\\n\\n\".format(boucle_number))\n",
    "    file.write(\"\\n\\n----- {} topics en tout -----:\".format(K)) \n",
    "    df=pd.DataFrame()\n",
    "    topic=[]\n",
    "    prob=[]\n",
    "    for t in range(len(N)):\n",
    "        list_info_t=[]\n",
    "        file.write(\"\\n\\n----- Temps {} -----:\".format(t)) \n",
    "        for j in range(len(N[t])):\n",
    "            file.write(\"\\nCorpus {}:\\n\\n\".format(j))\n",
    "            best=np.argsort(-np.array(N[t][j]))\n",
    "            list_info_t_j=''\n",
    "            for i in range(min(4,len(best)-1)):\n",
    "                if(N[t][j][best[i]] !=0):\n",
    "                    file.write(\"\\nTopic #{}={} with {} docs \\n\" .format(i,best[i],N[t][j][best[i]]))\n",
    "                    list_info_t_j+='{}({}),'.format(best[i],N[t][j][best[i]])   \n",
    "                    if(nom_test=='small_doc'):\n",
    "                        best_words=np.argsort(-np.array(average_phi[best[i]]))\n",
    "                        for u in range(5):\n",
    "                            file.write(\"Best words #{} = {} with p= {}\\n\".format(u,name_word[best_words[u]],average_phi[best[i]][best_words[u]]))\n",
    "                    elif(nom_test=='synthetic_data'):\n",
    "                        file.write(\"p1={},p2={}\\n\".format(average_phi[best[i]][0],average_phi[best[i]][1]))\n",
    "                        if(best[i] not in topic):\n",
    "                           topic.append(best[i])\n",
    "                           prob.append(average_phi[best[i]][0])  \n",
    "            list_info_t.append(list_info_t_j)\n",
    "        df['time_{}'.format(t)]=list_info_t\n",
    "    d = {'#topic': topic, 'proba_1': prob}\n",
    "    df_topic=pd.DataFrame(data=d)\n",
    "    file.close() \n",
    "    return(df,df_topic)\n",
    "    \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "On **resample** chaque observation en suivant (20), (21) et l'information à posteriori donnée par (4.5). <br\\>\n",
    "En sortie, on a le topic le plus à même d'étre lié avec l'observation ainsi que la moyenne de chaque topic. \n",
    "En effet chaque: $\\phi_k \\sim Dir(param)$ où param est calculé à posteriori. La moyenne de chaque r.v. nous informe sur le topic et nous permet de faire des comparaisons avec les résultats obtenus en Table 2 de l'article.\n",
    "<br/> <br/>\n",
    "On peut calculer $$ P(z_{ji}^t=k / x_{ji}^t)\\sim P(z_{ji}^t=k/ \\pi_j^t).P(x_{ji}^t/ z_{ji}^t=k...)$$\n",
    "On sait que $$ P(z_{ji}^t=k/ \\pi_j^t) = \\pi_{jk}^t $$\n",
    "De plus, $$ P(x_{ji}^t/ z_{ji}^t=k...) = \\frac{\\Gamma(n+1) \\Gamma (\\sum_{a\\in A,w}^{W} X_{aw} +\\alpha_w)\n",
    " \\prod_{w=1}^{W} \\Gamma (\\alpha_w + x_{jiw}^t+ \\sum_{a\\in A} X_{aw}) }{\\Gamma (\\sum_{a\\in A,w}^{W} X_{aw} +\\alpha_w + x_{jiw}^t)  \\prod_{w=1}^{W} [\\Gamma ( x_{jiw}^t +1) \\Gamma (\\alpha_w + \\sum_{a\\in A} X_{aw}) ]} $$\n",
    "Avec $A = ((i,j,t),Z_{ji}^t=k)$\n",
    "<br/> \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Après normalisation des $P(z_{ji}^t=k / x_{ji}^t)$, on selectionne un nouveau topic pour chaque document. \n",
    "<br\\> On retourne aussi la moyenne des $\\phi_k \\sim Dir(\\alpha_1 + \\sum_{a\\in A} X_{a1},...,\\alpha_W + \\sum_{a\\in A} X_{aW}) $ "
   ]
  },
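The notebook evaluates these predictive probabilities with `mpmath.exp`; an alternative sketch works entirely in log space and normalizes with `logsumexp`. The names and the symmetric prior `alpha_w` are our choices, and the multinomial coefficient is dropped since it is constant in $k$ and cancels after normalization:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def sample_topic_log_space(x, counts, pi_jt, alpha_w):
    """Hedged sketch: Dirichlet-multinomial predictive log P(x | z=k),
    combined with log pi_jk and normalized with logsumexp.
    `counts[k]` are the word counts already assigned to topic k."""
    x = np.asarray(x, dtype=float)
    logp = []
    for k, pi_k in enumerate(pi_jt):
        a = np.asarray(counts[k], dtype=float) + alpha_w
        # log of Gamma-ratio form of the Dirichlet-multinomial predictive
        ll = (gammaln(a.sum()) - gammaln(a.sum() + x.sum())
              + np.sum(gammaln(a + x) - gammaln(a)))
        logp.append(np.log(pi_k) + ll)
    logp = np.array(logp)
    probs = np.exp(logp - logsumexp(logp))  # normalize in log space
    rng = np.random.default_rng(0)
    return rng.choice(len(pi_jt), p=probs), probs

k, probs = sample_topic_log_space([3, 0], [[10, 1], [1, 10]], [0.5, 0.5], 0.5)
print(k, probs)
```

Staying in log space avoids both overflow and the per-document cost of arbitrary-precision exponentials.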
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def get_new_z_i_j_t_egal_k(last_iteration,t,j,i,x_i_j_t,X,Z,pi_jt,W):\n",
    "    proba=[]\n",
    "    log_proba=[]\n",
    "    average_phi=[]\n",
    "    \n",
    "    Z_with_no_Xijt=copy.deepcopy(Z)\n",
    "    X_with_no_Xijt=copy.deepcopy(X)\n",
    "    del Z_with_no_Xijt[t][j][i]\n",
    "    del X_with_no_Xijt[t][j][i]\n",
    "    \n",
    "    for k,pi_jtk in enumerate(pi_jt):\n",
    "        average_phi_k=[]\n",
    "        flat_Z=[item for y in Z_with_no_Xijt for x in y for item in x]\n",
    "        flat_X=[item for y in X_with_no_Xijt for x in y for item in x]\n",
    "        Z_tij_k=[doc for doc,topic in zip(flat_X,flat_Z) if (topic==k)]\n",
    "        produit_numerateur=1\n",
    "        produit_denominateur_1=1\n",
    "        produit_denominateur_2=1\n",
    "        a=np.sum(Z_tij_k)\n",
    "        b=np.sum(x_i_j_t)\n",
    "        for w in range(W):\n",
    "            if(len(Z_tij_k)==0):\n",
    "                c=0\n",
    "            else:\n",
    "                c=np.sum(Z_tij_k,axis=0)[w] \n",
    "            if(last_iteration):\n",
    "                average_phi_k.append(((1/W)+c)/(1+a))\n",
    "            produit_numerateur+=gammaln(x_i_j_t[w]+(1/W)+c)\n",
    "            produit_denominateur_1+=gammaln((1/W)+c)\n",
    "            produit_denominateur_2+=gammaln(1+x_i_j_t[w])\n",
    "        log_proba.append(pi_jt[k]*mpmath.exp((gammaln(len(x_i_j_t)+1)+gammaln(a+1)+produit_numerateur)-(produit_denominateur_2+produit_denominateur_1+gammaln(a+b+1))))\n",
    "        average_phi.append(average_phi_k) \n",
    "    somme=sum(log_proba)\n",
    "    for k,pi_jtk in enumerate(pi_jt):\n",
    "        log_proba[k]=float(log_proba[k]/somme) \n",
    "    max_indice=np.random.choice(len(pi_jt),1,p=log_proba)  \n",
    "    return(max_indice,average_phi)\n",
    "#newZ=get_new_z_i_j_t_egal_k(1,0,0,0,data[0][0][0],data,Z,pi[0][0],W)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On resample **toutes** les observations et on obtient les nouveaux N, qui sont les compteurs d'assignation aux topics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def get_new_Z(data,pi,Z,W,K):\n",
    "    T=np.random.permutation(len(data))\n",
    "    for t in T:\n",
    "        J=np.random.permutation(len(data[t]))\n",
    "        #print('Temps{}'.format(t))\n",
    "        for j in J:\n",
    "            I=np.random.permutation(len(data[t][j]))\n",
    "            for i in I:\n",
    "                #if(i%2==0):\n",
    "                    #print(i)\n",
    "                if(t==(len(data)-1) and j==(len(data[t])-1) and i==(len(data[t][j])-1)):\n",
    "                    Z[t][j][i],average_phi=get_new_z_i_j_t_egal_k(1,t,j,i,data[t][j][i],data,Z,pi[t][j],W)\n",
    "                else:\n",
    "                    Z[t][j][i],unused_var=get_new_z_i_j_t_egal_k(0,t,j,i,data[t][j][i],data,Z,pi[t][j],W)\n",
    "    N=[]\n",
    "    for t in range(len(data)):\n",
    "        N_t=[]\n",
    "        for j in range(len(data[t])):\n",
    "                N_t.append(compute_n_t_j(K,Z[t][j]))\n",
    "        N.append(N_t)\n",
    "    return(Z,N,average_phi)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def sampling_xi(K,a,b,last_xi,M,nb_iter):\n",
    "    new_xi=last_xi\n",
    "    for i in range(nb_iter):\n",
    "        eta=beta_generate_random([new_xi+1,M])\n",
    "        kai=a+K-1\n",
    "        bet=b-np.log(eta)\n",
    "        if(random.uniform(0, 1)<kai/kai+(M*bet)):\n",
    "            kai+=1\n",
    "        new_xi=np.random.gamma(kai,bet)\n",
    "    return(new_xi)\n",
    "\n",
    "#new_xi=sampling_xi(10,10,1,12,26,20)\n",
    "\n",
    "def sampling_alpha_t(t,alpha_a,alpha_b,last_alpha_t,tables,nb_iter,doc):\n",
    "    J=len(tables[0])\n",
    "    if(type(tables[t][0][0][2])==int):\n",
    "        m=tables[t][0][0][2]\n",
    "    else:\n",
    "        m=np.sum(tables[t][:][:][2])\n",
    "    a=alpha_a+m\n",
    "    b=alpha_b\n",
    "    al=last_alpha_t\n",
    "    for i in range(nb_iter):\n",
    "        a=alpha_a+m\n",
    "        b=alpha_b\n",
    "        for j in range(J):\n",
    "            w=beta_generate_random([al+1,len(doc[j])])\n",
    "            t=len(doc[j])/al\n",
    "            s=(random.uniform(0, 1)<(t/(t+1)))\n",
    "            a-=s\n",
    "            b-=np.log(w)\n",
    "        al=np.random.gamma(a,b)\n",
    "    return(al)\n",
    "\n",
    "def sampling_gamma_t(t,gamma_a,gamma_b,last_gamma_t,metatables,nb_iter,nb_doc_temps_t):\n",
    "    m=np.sum(metatables[t][:][2])\n",
    "    a=gamma_a+m\n",
    "    b=gamma_b\n",
    "    al=last_gamma_t\n",
    "    for i in range(nb_iter):\n",
    "        a=gamma_a+m\n",
    "        b=gamma_b\n",
    "        w=beta_generate_random([al+1,nb_doc_temps_t])\n",
    "        t=nb_doc_temps_t/al\n",
    "        s=(random.uniform(0, 1)<(t/(t+1)))\n",
    "        a-=s\n",
    "        b-=np.log(w)\n",
    "        al=np.random.gamma(a,b)\n",
    "    return(al)\n",
    "            \n",
    "    \n",
    "    "
   ]
  },
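The three samplers above follow the auxiliary-variable scheme of Escobar & West (and Teh et al.) for Dirichlet-process concentration parameters. Here is a self-contained sketch of the single-restaurant case; the function name and the Gamma(a, b) prior parametrization (b a rate) are our assumptions:

```python
import numpy as np

def resample_concentration(alpha, a, b, K, n, n_iter=50, rng=None):
    """Escobar-West auxiliary-variable update for a DP concentration:
    given K components among n observations and a Gamma(a, b) prior
    (b a rate), iterate eta ~ Beta(alpha+1, n) then draw alpha from a
    two-component Gamma mixture. Illustrative sketch only."""
    rng = np.random.default_rng(0) if rng is None else rng
    for _ in range(n_iter):
        eta = rng.beta(alpha + 1, n)
        rate = b - np.log(eta)
        odds = (a + K - 1) / (n * rate)
        shape = a + K if rng.random() < odds / (1 + odds) else a + K - 1
        alpha = rng.gamma(shape, 1 / rate)  # numpy gamma takes a scale
    return alpha

print(resample_concentration(1.0, 1.0, 1.0, K=20, n=500))
```

Since E[tables] grows like alpha*log(n), with K=20 components among n=500 observations the chain should settle around a moderate alpha of a few units.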
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ALGORITHME"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 100/100 [17:32<00:00, 10.52s/it]\n"
     ]
    }
   ],
   "source": [
    "def algo_evo_hdp(synthetic_data,small_doc,GOT,max_iter,param_w,K):\n",
    "    #----Create Data----#\n",
    "    k_init=K\n",
    "    if(synthetic_data):\n",
    "        nom_test='synthetic_data'\n",
    "        corpora_sizes=[[50,30,40],[51,32,43],[52,32,43],[53,34,45]]\n",
    "        true_phi=np.zeros((8,2))\n",
    "        true_phi[0]=[0.1,0.9]\n",
    "        true_phi[1]=[0.2,0.8]\n",
    "        true_phi[2]=[0.3,0.7]\n",
    "        true_phi[3]=[0.4,0.6]\n",
    "        true_phi[4]=[0.5,0.5]\n",
    "        true_phi[5]=[0.6,0.4]\n",
    "        true_phi[6]=[0.7,0.3]\n",
    "        true_phi[7]=[0.8,0.2]\n",
    "        T=4\n",
    "        J=3\n",
    "        W=2\n",
    "        name_word=0\n",
    "        info_data=local_components_and_corpora_sizes(T,J,corpora_sizes)    \n",
    "        data=generate_data_from_mixture_of_multinomials(T,J,info_data,true_phi) \n",
    "    elif(small_doc):\n",
    "        nom_test='small_doc'\n",
    "        data,T,J,W,name_word=generate_easy_data(stop_words)\n",
    "    elif(GOT):\n",
    "        nom_test='GOT'\n",
    "        data,name_doc,T,J,W,name_word=generate_GOT_data(stop_words)   \n",
    "    else:\n",
    "        print(\"error\")\n",
    "        \n",
    "    nb_doc_par_temps=np.zeros((T))\n",
    "    for t in range(T):\n",
    "        for j in range(J):\n",
    "            nb_doc_par_temps[t]+=len(data[t][j])\n",
    "    \n",
    "    #----Initialize Hyperparameters----#\n",
    "    a_xi=10\n",
    "    b_xi=1\n",
    "    xi=np.random.gamma(a_xi,b_xi)\n",
    "    a_gamma=10\n",
    "    b_gamma=1\n",
    "    a_alpha=10\n",
    "    b_alpha=1\n",
    "    gamma=[]\n",
    "    gamma=[np.random.gamma(a_gamma,b_gamma) for i in range(T)]\n",
    "    alpha=[]\n",
    "    alpha=[np.random.gamma(a_alpha,b_alpha) for i in range(T)]\n",
    "    v=T*[K*[param_w]]\n",
    "    w=T*[param_w]\n",
    "    #----Initialize parameters----#\n",
    "    params_loi_H=0.5\n",
    "    nu=stick_breaking(xi,K,W)\n",
    "    beta=initialize_G_0_t(gamma,nu,T,K,w)\n",
    "    pi=initialize_G_j_t(beta,alpha,J,T,K,v)\n",
    "    Z,N=randomly_assign_Z_initialisation(T,J,K,data)\n",
    "    #----Iterate Cascaded Gibbs Sampler----#\n",
    "    for i in tqdm(range(max_iter)):\n",
    "        boucle_number=i\n",
    "        #print(\"++++++ITERATION++++++: {}\".format(i))\n",
    "        Tables,MetaTable,Tau,Nu=compute_Tables_Metatables(T,J,v,w,pi,beta,alpha,gamma,nu,N,K)\n",
    "        #on conserve seulement les topics qui ont été choisis pour décrire au moins un document.\n",
    "        M_k=np.sum(np.array(MetaTable)[:,:,2],axis=0)\n",
    "        M=np.sum(M_k)\n",
    "        liste_indice=np.nonzero(M_k)\n",
    "        M_k=M_k[M_k>0]\n",
    "        param_dir=list(M_k)\n",
    "        K=len(param_dir)\n",
    "        \n",
    "        ## sampling xi,gamma,alpha\n",
    "        xi=sampling_xi(K,a_xi,b_xi,xi,M,20)\n",
    "        \n",
    "        new_gamma=[]\n",
    "        new_alpha=[]\n",
    "        for t in range(T):\n",
    "            new_gamma.append(sampling_gamma_t(t,a_gamma,b_gamma,gamma[t],MetaTable,20,nb_doc_par_temps[t]))\n",
    "            new_alpha.append(sampling_alpha_t(t,a_alpha,b_alpha,alpha[t],Tables,20,data[t]))\n",
    "        gamma=new_gamma\n",
    "        alpha=new_alpha\n",
    "        \n",
    "        #on ajoute un topic pour l'itération suivante\n",
    "        param_dir.append(xi)\n",
    "        nu=dirichlet_generate_random(param_dir)\n",
    "        beta=compute_new_beta(gamma,w,nu,Tau)\n",
    "        pi=compute_new_pi(alpha,v,beta,gamma,w,Nu)\n",
    "        K=len(beta[0])\n",
    "        \n",
    "            \n",
    "        v=T*[K*[param_w]]\n",
    "        Z,N,average=get_new_Z(data,pi,Z,W,K)\n",
    "    #----Print Mean of topic and N----#\n",
    "        #print('---Beta=---:\\n{}'.format(beta))\n",
    "        #print('---Pi=---:\\n{}'.format(pi))\n",
    "        #print('---Average---=\\n{}\\n'.format(average)) \n",
    "        #print('---N---:\\n{}'.format(N))\n",
    "        if (0==(i+1)%20):\n",
    "            df,df_topic=get_best_topic_from_pi(K,N,average,name_word,nom_test,boucle_number,param_w,k_init)\n",
    "            df\n",
    "            df_topic\n",
    "            \n",
    "        #print('---Tau---\\n{}'.format(Tau))\n",
    "        #print('---Nu---\\n{}'.format(Nu))\n",
    "        #print('---Tables---\\n{}'.format(Tables))\n",
    "        #print('---MetaTable---\\n{}'.format(MetaTable))\n",
    "    return(df,df_topic)\n",
    "\n",
    "'''\n",
    "k=[10,100,500]\n",
    "max_iter=40\n",
    "\n",
    "w=[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]\n",
    "writer = pd.ExcelWriter('output..xlsx')\n",
    "for init in range(len(k)):\n",
    "    writer = pd.ExcelWriter('Output_k=_{}.xlsx'.format(k[init]))\n",
    "    for i in range(len(w)):\n",
    "        print(\"Iteration {}\\{}\".format(i+len(w)*init,len(w)*len(k)))\n",
    "        df_nb_doc_topic,df_topic=algo_evo_hdp(1,0,0,max_iter,w[i],k[init])\n",
    "        df_nb_doc_topic.to_excel(writer,sheet_name='k_{}_w_{}'.format(k[init],w[i]),startrow=0 , startcol=0)    \n",
    "        df_topic.to_excel(writer,sheet_name='k_{}_w_{}'.format(k[init],w[i]),startrow=5, startcol=0,index = False) \n",
    "    writer.save()               \n",
    "'''\n",
    "# Les lignes ci dessus doivent être décommentées pour étudier l'effet des paramètres sur l'analyse synthetic data.\n",
    "# Dans ce cas, le modèle peut prend 15 heures sur une mémoire RAM de 4Go\n",
    "\n",
    "# Pour un test de l'algo, décommentez la ligne suivante : \n",
    "#df_nb_doc_topic,df_topic=algo_evo_hdp(1,0,0,100,0.8,50)\n",
    "\n",
    "# Les 3 premiers paramètres de l'algorithme : \n",
    "# algo_evo_hdp(synthetic_data,small_doc,GOT,max_iter,param_w,K)\n",
    "\n",
    "#(1,0,0) pour l'expérimentation sur données synthétiques\n",
    "#(0,1,0) pour l'expérimentation small_doc    \n",
    "#(0,0,1) pour l'expérimentation GOT  \n",
    "\n",
    "# max_iter est le nombre d'itérations\n",
    "# param_w est l'initialisation de w, entre 0 et 1, plus w est grand, plus les temps et corpus sont corrélés entre eux\n",
    "# K est le nombre de topic fixé en initialisation. Le modèle stipule que K est infini pour la première itération.\n",
    "# K doit donc être relativement élevé\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>time_0</th>\n",
       "      <th>time_1</th>\n",
       "      <th>time_2</th>\n",
       "      <th>time_3</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>7(22),4(14),3(13),12(1),</td>\n",
       "      <td>6(19),3(15),4(15),8(2),</td>\n",
       "      <td>6(26),4(18),1(7),8(1),</td>\n",
       "      <td>5(17),1(14),6(13),9(7),</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4(12),6(9),3(7),12(2),</td>\n",
       "      <td>1(9),4(9),6(9),12(3),</td>\n",
       "      <td>6(20),1(5),5(4),9(2),</td>\n",
       "      <td>0(12),9(11),1(6),5(4),</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1(13),6(10),4(8),8(6),</td>\n",
       "      <td>1(13),6(13),5(11),9(3),</td>\n",
       "      <td>9(20),1(11),5(6),0(5),</td>\n",
       "      <td>0(16),9(13),2(7),5(6),</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                     time_0                   time_1                  time_2  \\\n",
       "0  7(22),4(14),3(13),12(1),  6(19),3(15),4(15),8(2),  6(26),4(18),1(7),8(1),   \n",
       "1    4(12),6(9),3(7),12(2),    1(9),4(9),6(9),12(3),   6(20),1(5),5(4),9(2),   \n",
       "2    1(13),6(10),4(8),8(6),  1(13),6(13),5(11),9(3),  9(20),1(11),5(6),0(5),   \n",
       "\n",
       "                    time_3  \n",
       "0  5(17),1(14),6(13),9(7),  \n",
       "1   0(12),9(11),1(6),5(4),  \n",
       "2   0(16),9(13),2(7),5(6),  "
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#en colonnes les corpus \n",
    "# en ligne les temps \n",
    "# structure identique à celle de l'article \n",
    "df_nb_doc_topic"
   ]
  },
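  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a reading aid (not part of the original code), each cell of `df_nb_doc_topic` is a string such as `7(22),4(14),3(13),12(1),`, meaning topic 7 covers 22 documents, topic 4 covers 14, and so on. The hypothetical helper below sketches how such an entry could be parsed back into `(topic, doc_count)` pairs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical helper: parse one cell of df_nb_doc_topic into (topic, doc_count) pairs\n",
    "def parse_topic_counts(cell):\n",
    "    pairs = []\n",
    "    for item in cell.strip(',').split(','):  # drop the trailing comma, then split entries\n",
    "        topic, count = item.rstrip(')').split('(')  # \"7(22\" -> (\"7\", \"22\")\n",
    "        pairs.append((int(topic), int(count)))\n",
    "    return pairs\n",
    "\n",
    "parse_topic_counts(\"7(22),4(14),3(13),12(1),\")  # [(7, 22), (4, 14), (3, 13), (12, 1)]"
   ]
  },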
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>#topic</th>\n",
       "      <th>proba_1</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>7</td>\n",
       "      <td>0.097819</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4</td>\n",
       "      <td>0.301808</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>3</td>\n",
       "      <td>0.198472</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>12</td>\n",
       "      <td>0.248751</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>6</td>\n",
       "      <td>0.416712</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>1</td>\n",
       "      <td>0.506666</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>8</td>\n",
       "      <td>0.467344</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>5</td>\n",
       "      <td>0.576931</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>9</td>\n",
       "      <td>0.646951</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>0</td>\n",
       "      <td>0.730420</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>2</td>\n",
       "      <td>0.837289</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    #topic   proba_1\n",
       "0        7  0.097819\n",
       "1        4  0.301808\n",
       "2        3  0.198472\n",
       "3       12  0.248751\n",
       "4        6  0.416712\n",
       "5        1  0.506666\n",
       "6        8  0.467344\n",
       "7        5  0.576931\n",
       "8        9  0.646951\n",
       "9        0  0.730420\n",
       "10       2  0.837289"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# La première colonne correspond au numéro du topic \n",
    "# La seconde est la première coordonnées\n",
    "df_topic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Conclusion "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The algorithm is very slow because inference is performed by Gibbs sampling. <br/><br/>\n",
    "**On the synthetic data** <br/>\n",
    "After a fairly small number of iterations (around 5), the topics describing each batch of data are recovered.\n",
    "The results can be approximate depending on the chosen parameters, and the behaviour of the algorithm over a large number of iterations has not been observed.\n",
    "\n",
    "**On a small set of documents**<br/>\n",
    "The topics returned are coherent, but the documents are very basic.\n",
    "\n",
    "**On the Game of Thrones data**<br/>\n",
    "The names of the main characters appear in the topics. However, the document corpora tend to converge towards a single topic, which contains the names of all the main characters present from the beginning to the end of the series.\n",
    "\n",
    "**Points to improve:** <br/>\n",
    "- Find a more interesting document collection on which to study the algorithm's results\n",
    "- Optimize the algorithm\n",
    "- Study the AEVB and ADVI approaches to optimize the model parameters\n",
    "- Study \"component collapsing\" (Dinh & Dumoulin, 2016), where the model gets stuck in a local minimum with identical topics (which is the case for the Game of Thrones analysis)\n",
    "- Study the more recent articles suggested by Nadi, which combine neural networks and topic modelling."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
