{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Skipping line 2607: expected 1 fields, saw 9\n",
      "Skipping line 3143: expected 1 fields, saw 2\n",
      "Skipping line 3173: expected 1 fields, saw 8\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "neg=pd.read_csv('../data/neg.csv',header=None,index_col=None)\n",
    "pos=pd.read_csv('../data/pos.csv',header=None,index_col=None,error_bad_lines=False)\n",
    "neu=pd.read_csv('../data/neutral.csv', header=None, index_col=None)"
   ]
  },
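  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Skipping line ...` messages above come from `error_bad_lines=False`, which drops rows whose free-text reviews contain unquoted commas. Note that this flag was removed in pandas 2.0; on a newer pandas (an assumption about your environment, not the one this notebook was run with) the equivalent call would be:\n",
    "\n",
    "```python\n",
    "pos = pd.read_csv('../data/pos.csv', header=None, index_col=None, on_bad_lines='skip')\n",
    "```"
   ]
  },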
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0       做为一本声名在外的流行书，说的还是广州的外企，按道理应该和我的生存环境差不多啊。但是一看之下...\n",
       "1       作者完全是以一个过来的自认为是成功者的角度去写这个问题，感觉很不客观。虽然不是很喜欢，但是，...\n",
       "2            作者提倡内调，不信任化妆品，这点赞同。但是所列举的方法太麻烦，配料也不好找。不是太实用。\n",
       "3       作者的文笔还行，但通篇感觉太琐碎，有点文人的无病呻吟。自由主义者。作者的品性不敢苟同，无民族...\n",
       "4                           作者倒是个很小资的人,但有点自恋的感觉,书并没有什么大帮助\n",
       "5       作为个人经验在网上谈谈可以，但拿来出书就有点过了，书中还有些明显的谬误。不过文笔还不错，建议...\n",
       "6       昨天刚兴奋地写了评论,今天便遇一闹心事,因把此套书推荐给很多朋友,朋友就拖我在网上购,结果前...\n",
       "7       纵观整部书（上下两册）  从文字，到结构，人物，情节 没有一个地方是可取的虽然有过从业经验 ...\n",
       "8       字很大,内容不够充实当初看大家评论说得很好才买的但实际上却没那么好,感觉深度也不够如果你还在...\n",
       "9                            中国社会科学出版社出的版本可能有删节，但未查到相关说明。\n",
       "10      纸张的质量也不好，文字部分更是倾斜的，盗版的很不负责任，虽然不评价胡兰成本人，但是文字还是美...\n",
       "11      职场如战场在这部小说里被阐述的淋漓尽致，拉拉工作勤奋如老黄牛，但性格却更似倔牛；王伟虽正直但...\n",
       "12      只因李安的电影《色，戒》，才买了张爱玲的小说来读。读后的感觉是失望——那么短浅，迷惑——李导...\n",
       "13      之前看到大家都说非常好 于是 很心动 也买了本 回来后看看 非常一般它讲很多就是要我们承认自...\n",
       "14                      整本书没几个英文,好像小孩子看的,但是小孩子也许看不懂的那种.失望\n",
       "15      整本书给我的感觉就是一农民暴富了后害怕别人也富，挤占了他的地位。但又不想把害怕的思想暴露底那...\n",
       "16      这书写得很乱， 不系统， 不正规。而且废话， 大话连篇， 好不容易说到正题了， 比如该如何用...\n",
       "17        这是一本小说集，好多章节故事，我都已经看过了。书印刷还是不错的。但是还是买整套的好，有点后悔。\n",
       "18      这是我这一年内看到最差的一本书，我用不到2个小时看完（我实在不愿意在这本书上浪费太多时间），...\n",
       "19      这是我看过文字写得很糟糕的书，因为买了，还是耐着性子看完了，但是总体来说不好，文字、内容、结...\n",
       "20      这几天旅行中也在看《杜拉拉升职记》，的确是值得推荐的职场读物。虽然是虚拟的，但很多skill...\n",
       "21      这个作家的书真的很一般,除非和她有交叉的经历,否则很难找到感觉.细腻是日本人的长处,但很平凡...\n",
       "22      这个书首位的排名言过其实，感觉比不上圈子全套。1）故事里面的故事很多牵强赴会，逻辑性差。2）...\n",
       "23               这本书总体来说还可以，但是有些地方自己不是很懂，比如按穴位都不知道大约位置在那里\n",
       "24       这本书完全是根据这部电影走红后出版的商业书。内容很不全。里面收集了张的很多长篇小说，但都简化了。\n",
       "25      这本书虽然内容不多，但是插图很大，很生动和形象，里面的故事都是小孩子碰到过的事情，并且很有启...\n",
       "26      这本书送到的时候就是全湿了,但是就给你们的客服打电话了,说会安排人过来取书.并办退款手续.可...\n",
       "27                    这本书是本着是本热销书看的，但看完后觉得没什么意思，或者有些做作。。。\n",
       "28      这本书没有我想象的好，书本身的质量和印刷还可以 ，但内容与书名不符，应该改名字；而且有很多牵...\n",
       "29      这本书买之前是看了这么多的五星评价而才买来的,以为应该应值得拥有的一本书!看完这样书后,在我...\n",
       "                              ...                        \n",
       "4325    一直都知道LG-C960拥有目前手机业界中的较高像素，其突出想象的外形设计令人匪夷所思.虽然...\n",
       "4326    1、滑盖设计，纤薄机身，按键方式很有个性，电容触控式，只要用手指遮挡住相应功能键的红色灯即可...\n",
       "4327    这手机外观真的是太漂亮，如果你看到真机我觉得你一定会爱上她的美！而且屏幕我觉得颜色也很鲜艳！...\n",
       "4328     1.外形不错。我喜欢直板机，估计看我这个帖子的朋友也都是这样；我的是全黑的，很酷；还有一款...\n",
       "4329    屏做的真不错，跟同事一起对比了一下，我这个屏明显比他那个机子强，亮但是不刺眼，我同事调了亮度...\n",
       "4330    外观不错，感觉比A800那种正统外形显得活泼多了，喜欢！屏幕比较大，色彩也好，就是拍出照片效...\n",
       "4331    摄像头做的不错，片子出来很清楚而且颜色好，这机子内存也够大，我总爱带个机子逛街，淘到好东西先...\n",
       "4332    功能还算实在，尽管不是很好，但是，内外双屏的效果还不错，铃声也行了，手写笔慢是慢了点，但比起...\n",
       "4333    1,外观，还可以！比较好看2,屏幕，在阳光下不好看清楚！在其他的地方还是很好的！当然没有TF...\n",
       "4334    一直用的这只机子，虽然功能不多，但是手机本身应有的功能全有了，我感觉没什么不好。那些加了比如...\n",
       "4335    电池做的不错，连打电话加发短信，我能用一周，一般周6会充电，现在已成习惯了!机身够薄，机子也...\n",
       "4336    继承了西门子键盘快捷功能菜单的功能，总共有15个按键可以自定义功能，按照自己的习惯定义好后基...\n",
       "4337    元旦拿到了这款机子，在充了一夜的电，用了1天后，越来越发觉了这款机子的一些优越性能，在此简单...\n",
       "4338    M100不错的机子,整体设计简洁,直板机只有一个屏,所以省是不用说的了,6万5千色的屏,差不...\n",
       "4339    开机声音很响亮！这个“16和弦”声音特别响亮，表现效果很出众，有点Z2的感觉。铃声够大，放包...\n",
       "4340    1.书写功能方面：有很多像SonyEricsson的手写部分都会把用户局限在一个很小的区域内...\n",
       "4341    设计独特，纯粹的MP3造型,加上轻巧的机身，拿在手里有\"一见钟情\"的感觉;因为很想买一个MP...\n",
       "4342    用这只机子很舒服，信号方面不错，机子运行也稳定，通话是声音比较清楚，室外我也试过，听着没问题...\n",
       "4343    机子不大不小，我用很合适；外屏幕小了点，但是看来电号码还是够用，很方便；自动开关机功能我很喜...\n",
       "4344             由于是寄回老家的没有看到东西，但听家里人说还不错，还没有安装。有待后期追加评论！\n",
       "4345                                    双十一买的，还没安装，但是很满意！\n",
       "4346            商品不错 送货也快 但是服务员售后真的不行 快递哥哥3天从西安送到南宁并且送进家门\n",
       "4347         到货速度很快，宝贝包装完好。虽然还没来得及安装使用，但因为有朋友正使用该款，质量有保证。\n",
       "4348                    还没拆开，但是包装的很好。第二次来买了，还送了赠品，nice。谢谢\n",
       "4349     价格实惠，快递也快，安装也快，虽然安装小师傅年轻是新手，但很耐心负责比较满意吧!保温效果很满意.\n",
       "4350                              虽然家里用不到 但商家的服务态度 超级棒！赞！\n",
       "4351                  已经安装上了，但没有试，我相信美的质量，应该不会有问题。先5分好评吧。\n",
       "4352    很不错，到货很快，当天给客服打电话，下午安装人员就上门了，速度啊!但但当时没有水，没有试试，...\n",
       "4353    买来放在出租房里的，所以自己也没试过，但是安装服务人员特别好，最大限度地给省钱，两套热水澡装...\n",
       "4354    买来放在出租房里的，所以自己也没试过，但是安装服务人员特别好，最大限度地给省钱，两套热水澡装...\n",
       "Name: 0, Length: 4355, dtype: object"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "neu[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(21088,)"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "combined = np.concatenate((pos[0], neu[0], neg[0]))\n",
    "combined.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(21088,)"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# pos -> 1; neu -> 0; neg -> -1\n",
    "y = np.concatenate((np.ones(len(pos), dtype=int), np.zeros(len(neu), dtype=int), -1*np.ones(len(neg),dtype=int)))\n",
    "y.shape"
   ]
  },
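  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the `-1` label used for negatives: `keras.utils.to_categorical` builds its one-hot matrix by integer indexing, so a label of `-1` selects the *last* column. The net effect is neg -> class 2, neu -> class 0, pos -> class 1. A minimal NumPy sketch of what happens inside:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "y = np.array([1, 0, -1])\n",
    "onehot = np.zeros((len(y), 3))\n",
    "onehot[np.arange(len(y)), y] = 1  # y == -1 indexes the last column\n",
    "# onehot -> [[0,1,0], [1,0,0], [0,0,1]]\n",
    "```"
   ]
  },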
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import jieba\n",
    "\n",
    "# Tokenize each sentence with jieba and strip newline characters\n",
    "def tokenizer(text):\n",
    "    ''' Tokenize each document with jieba, after stripping\n",
    "        newline characters.\n",
    "    '''\n",
    "    text = [jieba.lcut(document.replace('\\n', '')) for document in text]\n",
    "    return text\n",
    "\n",
    "combined = tokenizer(combined)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training a Word2vec model...\n"
     ]
    }
   ],
   "source": [
    "from gensim.models.word2vec import Word2Vec\n",
    "from gensim.corpora.dictionary import Dictionary\n",
    "from keras.preprocessing import sequence\n",
    "import multiprocessing\n",
    "\n",
    "cpu_count = multiprocessing.cpu_count() # 4\n",
    "vocab_dim = 100\n",
    "n_iterations = 10  # ideally more..\n",
    "n_exposures = 10  # keep only words that appear at least 10 times\n",
    "window_size = 7\n",
    "n_epoch = 4\n",
    "input_length = 100\n",
    "maxlen = 100\n",
    "\n",
    "def create_dictionaries(model=None,\n",
    "                        combined=None):\n",
    "    ''' This function does a number of jobs:\n",
    "        1- Creates a word to index mapping\n",
    "        2- Creates a word to vector mapping\n",
    "        3- Transforms the Training and Testing Dictionaries\n",
    "\n",
    "    '''\n",
    "    if (combined is not None) and (model is not None):\n",
    "        gensim_dict = Dictionary()\n",
    "        gensim_dict.doc2bow(model.vocab.keys(),\n",
    "                            allow_update=True)\n",
    "        # index 0 is reserved for low-frequency / unseen words, hence k+1\n",
    "        w2indx = {v: k+1 for k, v in gensim_dict.items()}  # index for each word with frequency >= 10\n",
    "        w2vec = {word: model[word] for word in w2indx.keys()}  # vector for each word with frequency >= 10\n",
    "\n",
    "        def parse_dataset(combined):  # closure, used only here\n",
    "            ''' Words become integers\n",
    "            '''\n",
    "            data=[]\n",
    "            for sentence in combined:\n",
    "                new_txt = []\n",
    "                for word in sentence:\n",
    "                    try:\n",
    "                        new_txt.append(w2indx[word])\n",
    "                    except KeyError:\n",
    "                        new_txt.append(0)  # words below the frequency threshold map to index 0\n",
    "                data.append(new_txt)\n",
    "            return data # word=>index\n",
    "        combined = parse_dataset(combined)\n",
    "        combined = sequence.pad_sequences(combined, maxlen=maxlen)  # index sequence per sentence; low-frequency words are index 0\n",
    "        return w2indx, w2vec,combined\n",
    "    else:\n",
    "        print 'No data provided...'\n",
    "\n",
    "\n",
    "# Build the word dictionary and return each word's index, its vector, and the index sequence for every sentence\n",
    "def word2vec_train(combined):\n",
    "\n",
    "    model = Word2Vec(size=vocab_dim,\n",
    "                     min_count=n_exposures,\n",
    "                     window=window_size,\n",
    "                     workers=cpu_count,\n",
    "                     iter=n_iterations)\n",
    "    model.build_vocab(combined) # input: list\n",
    "    model.train(combined)\n",
    "    model.save('../model/Word2vec_model.pkl')\n",
    "    index_dict, word_vectors,combined = create_dictionaries(model=model,combined=combined)\n",
    "    return   index_dict, word_vectors,combined\n",
    "\n",
    "print 'Training a Word2vec model...'\n",
    "index_dict, word_vectors,combined=word2vec_train(combined)"
   ]
  },
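  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cell above assumes a pre-1.0 gensim API (`model.vocab`, `model[word]`, `model.train(sentences)`). On gensim >= 1.0 the vocabulary moved to `model.wv` and `train` requires explicit corpus-size and epoch arguments; a hedged sketch of the newer calls (not the version this notebook ran under) would be:\n",
    "\n",
    "```python\n",
    "model.build_vocab(combined)\n",
    "model.train(combined, total_examples=model.corpus_count, epochs=model.iter)\n",
    "words = model.wv.vocab.keys()\n",
    "vec = model.wv[word]\n",
    "```"
   ]
  },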
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Setting up Arrays for Keras Embedding Layer...\n",
      "x_train.shape and y_train.shape:\n",
      "(16870, 100) (16870, 3)\n",
      "Defining a Simple Keras Model...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/zcy/anaconda2/lib/python2.7/site-packages/ipykernel_launcher.py:38: UserWarning: Update your `LSTM` call to the Keras 2 API: `LSTM(units=50, activation=\"tanh\", recurrent_activation=\"hard_sigmoid\")`\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Compiling the Model...\n",
      "Train...\n",
      "Epoch 1/4\n",
      "16870/16870 [==============================] - 78s 5ms/step - loss: 0.9022 - acc: 0.6408\n",
      "Epoch 2/4\n",
      "16870/16870 [==============================] - 78s 5ms/step - loss: 0.7677 - acc: 0.7836\n",
      "Epoch 3/4\n",
      "16870/16870 [==============================] - 76s 4ms/step - loss: 0.6804 - acc: 0.8724\n",
      "Epoch 4/4\n",
      "16870/16870 [==============================] - 56s 3ms/step - loss: 0.6627 - acc: 0.8888\n",
      "Evaluate...\n",
      "4218/4218 [==============================] - 4s 947us/step\n",
      "Test score: [0.67400303481596235, 0.87624466577487403]\n"
     ]
    }
   ],
   "source": [
    "from sklearn.cross_validation import train_test_split\n",
    "from keras.models import Sequential\n",
    "from keras.layers.embeddings import Embedding\n",
    "from keras.layers.recurrent import LSTM\n",
    "from keras.layers.core import Dense, Dropout,Activation\n",
    "from keras.models import model_from_yaml\n",
    "np.random.seed(1337)  # For Reproducibility\n",
    "import sys\n",
    "sys.setrecursionlimit(1000000)\n",
    "import yaml\n",
    "import keras\n",
    "\n",
    "batch_size = 32\n",
    "\n",
    "\n",
    "def get_data(index_dict,word_vectors,combined,y):\n",
    "\n",
    "    n_symbols = len(index_dict) + 1  # vocabulary size; +1 because index 0 is reserved for low-frequency words\n",
    "    embedding_weights = np.zeros((n_symbols, vocab_dim))  # index 0 keeps the all-zero vector\n",
    "    for word, index in index_dict.items():  # starting from index 1, assign each word its vector\n",
    "        embedding_weights[index, :] = word_vectors[word]\n",
    "    x_train, x_test, y_train, y_test = train_test_split(combined, y, test_size=0.2)\n",
    "    y_train = keras.utils.to_categorical(y_train,num_classes=3) \n",
    "    y_test = keras.utils.to_categorical(y_test,num_classes=3)\n",
    "    # print x_train.shape,y_train.shape\n",
    "    return n_symbols,embedding_weights,x_train,y_train,x_test,y_test\n",
    "\n",
    "\n",
    "## Define the network architecture\n",
    "def train_lstm(n_symbols,embedding_weights,x_train,y_train,x_test,y_test):\n",
    "    print 'Defining a Simple Keras Model...'\n",
    "    model = Sequential()  # or Graph or whatever\n",
    "    model.add(Embedding(output_dim=vocab_dim,\n",
    "                        input_dim=n_symbols,\n",
    "                        mask_zero=True,\n",
    "                        weights=[embedding_weights],\n",
    "                        input_length=input_length))  # Adding Input Length\n",
    "    model.add(LSTM(units=50, activation='tanh', recurrent_activation='hard_sigmoid'))\n",
    "    model.add(Dropout(0.5))\n",
    "    model.add(Dense(3, activation='softmax'))  # fully connected output layer, 3 classes\n",
    "\n",
    "    print 'Compiling the Model...'\n",
    "    model.compile(loss='categorical_crossentropy',\n",
    "                  optimizer='adam',metrics=['accuracy'])\n",
    "\n",
    "    print \"Train...\" # batch_size=32\n",
    "    model.fit(x_train, y_train, batch_size=batch_size, epochs=n_epoch,verbose=1)\n",
    "\n",
    "    print \"Evaluate...\"\n",
    "    score = model.evaluate(x_test, y_test,\n",
    "                                batch_size=batch_size)\n",
    "\n",
    "    yaml_string = model.to_yaml()\n",
    "    with open('../model/lstm.yml', 'w') as outfile:\n",
    "        outfile.write( yaml.dump(yaml_string, default_flow_style=True) )\n",
    "    model.save_weights('../model/lstm.h5')\n",
    "    print 'Test score:', score\n",
    "\n",
    "print 'Setting up Arrays for Keras Embedding Layer...'\n",
    "n_symbols,embedding_weights,x_train,y_train,x_test,y_test=get_data(index_dict, word_vectors,combined,y)\n",
    "print \"x_train.shape and y_train.shape:\"\n",
    "print x_train.shape,y_train.shape\n",
    "train_lstm(n_symbols,embedding_weights,x_train,y_train,x_test,y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "Prediction\n",
    "\"\"\"\n",
    "import jieba\n",
    "import numpy as np\n",
    "from gensim.models.word2vec import Word2Vec\n",
    "from gensim.corpora.dictionary import Dictionary\n",
    "from keras.preprocessing import sequence\n",
    "\n",
    "import yaml\n",
    "from keras.models import model_from_yaml\n",
    "np.random.seed(1337)  # For Reproducibility\n",
    "import sys\n",
    "sys.setrecursionlimit(1000000)\n",
    "\n",
    "# define parameters\n",
    "maxlen = 100\n",
    "\n",
    "def create_dictionaries(model=None,\n",
    "                        combined=None):\n",
    "    ''' This function does a number of jobs:\n",
    "        1- Creates a word to index mapping\n",
    "        2- Creates a word to vector mapping\n",
    "        3- Transforms the Training and Testing Dictionaries\n",
    "\n",
    "    '''\n",
    "    if (combined is not None) and (model is not None):\n",
    "        gensim_dict = Dictionary()\n",
    "        gensim_dict.doc2bow(model.vocab.keys(),\n",
    "                            allow_update=True)\n",
    "        # index 0 is reserved for low-frequency / unseen words, hence k+1\n",
    "        w2indx = {v: k+1 for k, v in gensim_dict.items()}  # index for each word with frequency >= 10\n",
    "        w2vec = {word: model[word] for word in w2indx.keys()}  # vector for each word with frequency >= 10\n",
    "\n",
    "        def parse_dataset(combined):  # closure, used only here\n",
    "            ''' Words become integers\n",
    "            '''\n",
    "            data=[]\n",
    "            for sentence in combined:\n",
    "                new_txt = []\n",
    "                for word in sentence:\n",
    "                    try:\n",
    "                        new_txt.append(w2indx[word])\n",
    "                    except KeyError:\n",
    "                        new_txt.append(0)  # words below the frequency threshold map to index 0\n",
    "                data.append(new_txt)\n",
    "            return data # word=>index\n",
    "        combined = parse_dataset(combined)\n",
    "        combined = sequence.pad_sequences(combined, maxlen=maxlen)  # index sequence per sentence; low-frequency words are index 0\n",
    "        return w2indx, w2vec,combined\n",
    "    else:\n",
    "        print 'No data provided...'\n",
    "\n",
    "\n",
    "def input_transform(string):\n",
    "    words=jieba.lcut(string)\n",
    "    words=np.array(words).reshape(1,-1)\n",
    "    model=Word2Vec.load('../model/Word2vec_model.pkl')\n",
    "    _,_,combined=create_dictionaries(model,words)\n",
    "    return combined\n",
    "\n",
    "\n",
    "def lstm_predict(string):\n",
    "    print 'loading model......'\n",
    "    with open('../model/lstm.yml', 'r') as f:\n",
    "        yaml_string = yaml.load(f)\n",
    "    model = model_from_yaml(yaml_string)\n",
    "\n",
    "    print 'loading weights......'\n",
    "    model.load_weights('../model/lstm.h5')\n",
    "    model.compile(loss='categorical_crossentropy',\n",
    "                  optimizer='adam',metrics=['accuracy'])\n",
    "    data=input_transform(string)\n",
    "    data = data.reshape(1, -1)\n",
    "    #print data\n",
    "    result=model.predict_classes(data)\n",
    "    print result # [[1]]\n",
    "    if result[0]==1:\n",
    "        print string,' positive'\n",
    "    elif result[0]==0:\n",
    "        print string,' neutral'\n",
    "    else:\n",
    "        print string,' negative'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loading model......\n",
      "loading weights......\n",
      "[1]\n",
      "不错不错  positive\n"
     ]
    }
   ],
   "source": [
    "# string='酒店的环境非常好，价格也便宜，值得推荐'\n",
    "# string='手机质量太差了，傻逼店家，赚黑心钱，以后再也不会买了'\n",
    "# string = \"这是我看过文字写得很糟糕的书，因为买了，还是耐着性子看完了，但是总体来说不好，文字、内容、结构都不好\"\n",
    "# string = \"虽说是职场指导书，但是写的有点干涩，我读一半就看不下去了！\"\n",
    "# string = \"书的质量还好，但是内容实在没意思。本以为会侧重心理方面的分析，但实际上是婚外恋内容。\"\n",
    "# string = \"不是太好\"\n",
    "# string = \"不错不错\"\n",
    "string = \"非常好非常好！！\"\n",
    "# string = \"真的一般，没什么可以学习的\"\n",
    "\n",
    "lstm_predict(string)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
