{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Deep Transition Dependency Parser in PyTorch\n",
    "\n",
    "In this problem set, you will implement a deep transition dependency parser in [PyTorch](https://pytorch.org).  PyTorch is a popular deep learning framework providing a variety of components for constructing neural networks.  You will see how more complicated network architectures than simple feed-forward networks that you have learned in earlier classes can be used to solve a structured prediction problem."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import gtnlplib.parsing as parsing\n",
    "import gtnlplib.data_tools as data_tools\n",
    "import gtnlplib.constants as consts\n",
    "import gtnlplib.evaluation as evaluation\n",
    "import gtnlplib.utils as utils\n",
    "import gtnlplib.feat_extractors as feat_extractors\n",
    "import gtnlplib.neural_net as neural_net\n",
    "\n",
    "import torch\n",
    "import torch.optim as optim\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.autograd as ag\n",
    "\n",
    "from collections import defaultdict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Read in the dataset\n",
    "dataset = data_tools.Dataset(consts.TRAIN_FILE, consts.DEV_FILE, consts.TEST_FILE)\n",
    "\n",
    "# Assign each word a unique index, including the two special tokens\n",
    "word_to_ix = { word: i for i, word in enumerate(dataset.vocab) }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Some constants to keep around\n",
    "LSTM_NUM_LAYERS = 1\n",
    "TEST_EMBEDDING_DIM = 5\n",
    "WORD_EMBEDDING_DIM = 64\n",
    "STACK_EMBEDDING_DIM = 100\n",
    "NUM_FEATURES = 3\n",
    "\n",
    "# Hyperparameters\n",
    "ETA_0 = 0.01\n",
    "DROPOUT = 0.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def make_dummy_parser_state(sentence):\n",
    "    dummy_embeds = [ w + \"-EMBEDDING\" for w in sentence ] + [consts.END_OF_INPUT_TOK + \"-EMBEDDING\"]\n",
    "    return parsing.ParserState(sentence + [consts.END_OF_INPUT_TOK], dummy_embeds, utils.DummyCombiner())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# High-Level Overview of the Parser\n",
    "Be sure that you have reviewed the notes on transition-based dependency parsing, and are familiar with the relevant terminology. One small difference is that the text describes `arc-left` and `arc-right` actions, which create arcs between the top of the stack and the front of the buffer; in contrast, the parser you will implement here uses `reduce-left` and `reduce-right` actions, which create arcs between the top two items on the stack.\n",
    "\n",
    "Parsing will proceed as follows:\n",
    "* Initialize your parsing stack and input buffer.\n",
    "* At each step, extract some features.  These can be anything: words in the sentence, the configuration of the stack, the configuration of the input buffer, the previous action, etc.\n",
    "* Send these features through a multi-layer perceptron (MLP) to get a probability distribution over actions (SHIFT, REDUCE_L, REDUCE_R).  The next action you choose is the one with the highest probability.\n",
    "* If the action is either reduce left or reduce right, you use a neural network to combine the items being reduced and get a dense output to place back on the stack.\n",
    "\n",
    "The key classes you will fill in code for are\n",
    "* Feature extraction in `feat_extractors.py`\n",
    "* The `ParserState` class, which keeps track of the input buffer and parse stack, and offers a public interface for doing the parsing actions to update the state\n",
    "* The `TransitionParser` class, which is a PyTorch module where the core parsing logic resides in `parsing.py`.\n",
    "* The neural network components in `neural_net.py`\n",
    "\n",
    "The network components are compartmentalized as follows:\n",
    "* **Parsing**: `TransitionParser` is the base component that contains and coordinates the other substitutable components.\n",
    "* **Embedding Lookup**: You will implement two flavors of getting embeddings.  These embeddings are used to initialize the input buffer, and will be shifted on the stack / serve as inputs to the combiner networks (explained below).\n",
    "  - `VanillaWordEmbeddingLookup` just gets embeddings from a lookup table, one per word in the sentence.\n",
    "  - `BiLSTMWordEmbeddingLookup` is more fancy, running a sequence model in both directions over the sentence.  The hidden state at step t is the embedding for the t'th word of the sentence.  \n",
    "* **Action Choosing**: This is a simple multilayer perceptron (MLP) that outputs log probabilities over actions\n",
    "* **Combiners**: These networks take the two embeddings of the items being reduced, and combine them into a single embedding. You will create two version of this:\n",
    "  - `MLPCombinerNetwork` takes the two input embeddings and gives a dense output\n",
    "  - `LSTMCombinerNetwork` does a sequence model, where the output embedding is the hidden state of the next timestep."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example\n",
    "\n",
    "The following is how the input buffer and stack look at each step of a parse, up to the first reduction.  The input sentence is \"the dog ran away\".  Our action chooser network takes the top two elements of the stack plus a one-token lookahead in the input buffer.  $C(x,y)$ refers to calling our combiner network on arguments $x, y$.  Also let $A$ be the set of actions: $\\{ \\text{SHIFT}, \\text{REDUCE-L}, \\text{REDUCE-R} \\}$, and let $q_w$ be the embedding for word $w$.\n",
    "\n",
    "### Step 1. \n",
    "  * Input Buffer: $\\left[ q_\\text{the}, q_\\text{dog}, q_\\text{ran}, q_\\text{away}, q_\\text{END-INPUT} \\right]$\n",
    "  * Stack: $\\left[ q_\\text{NULL-STACK}, q_\\text{NULL-STACK} \\right]$\n",
    "  * Action: $ \\text{argmax}_{a \\in A} \\ \\text{ActionChooser}(q_\\text{NULL-STACK}, q_\\text{NULL-STACK}, \\overbrace{q_\\text{the}}^\\text{lookahead}) \\Rightarrow \\text{SHIFT}$\n",
    "\n",
    "### Step 2\n",
    "  * Input Buffer: $\\left[ q_\\text{dog}, q_\\text{ran}, q_\\text{away}, q_\\text{END-INPUT} \\right]$\n",
    "  * Stack: $\\left[ q_\\text{NULL-STACK}, q_\\text{NULL-STACK}, q_\\text{the} \\right]$\n",
    "  * Action: $ \\text{argmax}_{a \\in A} \\ \\text{ActionChooser}(q_\\text{NULL-STACK}, q_\\text{the}, q_\\text{dog}) \\Rightarrow \\text{SHIFT}$\n",
    "\n",
    "### Step 3\n",
    "  * Input Buffer: $\\left[ q_\\text{ran}, q_\\text{away}, q_\\text{END-INPUT} \\right]$\n",
    "  * Stack: $\\left[ q_\\text{NULL-STACK}, q_\\text{NULL-STACK}, q_\\text{the}, q_\\text{dog} \\right]$\n",
    "  * Action: $ \\text{argmax}_{a \\in A} \\ \\text{ActionChooser}(q_\\text{the}, q_\\text{dog}, q_\\text{ran}) \\Rightarrow \\text{REDUCE-L}$\n",
    "\n",
    "### Step 4\n",
    "  * Input Buffer: $\\left[ q_\\text{ran}, q_\\text{away}, q_\\text{END-INPUT} \\right]$\n",
    "  * Stack: $\\left[ q_\\text{NULL-STACK}, q_\\text{NULL-STACK}, C(q_\\text{dog}, q_\\text{the}) \\right]$\n",
    "  \n",
    "For each word $w_m$, the parser keeps track of: the embedding $q_{w_m}$, the word itself $w_m$, and the word's position in the sentence $m$. The combination action should store the word and the index for the head word in the relation. The combined embedding may be a function of the embeddings for both the head and modifier words.\n",
    "\n",
    "Before beginning, I recommend completing the parse, drawing the input buffer and stack at each step, and explicity listing the arguments to the action chooser."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1. Managing and Updating the Parser State (1.5 points)\n",
    "\n",
    "In this part of the assignment, you will work with the ParserState class, that keeps track of the parsers input buffer and stack."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 1.1: Implementing Reduce (1 point)\n",
    "Implement the reduction operation of the ParserState in parsing.py, in the function \\_reduce.\n",
    "\n",
    "The way reduction is done is slightly different from the notes.  In the notes, reduction takes place between the top element of the stack and the first element of the input buffer.  Here, reduction takes place between the top two elements of the stack.\n",
    "\n",
    "At this step, there are no embeddings, but don't forget to make the call to the combiner network component.\n",
    "\n",
    "Hints:\n",
    "* Before starting, read the comments in \\_reduce, and look at the \\_\\_init\\_\\_ function of ParserState to see how it represents the stack and input buffer.\n",
    "* The `StackEntry` and `DepGraphEdge` tuples will be part of your solution, so take a look at how these are used elsewhere in the source.\n",
    "* In particular, you will want to push a new `StackEntry` onto the stack, and return a `DepGraphEdge`.\n",
    "* If you have trouble understanding the representation, print parser_state.stack or parser_state.input_buffer directly.  (If you just print parser_state, it will output a pretty-printed version)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Stack: []\n",
      "Input Buffer: ['They', 'can', 'fish', '<END-OF-INPUT>']\n",
      "\n",
      "Stack: ['They', 'can']\n",
      "Input Buffer: ['fish', '<END-OF-INPUT>']\n",
      "\n",
      "Reduction Made Edge: Head: ('can', 1), Modifier: ('They', 0) \n",
      "\n",
      "Stack: ['can']\n",
      "Input Buffer: ['fish', '<END-OF-INPUT>']\n",
      "\n"
     ]
    }
   ],
   "source": [
    "test_sentence = \"They can fish\".split()+[consts.END_OF_INPUT_TOK]\n",
    "parser_state = parsing.ParserState(test_sentence, [None] * len(test_sentence), utils.DummyCombiner())\n",
    "\n",
    "print parser_state\n",
    "\n",
    "parser_state.shift()\n",
    "parser_state.shift()\n",
    "print parser_state\n",
    "\n",
    "reduction = parser_state.reduce_left()\n",
    "print \"Reduction Made Edge: Head: {}, Modifier: {}\".format(reduction[0], reduction[1]), \"\\n\"\n",
    "print parser_state"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 1.2: Parser Terminating Condition (0.5 points)\n",
    "In this short (one line) deliverable, implement done_parsing() in ParserState.  Note\n",
    "we add an END_INPUT_TOKEN to the end of the sentence (this token could be a helpful feature).  Think about what the input buffer and stack look like at the end of a parse."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Stack: ['can']\n",
      "Input Buffer: ['fish', '<END-OF-INPUT>']\n",
      " False \n",
      "\n",
      "Stack: ['can', 'fish']\n",
      "Input Buffer: ['<END-OF-INPUT>']\n",
      " False \n",
      "\n",
      "Stack: ['can']\n",
      "Input Buffer: ['<END-OF-INPUT>']\n",
      " True \n",
      "\n"
     ]
    }
   ],
   "source": [
    "print parser_state, parser_state.done_parsing(),'\\n'\n",
    "parser_state.shift()\n",
    "print parser_state, parser_state.done_parsing(),'\\n'\n",
    "parser_state.reduce_right()\n",
    "print parser_state, parser_state.done_parsing(),'\\n'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2. Neural Network for Action Decisions (3.5 points)\n",
    "In this part of the assignment, you will use PyTorch to create a neural network which examines the current state of the parse and makes the decision to either shift, reduce left, or reduce right."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 2.1: Word Embedding Lookup (1 point)\n",
    "Implement the class `VanillaWordEmbeddingLookup` in `neural_net.py`.\n",
    "\n",
    "This involves adding code to the `__init__` and `forward` methods. \n",
    "- In the `__init__` method, you want make sure that instances of the class can store the embeddings\n",
    "- In the `forward` method, you should return a list of Torch variables, representing the looked up embeddings for each word in the sequence \n",
    "\n",
    "If you didn't do the tutorial, you will want to read the [docs](http://pytorch.org/docs/nn.html#embedding) on how to create a lookup table for your word embeddings.\n",
    "\n",
    "Hint: You will have to turn the input, which is a list of strings (the words in the sentence), into a format that your embedding lookup table can take, which is a torch.LongTensor.  So that we can automatically backprop, it is wrapped in a Variable.  utils.sequence_to_variable takes care of this for you."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<type 'list'>\n",
      "2 \n",
      "\n",
      "Embedding for William:\n",
      " Variable containing:\n",
      "-2.9718  1.7070 -0.4305 -2.2820  0.5237\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "torch.manual_seed(1) # DO NOT CHANGE\n",
    "reload(neural_net)\n",
    "test_sentence = \"William Faulkner\".split()\n",
    "test_word_to_ix = { \"William\": 0, \"Faulkner\": 1 }\n",
    "\n",
    "word_embedder = neural_net.VanillaWordEmbeddingLookup(test_word_to_ix, TEST_EMBEDDING_DIM)\n",
    "embeds = word_embedder(test_sentence)\n",
    "print type(embeds)\n",
    "print len(embeds), \"\\n\"\n",
    "print \"Embedding for William:\\n {}\".format(embeds[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 2.2: Feature Extraction (0.5 points)\n",
    "Fill in the SimpleFeatureExtractor class in feat_extractors.py to give the following 3 features\n",
    "* The embedding of the 2nd to top of the stack\n",
    "* The embedding of the top of the stack\n",
    "* The embedding of the next token in the input buffer (one-token lookahead)\n",
    "\n",
    "If at this point you have not poked around ParserState to see how it stores the state, now would be a good time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Embedding for 'The':\n",
      " Variable containing:\n",
      " 0.8407  0.5510  0.3863  0.9124 -0.8410\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n",
      "Embedding for 'Sound':\n",
      " Variable containing:\n",
      "-2.9718  1.7070 -0.4305 -2.2820  0.5237\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n",
      "Embedding for 'and' (from buffer lookahead):\n",
      " Variable containing:\n",
      " 0.0004 -1.2039  3.5283  0.4434  0.5848\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "torch.manual_seed(1)\n",
    "test_sentence = \"The Sound and the Fury\".split()\n",
    "test_word_to_ix = { word: i for i, word in enumerate(set(test_sentence)) }\n",
    "\n",
    "embedder = neural_net.VanillaWordEmbeddingLookup(test_word_to_ix, TEST_EMBEDDING_DIM)\n",
    "embeds = embedder(test_sentence)\n",
    "\n",
    "state = parsing.ParserState(test_sentence, embeds, utils.DummyCombiner())\n",
    "\n",
    "state.shift()\n",
    "state.shift()\n",
    "feat_extractor = feat_extractors.SimpleFeatureExtractor()\n",
    "feats = feat_extractor.get_features(state)\n",
    "\n",
    "print \"Embedding for 'The':\\n {}\".format(feats[0])\n",
    "print \"Embedding for 'Sound':\\n {}\".format(feats[1])\n",
    "print \"Embedding for 'and' (from buffer lookahead):\\n {}\".format(feats[2])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 2.3: MLP for Choosing Actions (1 point)\n",
    "\n",
    "Implement the class `neural_net.ActionChooserNetwork` according to the specification in `neural_net.py`.\n",
    "\n",
    "You will want to use the `utils.concat_and_flatten` function. We provide this function because the Tensor reshaping code can get somewhat terse. It takes the list of embeddings passed in (that come from your feature extractor) and concatenates them to one long row vector.\n",
    "\n",
    "This network takes as input the features from your feature extractor, concatenates them, runs them through an MLP and outputs log probabilities over actions.\n",
    "\n",
    "Hint:\n",
    "\n",
    "- http://pytorch.org/docs/nn.html#non-linear-activations\n",
    "- http://pytorch.org/docs/nn.html#linear-layers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Variable containing:\n",
      "-1.5347 -1.3445 -0.6466\n",
      "[torch.FloatTensor of size 1x3]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "torch.manual_seed(1) # DO NOT CHANGE, you can compare my output below to yours\n",
    "act_chooser = neural_net.ActionChooserNetwork(TEST_EMBEDDING_DIM * NUM_FEATURES)\n",
    "feats = [ ag.Variable(torch.randn(1, TEST_EMBEDDING_DIM)) for _ in xrange(NUM_FEATURES) ] # make some dummy feature embeddings\n",
    "log_probs = act_chooser(feats)\n",
    "print log_probs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 2.4: Network for Combining Stack Items (1 point)\n",
    "Implement the class `neural_net.MLPCombinerNetwork` according to the specification in `neural_net.py`.\n",
    "Again, `utils.concat_and_flatten` will come in handy.\n",
    "\n",
    "Recall that what this component does is take two embeddings, the head and modifier, during a reduction and output a combined embedding, which is then pushed back onto the stack during parsing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Variable containing:\n",
      " 0.6063 -0.0110  0.6530 -0.6196 -0.1051\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "torch.manual_seed(1) # DO NOT CHANGE\n",
    "combiner = neural_net.MLPCombinerNetwork(TEST_EMBEDDING_DIM)\n",
    "\n",
    "# Again, make dummy inputs\n",
    "head_feat = ag.Variable(torch.randn(1, TEST_EMBEDDING_DIM))\n",
    "modifier_feat = ag.Variable(torch.randn(1, TEST_EMBEDDING_DIM))\n",
    "combined = combiner(head_feat, modifier_feat)\n",
    "print combined"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3. Return of the Parser (2 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 3.1: Parser Training Code (1.5 points)\n",
    "\n",
    "**Note:** There are two unit tests for this deliverable, one worth 1 point, one worth 0.5.\n",
    "\n",
    "You will implement the forward() function in gtnlplib.parsing.TransitionParser.\n",
    "It is important to understand the difference between the following tasks:\n",
    "\n",
    "* Training: Training the model involves passing it sentences along with the correct sequence of actions, and updating weights.\n",
    "* Evaluation: We can evaluate the parser by passing it sentences along with the correct sequence of actions, and see how many actions it predicts correctly.  This is identical to training, except the weights are not updated after making a prediction.\n",
    "* Prediction: After setting the weights, we give it a raw sentence (no gold-standard actions) and ask it for the correct dependency graph.\n",
    "\n",
    "At this point, it is necessary to have all of the components in place for constructing the parser.\n",
    "\n",
    "The parsing logic is roughly as follows:\n",
    "* Loop until parsing state is in its terminating state (deliverable 1.2)\n",
    "* Get the features from the parsing state (deliverable 2.1)\n",
    "* Send them through your action chooser network to get log probabilities over actions (deliverable 2.3)\n",
    "* If you have `gold_actions`, do that.  Otherwise (when predicting), take the argmax of your log probabilities and do that.\n",
    "  - Argmax is gross in PyTorch, so a function is provided for you in utils.argmax.\n",
    "  - While the gold actions will always be valid, if you are not provided gold actions, you must make sure\n",
    "    that any action you do is legal.  You cannot shift when the input buffer contains only `END_OF_INPUT_TOK` (this token should *NOT* be shifted onto the stack) and you cannot reduce when the stack contains fewer than 2 elements.\n",
    "    **If your network chooses `SHIFT` when it is not legal, just do `REDUCE_R`**\n",
    "\n",
    "Make sure to keep track of the things that the function wants to keep track of\n",
    "* Do all of your actions by calling the appropriate function on your `parser_state`\n",
    "* Append each output `Variable` from your `action_chooser` to the outputs list\n",
    "* Append each action you do to `actions_done`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "test_sentence = \"The man ran away\".split()\n",
    "test_word_to_ix = { word: i for i, word in enumerate(set(test_sentence)) }\n",
    "test_word_to_ix[consts.END_OF_INPUT_TOK] = len(test_word_to_ix)\n",
    "test_sentence_vocab = set(test_sentence)\n",
    "gold_actions = [\"SHIFT\", \"SHIFT\", \"REDUCE_L\", \"SHIFT\", \"REDUCE_L\", \"SHIFT\", \"REDUCE_R\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "feat_extractor = feat_extractors.SimpleFeatureExtractor()\n",
    "word_embedding_lookup = neural_net.VanillaWordEmbeddingLookup(test_word_to_ix, STACK_EMBEDDING_DIM)\n",
    "action_chooser = neural_net.ActionChooserNetwork(STACK_EMBEDDING_DIM * NUM_FEATURES)\n",
    "combiner_network = neural_net.MLPCombinerNetwork(STACK_EMBEDDING_DIM)\n",
    "parser = parsing.TransitionParser(feat_extractor, word_embedding_lookup,\n",
    "                                     action_chooser, combiner_network)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "set([DepGraphEdge(head=('ran', 2), modifier=('away', 3)), DepGraphEdge(head=('ran', 2), modifier=('man', 1)), DepGraphEdge(head=('<ROOT>', -1), modifier=('ran', 2)), DepGraphEdge(head=('man', 1), modifier=('The', 0))])\n",
      "[0, 0, 1, 0, 1, 0, 2]\n"
     ]
    }
   ],
   "source": [
    "output, depgraph, actions_done = parser(test_sentence, gold_actions)\n",
    "print depgraph\n",
    "print actions_done"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Now Train the Parser!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Training your parser may take some time. On the test below, I get about 5 seconds per loop (i7 6700k). \n",
    "\n",
    "- There are 10,000 training sentences, so multiply this measurement by 100 to get your training time.\n",
    "- One optimization trick is to that if you can do several things with a single PyTorch call, this will probably be faster than writing a PyTorch call that does one thing, and then calling it several times. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "torch.manual_seed(1)\n",
    "feat_extractor = feat_extractors.SimpleFeatureExtractor()\n",
    "word_embedding_lookup = neural_net.VanillaWordEmbeddingLookup(word_to_ix, STACK_EMBEDDING_DIM)\n",
    "action_chooser = neural_net.ActionChooserNetwork(STACK_EMBEDDING_DIM * NUM_FEATURES)\n",
    "combiner_network = neural_net.MLPCombinerNetwork(STACK_EMBEDDING_DIM)\n",
    "parser = parsing.TransitionParser(feat_extractor, word_embedding_lookup,\n",
    "                                     action_chooser, combiner_network)\n",
    "optimizer = optim.SGD(parser.parameters(), lr=ETA_0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.687275525911  Loss: 27.0280368376\n",
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.837609030272  Loss: 15.3356624376\n",
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.91713699333  Loss: 8.99367914472\n",
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.956900974859  Loss: 4.93015527138\n",
      "1 loop, best of 3: 3.96 s per loop\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "parsing.train(dataset.training_data[:100], parser, optimizer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{DepGraphEdge(head=('<ROOT>', -1), modifier=('restrict', 4)),\n",
       " DepGraphEdge(head=('RTC', 6), modifier=('the', 5)),\n",
       " DepGraphEdge(head=('Treasury', 8), modifier=(',', 11)),\n",
       " DepGraphEdge(head=('Treasury', 8), modifier=('RTC', 6)),\n",
       " DepGraphEdge(head=('Treasury', 8), modifier=('only', 10)),\n",
       " DepGraphEdge(head=('Treasury', 8), modifier=('to', 7)),\n",
       " DepGraphEdge(head=('agency', 14), modifier=('the', 13)),\n",
       " DepGraphEdge(head=('authorization', 18), modifier=('congressional', 17)),\n",
       " DepGraphEdge(head=('authorization', 18), modifier=('specific', 16)),\n",
       " DepGraphEdge(head=('intends', 2), modifier=('The', 0)),\n",
       " DepGraphEdge(head=('intends', 2), modifier=('bill', 1)),\n",
       " DepGraphEdge(head=('only', 10), modifier=('borrowings', 9)),\n",
       " DepGraphEdge(head=('receives', 15), modifier=('agency', 14)),\n",
       " DepGraphEdge(head=('receives', 15), modifier=('authorization', 18)),\n",
       " DepGraphEdge(head=('restrict', 4), modifier=('.', 19)),\n",
       " DepGraphEdge(head=('restrict', 4), modifier=('intends', 2)),\n",
       " DepGraphEdge(head=('restrict', 4), modifier=('to', 3)),\n",
       " DepGraphEdge(head=('restrict', 4), modifier=('unless', 12)),\n",
       " DepGraphEdge(head=('unless', 12), modifier=('Treasury', 8)),\n",
       " DepGraphEdge(head=('unless', 12), modifier=('receives', 15))}"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# if this call doesn't work, something is wrong with your parser's behavior when gold labels aren't provided\n",
    "parser.predict(dataset.dev_data[0].sentence)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1\n",
      "Number of instances: 997    Number of network actions: 39025\n",
      "Acc: 0.825035233824  Loss: 17.4108023486\n",
      "Dev Evaluation\n",
      "Number of instances: 399    Number of network actions: 15719\n",
      "Acc: 0.823843755964  Loss: 16.9211852149\n",
      "F-Score: 0.500566790988\n",
      "Attachment Score: 0.486784960913\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# train the thing for a while here.\n",
    "# Shouldn't take too long, even on a laptop\n",
    "for epoch in xrange(1):\n",
    "    print \"Epoch {}\".format(epoch+1)\n",
    "    parsing.train(dataset.training_data[:1000], parser, optimizer, verbose=True)\n",
    "    \n",
    "    print \"Dev Evaluation\"\n",
    "    parsing.evaluate(dataset.dev_data, parser, verbose=True)\n",
    "    print \"F-Score: {}\".format(evaluation.compute_metric(parser, dataset.dev_data, evaluation.fscore))\n",
    "    print \"Attachment Score: {}\".format(evaluation.compute_attachment(parser, dataset.dev_data))\n",
    "    print \"\\n\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 3.2: Test Data Predictions (0.5 points 4650, 0.25 points 7650)\n",
    "Run the code below to output your predictions on the test data and dev data.  You can run the dev test to verify you are correct up to this point.  The test data evaluation is for us."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "dev_sentences = [ sentence for sentence, _ in dataset.dev_data ]\n",
    "evaluation.output_preds(consts.D3_2_DEV_FILENAME, parser, dev_sentences)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "evaluation.output_preds(consts.D3_2_TEST_FILENAME, parser, dataset.test_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4. Evaluation and Training Improvements (3 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 4.1: Better Word Embeddings (1 point 4650, 0.5 points 7650)\n",
    "Implement the class BiLSTMWordEmbeddingLookup in neural_net.py.\n",
    "This class can replace your VanillaWordEmbeddingLookup.\n",
    "This class implements a sequence model over the sentence, where the t'th word's embedding is the hidden state at timestep t.\n",
    "This means that, rather than have our embeddings on the stack only include the semantics of a single word, our embeddings will contain information from all parts of the sentence (the LSTM will, in principle, learn what information is relevant)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<type 'list'>\n",
      "2 \n",
      "\n",
      "Embedding for Michael:\n",
      " Variable containing:\n",
      "\n",
      "Columns 0 to 9 \n",
      "-0.0134 -0.0766 -0.0746  0.0530 -0.0202  0.1845 -0.1455 -0.0734 -0.0072  0.0781\n",
      "\n",
      "Columns 10 to 19 \n",
      " 0.0354 -0.0723  0.0160  0.0915 -0.0200  0.1126  0.1395  0.0041  0.0919  0.0251\n",
      "\n",
      "Columns 20 to 29 \n",
      " 0.3126  0.0233  0.1408  0.1407 -0.2879 -0.1591 -0.0579  0.0207  0.0364 -0.3148\n",
      "\n",
      "Columns 30 to 39 \n",
      "-0.4017  0.1126  0.2589  0.0505 -0.1529 -0.0149  0.0705  0.0419 -0.1842  0.1084\n",
      "\n",
      "Columns 40 to 49 \n",
      "-0.1632 -0.0252 -0.0965 -0.0090  0.1427  0.1717  0.1267 -0.0724  0.3383 -0.0991\n",
      "\n",
      "Columns 50 to 59 \n",
      " 0.2505 -0.1585 -0.0338  0.2543  0.1364  0.1747 -0.0128  0.0472 -0.0284 -0.1095\n",
      "\n",
      "Columns 60 to 69 \n",
      "-0.2905  0.1631  0.0890  0.1824  0.0406  0.0039 -0.0506 -0.0266  0.0073  0.1715\n",
      "\n",
      "Columns 70 to 79 \n",
      " 0.0092 -0.3738 -0.0689  0.0460  0.1567 -0.0565  0.1381  0.0503 -0.0933  0.1842\n",
      "\n",
      "Columns 80 to 89 \n",
      "-0.0477  0.1206  0.0543  0.0678 -0.0886  0.0467 -0.2502  0.0426 -0.0566 -0.0431\n",
      "\n",
      "Columns 90 to 99 \n",
      " 0.0637 -0.0667  0.0312 -0.1330 -0.1285 -0.0477  0.0292 -0.1092 -0.0594  0.0528\n",
      "[torch.FloatTensor of size 1x100]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "torch.manual_seed(1) # DO NOT CHANGE\n",
    "test_sentence = \"Michael Collins\".split()\n",
    "test_word_to_ix = { \"Michael\": 0, \"Collins\": 1 }\n",
    "\n",
    "lstm_word_embedder = neural_net.BiLSTMWordEmbeddingLookup(test_word_to_ix,\n",
    "                                                          WORD_EMBEDDING_DIM,\n",
    "                                                          STACK_EMBEDDING_DIM,\n",
    "                                                          num_layers=LSTM_NUM_LAYERS,\n",
    "                                                          dropout=DROPOUT)\n",
    "    \n",
    "lstm_embeds = lstm_word_embedder(test_sentence)\n",
    "print type(lstm_embeds)\n",
    "print len(lstm_embeds), \"\\n\"\n",
    "print \"Embedding for Michael:\\n {}\".format(lstm_embeds[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 4.2: Pretrained Embeddings (0.5 points)\n",
    "\n",
    "Fill in the function `initialize_with_pretrained` in `utils.py`.\n",
    "\n",
    "It will take a word embedding lookup component and initialize its lookup table with pretrained embeddings.\n",
    "\n",
    "Note that you can create a Torch variable from a list of floats using `torch.Tensor()`. Googling for more information about how Torch stores parameters is allowed; I don't think you'll find the exact answer online (corollary: do not post the answer online)."
   ]
  },
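  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core of the idea can be sketched without PyTorch (the helper and names below are hypothetical, not the assignment's API): walk the vocabulary, and for every word that has a pretrained vector, overwrite that word's row in the lookup table. In the real component the replacement row is built with `torch.Tensor()` from the list of floats.\n",
    "\n",
    "```python\n",
    "def initialize_table_with_pretrained(pretrained, word_to_ix, table):\n",
    "    # Overwrite rows of a toy embedding table (a list of lists here;\n",
    "    # the real lookup stores a torch parameter) with pretrained vectors.\n",
    "    # Words without a pretrained vector keep their current row.\n",
    "    for word, ix in word_to_ix.items():\n",
    "        if word in pretrained:\n",
    "            table[ix] = list(pretrained[word])\n",
    "    return table\n",
    "\n",
    "word_to_ix = {\"four\": 0, \"score\": 1}\n",
    "table = [[0.0, 0.0], [0.0, 0.0]]\n",
    "initialize_table_with_pretrained({\"four\": [0.12, -0.11]}, word_to_ix, table)\n",
    "# table[0] is now the pretrained vector; table[1] is unchanged\n",
    "```"
   ]
  },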
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.12429751455783844, -0.11472601443529129, -0.5684014558792114, -0.396965891122818, 0.22938089072704315]\n"
     ]
    }
   ],
   "source": [
    "import cPickle\n",
    "pretrained_embeds = cPickle.load(open(consts.PRETRAINED_EMBEDS_FILE))\n",
    "print pretrained_embeds['four'][:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "embedder = neural_net.VanillaWordEmbeddingLookup(word_to_ix,64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Variable containing:\n",
       " 0.6730\n",
       "-1.7911\n",
       " 1.4701\n",
       " 1.5589\n",
       "-2.0735\n",
       "[torch.FloatTensor of size 5]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embedder.forward(['four'])[0][0,:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Variable containing:\n",
      " 0.1243\n",
      "-0.1147\n",
      "-0.5684\n",
      "-0.3970\n",
      " 0.2294\n",
      "[torch.FloatTensor of size 5]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "reload(utils);\n",
    "utils.initialize_with_pretrained(pretrained_embeds,embedder)\n",
    "print embedder.forward(['four'])[0][0,:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 4.3: Better Reduction Combination (1 point)\n",
    "\n",
    "Before, in order to combine two embeddings during a reduction, we just passed them through an MLP and got a dense output.  Now we will instead use a sequence model over the stack: the combined embedding from a reduction is the next time step of an LSTM.  Implement `LSTMCombinerNetwork` in `neural_net.py`."
   ]
  },
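  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The key difference from the MLP combiner is statefulness: the LSTM keeps its hidden state across reductions, so combining the same pair twice gives two different outputs (you can see this in the demo cell below). A torch-free toy that mimics this behavior (`RunningCombiner` is hypothetical, not assignment code):\n",
    "\n",
    "```python\n",
    "class RunningCombiner(object):\n",
    "    # Toy stand-in for LSTMCombinerNetwork: it carries state across\n",
    "    # calls, so each reduction is one more \"time step\" of the sequence\n",
    "    # model rather than an independent feed-forward pass.\n",
    "    def __init__(self):\n",
    "        self.state = 0.0\n",
    "\n",
    "    def combine(self, head, modifier):\n",
    "        # crude recurrent update; the real component runs an nn.LSTM\n",
    "        self.state = 0.5 * self.state + 0.5 * (head + modifier)\n",
    "        return self.state\n",
    "\n",
    "toy_combiner = RunningCombiner()\n",
    "first = toy_combiner.combine(1.0, 2.0)   # 1.5\n",
    "second = toy_combiner.combine(1.0, 2.0)  # 2.25: same inputs, new state\n",
    "```"
   ]
  },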
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "reload(neural_net);\n",
    "TEST_EMBEDDING_DIM = 5\n",
    "combiner = neural_net.LSTMCombinerNetwork(TEST_EMBEDDING_DIM, 1, 0.0)\n",
    "head_feat = ag.Variable(torch.randn(1, TEST_EMBEDDING_DIM))\n",
    "modifier_feat = ag.Variable(torch.randn(1, TEST_EMBEDDING_DIM))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Variable containing:\n",
       "(0 ,.,.) = \n",
       "\n",
       "Columns 0 to 8 \n",
       "  -0.7641 -0.2784  0.6002  0.0032 -1.3923 -0.5975  0.2761  1.4585  1.2168\n",
       "\n",
       "Columns 9 to 9 \n",
       "  -0.0510\n",
       "[torch.FloatTensor of size 1x1x10]"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "utils.concat_and_flatten([head_feat,modifier_feat]).view(1,1,-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Variable containing:\n",
      " 0.2730  0.0426 -0.1535  0.0089  0.0604\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n",
      "Variable containing:\n",
      " 0.3838  0.0589 -0.1849  0.0171  0.0936\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n",
      "Variable containing:\n",
      " 0.4329  0.0679 -0.1930  0.0236  0.1142\n",
      "[torch.FloatTensor of size 1x5]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# note that the output keeps changing, because of the recurrent update\n",
    "for _ in xrange(3):\n",
    "    print combiner(head_feat,modifier_feat)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Retrain with the new components\n",
    "\n",
    "The code below retrains your parser using all the new components that you just wrote."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "feat_extractor = feat_extractors.SimpleFeatureExtractor()\n",
    "\n",
    "# BiLSTM over word embeddings\n",
    "word_embedding_lookup = neural_net.BiLSTMWordEmbeddingLookup(word_to_ix,\n",
    "                                                             WORD_EMBEDDING_DIM,\n",
    "                                                             STACK_EMBEDDING_DIM,\n",
    "                                                             num_layers=LSTM_NUM_LAYERS,\n",
    "                                                             dropout=DROPOUT)\n",
    "# pretrained inputs\n",
    "utils.initialize_with_pretrained(pretrained_embeds, word_embedding_lookup)\n",
    "\n",
    "action_chooser = neural_net.ActionChooserNetwork(STACK_EMBEDDING_DIM * NUM_FEATURES)\n",
    "\n",
    "# LSTM reduction operations\n",
    "combiner = neural_net.LSTMCombinerNetwork(STACK_EMBEDDING_DIM,\n",
    "                                          num_layers=LSTM_NUM_LAYERS,\n",
    "                                          dropout=DROPOUT)\n",
    "\n",
    "parser = parsing.TransitionParser(feat_extractor, word_embedding_lookup,\n",
    "                                  action_chooser, combiner)\n",
    "\n",
    "optimizer = optim.SGD(parser.parameters(), lr=ETA_0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.668291431503  Loss: 28.3679159951\n",
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.806824012314  Loss: 18.1205399126\n",
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.855310415598  Loss: 13.5849320753\n",
      "Number of instances: 100    Number of network actions: 3898\n",
      "Acc: 0.880708055413  Loss: 10.9105038324\n",
      "1 loop, best of 3: 3.9 s per loop\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "# The LSTMs will make this take longer\n",
    "parsing.train(dataset.training_data[:100], parser, optimizer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1\n",
      "Number of instances: 997    Number of network actions: 39025\n",
      "Acc: 0.896809737348  Loss: 10.2798197525\n",
      "Dev Evaluation\n",
      "Number of instances: 399    Number of network actions: 15719\n",
      "Acc: 0.898403206311  Loss: 9.99048229483\n",
      "F-Score: 0.698743236193\n",
      "Attachment Score: 0.691897257724\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for epoch in xrange(1):\n",
    "    print \"Epoch {}\".format(epoch+1)\n",
    "    \n",
    "    parser.train() # turn on dropout layers if they are there\n",
    "    parsing.train(dataset.training_data[:1000], parser, optimizer, verbose=True)\n",
    "    \n",
    "    print \"Dev Evaluation\"\n",
    "    parser.eval() # turn them off for evaluation\n",
    "    parsing.evaluate(dataset.dev_data, parser, verbose=True)\n",
    "    print \"F-Score: {}\".format(evaluation.compute_metric(parser, dataset.dev_data, evaluation.fscore))\n",
    "    print \"Attachment Score: {}\".format(evaluation.compute_attachment(parser, dataset.dev_data))\n",
    "    print \"\\n\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deliverable 4.4: Test Predictions (0.5 points 4650, 0.25 points 7650)\n",
    "\n",
    "Run the code below to generate test predictions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "dev_sentences = [ sentence for sentence, _ in dataset.dev_data ]\n",
    "evaluation.output_preds(consts.D4_4_DEV_FILENAME, parser, dev_sentences)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "evaluation.output_preds(consts.D4_4_TEST_FILENAME, parser, dataset.test_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 5. Bakeoff!\n",
    "\n",
    "**Bakeoff Link**: Please click [here](https://kaggle.com/join/deepdependencyparsinggtnlp) to join the contest.\n",
    "\n",
    "Try to implement new features and tune your network's architecture and hyperparameters to get the best network.\n",
    "Section 3 of [this paper](https://pdfs.semanticscholar.org/55b8/1991fbb025038d98e8c71acf7dc2b78ee5e9.pdf) may help with hyperparameter tuning if you are new to neural networks.\n",
    "To get very competitive, it may be necessary to train for a long time (leaving it running overnight should be fine).  Here are some suggestions.\n",
    "* Try customizing any of the 3 components (word embeddings, action choosing, combining) in clever ways.  You can create new classes that expose the same public interface and use them here (just leave your required ones untouched).\n",
    "* Try new features.  Write new classes that expose the same public interface as SimpleFeatureExtractor.  Try looking further into stack history, or more input buffer lookahead, or features based on the action sequence.  The possibilities are endless.\n",
    "* Tune your hyperparameters.  Learning rate is the most important one.\n",
    "* Try different optimizers.  `torch.optim` has a ton of different training algorithms.  SGD was used in this pset because it is fast, but it is the most vanilla of them; trying new ones will almost certainly boost performance.\n",
    "* Try adding regularization to your network if you see evidence that it is overfitting.\n",
    "* Check out [this book](http://www.deeplearningbook.org/), which is undoubtedly the best deep learning book (and it is free online!); it has great information on regularization, optimization, and different network architectures.\n",
    "\n",
    "From just picking good hyperparameters, I can get near state-of-the-art results with the components you have built.\n",
    "\n",
    "**Extra credit**: \n",
    "- +0.3 if you beat the best TA/prof system.\n",
    "- +0.2 if you are #1 in CS4650\n",
    "- +0.2 if you are #1 in CS7650"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 6. 7650 only: comparing hyperparameters (1 point)\n",
    "\n",
    "Do a systematic comparison of one hyperparameter: could be input embedding size, stack embedding size, learning rate, dropout, or something you added for the bakeoff. Try at least five different values, using either your system from 4.4 or from the bakeoff. Explain what you tried and what you found in text-answers.md."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using CUDA\n",
    "You can use CUDA to train your network, and you should expect a significant speedup if you have a decent GPU and the CUDA toolkit installed.\n",
    "If you want to use CUDA in this assignment, change the `HAVE_CUDA` variable to `True` in `constants.py`, and uncomment the `.to_cuda()` and `.to_cpu()` lines below.\n",
    "\n",
    "We are not officially supporting CUDA, though.  If you have problems installing or running CUDA, please just use the CPU; we cannot help you debug it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Set your hyperparameters here\n",
    "# e.g. learning rate, regularization, lr annealing, dimensionality of embeddings, number of epochs, early stopping, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Make whichever components you want to use.  You can create your own if you have ideas for improvement.\n",
    "# Also, choose an optimizer.\n",
    "# Name your TransitionParser `parser` so the training and prediction cells below pick it up\n",
    "# parser = parsing.TransitionParser(...)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# train for bakeoff\n",
    "for epoch in xrange(NUM_EPOCHS):\n",
    "    print \"Epoch {}\".format(epoch+1)\n",
    "    \n",
    "    #parser.to_cuda() # uncomment to train on your GPU\n",
    "    parser.train() # turn on dropout layers if they are there\n",
    "    \n",
    "    # train on full training data\n",
    "    parsing.train(dataset.training_data, parser, optimizer, verbose=True)\n",
    "    \n",
    "    \n",
    "    print \"Dev Evaluation\"\n",
    "    #parser.to_cpu() # TODO: fix evaluation so you don't have to ship everything back to the CPU\n",
    "    parser.eval() # turn them off for evaluation\n",
    "    parsing.evaluate(dataset.dev_data, parser, verbose=True)\n",
    "    print \"F-Score: {}\".format(evaluation.compute_metric(parser, dataset.dev_data, evaluation.fscore))\n",
    "    print \"Attachment Score: {}\".format(evaluation.compute_attachment(parser, dataset.dev_data))\n",
    "    print \"\\n\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "evaluation.output_preds(\"bakeoff-test.preds\", parser, dataset.test_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "evaluation.kaggle_output(\"KAGGLE-bakeoff-preds.csv\", parser, dataset.test_data)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
