{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Date Converter\n",
    "\n",
    "We will be translating from one date format to another. In order to do this we need to connect two set of LSTMs (RNNs). The diagram looks as follows: Each set respectively sharing weights (i.e. each of the 4 green cells have the same weights and similarly with the blue cells). The first is a many to one LSTM, which summarises the question at the last hidden layer (and cell memory).\n",
    "\n",
    "The second set (blue) is a Many to Many LSTM which has different weights to the first set of LSTMs. The input is simply the answer sentence while the output is the same sentence shifted by one. Ofcourse during testing time there are no inputs for the `answer` and is only used during training.\n",
    "![seq2seq_diagram](https://i.stack.imgur.com/YjlBt.png) \n",
    "\n",
    "**20th January 2017 => 20th January 2009**\n",
    "![troll](../images/troll_face.png)\n",
    "\n",
    "## References:\n",
    "1. Plotting Tensorflow graph: https://stackoverflow.com/questions/38189119/simple-way-to-visualize-a-tensorflow-graph-in-jupyter/38192374#38192374\n",
    "2. The generation process was taken from: https://github.com/datalogue/keras-attention/blob/master/data/generate.py\n",
    "3. 2014 paper with 2000+ citations: https://arxiv.org/pdf/1409.3215.pdf"
   ]
  },
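  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (plain Python, no TensorFlow) of the teacher forcing described above: the decoder input is the answer prefixed with a `<GO>` token, and the training target is the same answer shifted one step ahead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only: teacher forcing shifts the answer by one step\n",
    "answer = list('2009-01-20')\n",
    "decoder_input = ['<GO>'] + answer[:-1]  # fed to the decoder during training\n",
    "decoder_target = answer                 # what the decoder should predict\n",
    "print(list(zip(decoder_input, decoder_target)))"
   ]
  },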
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install faker babel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import YouTubeVideo\n",
    "YouTubeVideo(\"_Sm0q_FckM8\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/jannisborn/anaconda3/envs/tf/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "import random\n",
    "import json\n",
    "import os\n",
    "import time\n",
    "\n",
    "from faker import Faker\n",
    "import babel\n",
    "from babel.dates import format_date\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "import tensorflow.contrib.legacy_seq2seq as seq2seq\n",
    "from utilities import show_graph\n",
    "\n",
    "from sklearn.model_selection import train_test_split"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "fake = Faker()\n",
    "fake.seed(42)\n",
    "random.seed(42)\n",
    "\n",
    "FORMATS = ['short',\n",
    "           'medium',\n",
    "           'long',\n",
    "           'full',\n",
    "           'd MMM YYY',\n",
    "           'd MMMM YYY',\n",
    "           'dd MMM YYY',\n",
    "           'd MMM, YYY',\n",
    "           'd MMMM, YYY',\n",
    "           'dd, MMM YYY',\n",
    "           'd MM YY',\n",
    "           'MMMM d YYY',\n",
    "           'MMMM d, YYY',\n",
    "           'dd.MM.YY',\n",
    "           ]\n",
    "\n",
    "# change this if you want it to work with only a single language\n",
    "LOCALES = babel.localedata.locale_identifiers()\n",
    "LOCALES = [lang for lang in LOCALES if 'en' in str(lang)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_date():\n",
    "    \"\"\"\n",
    "        Creates some fake dates \n",
    "        :returns: tuple containing \n",
    "                  1. human formatted string\n",
    "                  2. machine formatted string\n",
    "                  3. date object.\n",
    "    \"\"\"\n",
    "    dt = fake.date_object()\n",
    "\n",
    "    # wrapping this in a try catch because\n",
    "    # the locale 'vo' and format 'full' will fail\n",
    "    try:\n",
    "        human = format_date(dt,\n",
    "                            format=random.choice(FORMATS),\n",
    "                            locale=random.choice(LOCALES))\n",
    "\n",
    "        case_change = random.randint(0,3) # 1/2 chance of case change\n",
    "        if case_change == 1:\n",
    "            human = human.upper()\n",
    "        elif case_change == 2:\n",
    "            human = human.lower()\n",
    "\n",
    "        machine = dt.isoformat()\n",
    "    except AttributeError as e:\n",
    "        return None, None, None\n",
    "\n",
    "    return human, machine #, dt\n",
    "\n",
    "data = [create_date() for _ in range(50000)]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See below what we are trying to do in this lesson. We are taking dates of various formats and converting them into a standard date format:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('7 07 13', '2013-07-07'),\n",
       " ('30 JULY 1977', '1977-07-30'),\n",
       " ('Tuesday, 14 September 1971', '1971-09-14'),\n",
       " ('18 09 88', '1988-09-18'),\n",
       " ('31, Aug 1986', '1986-08-31')]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data[:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = [x for x, y in data]\n",
    "y = [y for x, y in data]\n",
    "\n",
    "u_characters = set(' '.join(x))\n",
    "char2numX = dict(zip(u_characters, range(len(u_characters))))\n",
    "\n",
    "u_characters = set(' '.join(y))\n",
    "char2numY = dict(zip(u_characters, range(len(u_characters))))"
   ]
  },
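  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the character-to-index mappings built above (shown here on a toy vocabulary): encoding a string and decoding it back should be lossless."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy round trip through a character-level vocabulary (illustration only)\n",
    "chars = sorted(set('1977-07-30'))\n",
    "char2num = {c: i for i, c in enumerate(chars)}\n",
    "num2char = {i: c for c, i in char2num.items()}\n",
    "encoded = [char2num[c] for c in '1977-07-30']\n",
    "assert ''.join(num2char[i] for i in encoded) == '1977-07-30'"
   ]
  },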
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pad all sequences that are shorter than the max length of the sequence"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD><PAD>31, Aug 1986\n"
     ]
    }
   ],
   "source": [
    "char2numX['<PAD>'] = len(char2numX)\n",
    "num2charX = dict(zip(char2numX.values(), char2numX.keys()))\n",
    "max_len = max([len(date) for date in x])\n",
    "\n",
    "x = [[char2numX['<PAD>']]*(max_len - len(date)) +[char2numX[x_] for x_ in date] for date in x]\n",
    "print(''.join([num2charX[x_] for x_ in x[4]]))\n",
    "x = np.array(x)"
   ]
  },
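  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The left-padding above can be illustrated on a toy example (plain Python, hypothetical values): shorter sequences get `<PAD>` indices prepended until every row has the same length."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy left-padding demo mirroring the cell above (illustration only)\n",
    "PAD = 0  # stand-in for char2numX['<PAD>']\n",
    "seqs = [[5, 6], [7, 8, 9]]\n",
    "toy_max_len = max(len(s) for s in seqs)\n",
    "padded = [[PAD] * (toy_max_len - len(s)) + s for s in seqs]\n",
    "print(padded)  # [[0, 5, 6], [7, 8, 9]]"
   ]
  },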
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<GO>1986-08-31\n"
     ]
    }
   ],
   "source": [
    "char2numY['<GO>'] = len(char2numY)\n",
    "num2charY = dict(zip(char2numY.values(), char2numY.keys()))\n",
    "\n",
    "y = [[char2numY['<GO>']] + [char2numY[y_] for y_ in date] for date in y]\n",
    "print(''.join([num2charY[y_] for y_ in y[4]]))\n",
    "y = np.array(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "x_seq_length = len(x[0])\n",
    "y_seq_length = len(y[0])- 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def batch_data(x, y, batch_size):\n",
    "    shuffle = np.random.permutation(len(x))\n",
    "    start = 0\n",
    "#     from IPython.core.debugger import Tracer; Tracer()()\n",
    "    x = x[shuffle]\n",
    "    y = y[shuffle]\n",
    "    while start + batch_size <= len(x):\n",
    "        yield x[start:start+batch_size], y[start:start+batch_size]\n",
    "        start += batch_size"
   ]
  },
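  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small usage sketch of `batch_data` on toy arrays: each yielded batch has exactly `batch_size` rows, and a trailing remainder that cannot fill a full batch is dropped."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 10 samples with batch_size 4 -> 2 full batches; the last 2 samples are dropped\n",
    "toy_x = np.arange(10).reshape(10, 1)\n",
    "toy_y = np.arange(10).reshape(10, 1)\n",
    "toy_batches = list(batch_data(toy_x, toy_y, batch_size=4))\n",
    "print(len(toy_batches), toy_batches[0][0].shape)  # 2 (4, 1)"
   ]
  },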
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "epochs = 2\n",
    "batch_size = 128\n",
    "nodes = 32\n",
    "embed_size = 10\n",
    "bidirectional = True\n",
    "\n",
    "tf.reset_default_graph()\n",
    "sess = tf.InteractiveSession()\n",
    "\n",
    "# Tensor where we will feed the data into graph\n",
    "inputs = tf.placeholder(tf.int32, (None, x_seq_length), 'inputs')\n",
    "outputs = tf.placeholder(tf.int32, (None, None), 'output')\n",
    "targets = tf.placeholder(tf.int32, (None, None), 'targets')\n",
    "\n",
    "# Embedding layers\n",
    "input_embedding = tf.Variable(tf.random_uniform((len(char2numX), embed_size), -1.0, 1.0), name='enc_embedding')\n",
    "output_embedding = tf.Variable(tf.random_uniform((len(char2numY), embed_size), -1.0, 1.0), name='dec_embedding')\n",
    "date_input_embed = tf.nn.embedding_lookup(input_embedding, inputs)\n",
    "date_output_embed = tf.nn.embedding_lookup(output_embedding, outputs)\n",
    "\n",
    "with tf.variable_scope(\"encoding\") as encoding_scope:\n",
    "\n",
    "    if not bidirectional:\n",
    "        \n",
    "        # Regular approach with LSTM units\n",
    "        lstm_enc = tf.contrib.rnn.LSTMCell(nodes)\n",
    "        _, last_state = tf.nn.dynamic_rnn(lstm_enc, inputs=date_input_embed, dtype=tf.float32)\n",
    "\n",
    "    else:\n",
    "        \n",
    "        # Using a bidirectional LSTM architecture instead\n",
    "        enc_fw_cell = tf.contrib.rnn.LSTMCell(nodes)\n",
    "        enc_bw_cell = tf.contrib.rnn.LSTMCell(nodes)\n",
    "\n",
    "        ((enc_fw_out, enc_bw_out) , (enc_fw_final, enc_bw_final)) = tf.nn.bidirectional_dynamic_rnn(cell_fw=enc_fw_cell,\n",
    "                                                        cell_bw=enc_bw_cell, inputs=date_input_embed, dtype=tf.float32)\n",
    "        enc_fin_c = tf.concat((enc_fw_final.c , enc_bw_final.c),1)\n",
    "        enc_fin_h = tf.concat((enc_fw_final.h , enc_bw_final.h),1)\n",
    "        last_state = tf.contrib.rnn.LSTMStateTuple(c=enc_fin_c , h=enc_fin_h)\n",
    "    \n",
    "    \n",
    "with tf.variable_scope(\"decoding\") as decoding_scope:\n",
    "    \n",
    "    if not bidirectional:      \n",
    "        lstm_dec = tf.contrib.rnn.LSTMCell(nodes)    \n",
    "    else:\n",
    "        lstm_dec = tf.contrib.rnn.LSTMCell(2*nodes)\n",
    "    \n",
    "    dec_outputs, _ = tf.nn.dynamic_rnn(lstm_dec, inputs=date_output_embed, initial_state=last_state)\n",
    "\n",
    "        \n",
    "\n",
    "logits = tf.layers.dense(dec_outputs, units=len(char2numY), use_bias=True) \n",
    "    \n",
    "    \n",
    "#connect outputs to \n",
    "with tf.name_scope(\"optimization\"):\n",
    "    # Loss function\n",
    "    loss = tf.contrib.seq2seq.sequence_loss(logits, targets, tf.ones([batch_size, y_seq_length]))\n",
    "    # Optimizer\n",
    "    optimizer = tf.train.RMSPropOptimizer(1e-3).minimize(loss)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[None, None, 64]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dec_outputs.get_shape().as_list()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[None, 64]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "last_state[0].get_shape().as_list()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[None, 29]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "inputs.get_shape().as_list()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[None, 29, 10]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "date_input_embed.get_shape().as_list()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the graph above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "        <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"\n",
       "        <script>\n",
       "          function load() {\n",
       "            document.getElementById(&quot;graph0.553986137258213&quot;).pbtxt = 'node {\\n  name: &quot;inputs&quot;\\n  op: &quot;Placeholder&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 29\\n        }\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;output&quot;\\n  op: &quot;Placeholder&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: -1\\n        }\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;targets&quot;\\n  op: &quot;Placeholder&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: -1\\n        }\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform/shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;<\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform/min&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: -1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  
name: &quot;random_uniform/max&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform/RandomUniform&quot;\\n  op: &quot;RandomUniform&quot;\\n  input: &quot;random_uniform/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;seed&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;seed2&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform/sub&quot;\\n  op: &quot;Sub&quot;\\n  input: &quot;random_uniform/max&quot;\\n  input: &quot;random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;random_uniform/RandomUniform&quot;\\n  input: &quot;random_uniform/sub&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;random_uniform/mul&quot;\\n  input: &quot;random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 60\\n        }\\n        dim {\\n          size: 10\\n    
    }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;enc_embedding&quot;\\n  input: &quot;random_uniform&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;enc_embedding&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1/shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\r\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1/min&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: -1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1/max&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n  
  value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1/RandomUniform&quot;\\n  op: &quot;RandomUniform&quot;\\n  input: &quot;random_uniform_1/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;seed&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;seed2&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1/sub&quot;\\n  op: &quot;Sub&quot;\\n  input: &quot;random_uniform_1/max&quot;\\n  input: &quot;random_uniform_1/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;random_uniform_1/RandomUniform&quot;\\n  input: &quot;random_uniform_1/sub&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;random_uniform_1&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;random_uniform_1/mul&quot;\\n  input: &quot;random_uniform_1/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 13\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    
value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dec_embedding&quot;\\n  input: &quot;random_uniform_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dec_embedding&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;embedding_lookup&quot;\\n  op: &quot;Gather&quot;\\n  input: &quot;enc_embedding/read&quot;\\n  input: &quot;inputs&quot;\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tparams&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_indices&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;embedding_lookup_1&quot;\\n  op: &quot;Gather&quot;\\n  input: &quot;dec_embedding/read&quot;\\n  input: &quot;output&quot;\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tparams&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    
}\\n  }\\n  attr {\\n    key: &quot;validate_indices&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Rank&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 3\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/range/start&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Rank&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/concat/values_0&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        
}\\n        tensor_content: &quot;\\\\001\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/concat/values_0&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/range&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;embedding_lookup&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/concat&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  
}\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice/stack&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice/stack_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: 
&quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/Const&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_1/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_1&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_1/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_2/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_2&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_2/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/Const_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      
}\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat_1/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat_1&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/Const_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat_1/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/concat_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_3/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_3&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/ExpandDims_3/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/Const_3&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            
size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Shape_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1/stack&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1/stack_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  
}\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Shape_2&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Shape_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2/stack&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2/stack_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/ExpandDims/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/ExpandDims&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/ExpandDims/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      }\\n    }\\n  
}\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/concat_1/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/concat_1&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/ExpandDims&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Const&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/concat_1/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/concat_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/time&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n   
     dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n  op: &quot;TensorArrayV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1&quot;\\n  attr {\\n    key: &quot;clear_after_read&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;dynamic_size&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;element_shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 32\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;identical_element_shapes&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;tensor_array_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/dynamic_rnn/output_0&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray_1&quot;\\n  op: &quot;TensorArrayV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1&quot;\\n  attr {\\n    key: &quot;clear_after_read&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;dynamic_size&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;element_shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;identical_element_shapes&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;tensor_array_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/dynamic_rnn/input_0&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/Shape&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice/stack&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice/stack_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/range/start&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/strided_slice&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  op: &quot;TensorArrayScatterV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/range&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray_1:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/transpose&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Maximum/x&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Maximum&quot;\\n  op: &quot;Maximum&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Maximum/x&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Minimum&quot;\\n  op: &quot;Minimum&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Maximum&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/fw/while/iteration_counter&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/iteration_counter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/time&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_2&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    
key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_3&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_4&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_1&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_1&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_2&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_2&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_3&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_3&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_4&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Enter_4&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_4&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Less&quot;\\n  op: &quot;Less&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Less/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Less/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/strided_slice_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Less_1&quot;\\n  op: &quot;Less&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Less_1/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Less_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/Minimum&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/LogicalAnd&quot;\\n  op: &quot;LogicalAnd&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Less&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Less_1&quot;\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/LoopCond&quot;\\n  op: &quot;LoopCond&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/LogicalAnd&quot;\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/while/Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Merge&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_1&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Merge_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_2&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Merge_2&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_3&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Merge_3&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_4&quot;\\n  
op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Merge_4&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Merge_4&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_1:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_2&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_2:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_3&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_3:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_4&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_4:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/add/y&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  attr {\\n    key: 
&quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/add/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3&quot;\\n  op: &quot;TensorArrayReadV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter_1&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr 
{\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;*\\\\000\\\\000\\\\000\\\\200\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/min&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: -0.18786728382110596\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/max&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n   
   type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.18786728382110596\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/RandomUniform&quot;\\n  op: &quot;RandomUniform&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;seed&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;seed2&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/sub&quot;\\n  op: &quot;Sub&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/max&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/RandomUniform&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/sub&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n 
     type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 42\\n        }\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/Initializer/random_uniform&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list 
{\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 128\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Initializer/zeros/shape_as_tensor&quot;\\n  
input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/read&quot;\\n  op: &quot;Identity&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_4&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/read&quot;\\n  attr {\\n    key: 
&quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd&quot;\\n  op: &quot;BiasAdd&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;data_format&quot;\\n    value {\\n      s: &quot;NHWC&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Const&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 4\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split/split_dim&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split&quot;\\n  op: &quot;Split&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split/split_dim&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;num_split&quot;\\n    value {\\n      i: 4\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add/y&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split:2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid&quot;\\n  op: &quot;Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n 
 }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1&quot;\\n  op: &quot;Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh&quot;\\n  op: &quot;Tanh&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_2&quot;\\n  op: &quot;Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/split:3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_1&quot;\\n  op: &quot;Tanh&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayWrite/TensorArrayWriteV3&quot;\\n  op: &quot;TensorArrayWriteV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayWrite/TensorArrayWriteV3/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayWrite/TensorArrayWriteV3/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr 
{\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/add_1/y&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/fw/fw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/add_1&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/add_1/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_1&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_2&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayWrite/TensorArrayWriteV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_3&quot;\\n  op: &quot;NextIteration&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/NextIteration_4&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_1&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_2&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_4&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Switch_4&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/TensorArraySizeV3&quot;\\n  op: &quot;TensorArraySizeV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/while/Exit_2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/range/start&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/TensorArraySizeV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n      }\\n    }\\n  }\\n}\\nnode 
{\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/TensorArrayGatherV3&quot;\\n  op: &quot;TensorArrayGatherV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayStack/range&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;element_shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 32\\n        }\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/Rank_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 3\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/fw/range_1/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode 
&quot;encoding/bidirectional_rnn/bw/bw/while/Less_1/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Less_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/Minimum&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/LogicalAnd&quot;\\n  op: &quot;LogicalAnd&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Less&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Less_1&quot;\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/LoopCond&quot;\\n  op: &quot;LoopCond&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/LogicalAnd&quot;\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Merge&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Merge&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_1&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Merge_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      
type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Merge_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_2&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Merge_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Merge_2&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_3&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Merge_3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Merge_3&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_4&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Merge_4&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/LoopCond&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Merge_4&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  
}\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_1:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_2&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_2:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_3&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_3:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_4&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_4:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/add/y&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/add/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3&quot;\\n  op: 
&quot;TensorArrayReadV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3/Enter_1&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArray_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: 
&quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;*\\\\000\\\\000\\\\000\\\\200\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/min&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: -0.18786728382110596\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/max&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.18786728382110596\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/RandomUniform&quot;\\n  op: &quot;RandomUniform&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        
s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;seed&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;seed2&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/sub&quot;\\n  op: &quot;Sub&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/max&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/RandomUniform&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/sub&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 42\\n        }\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/Initializer/random_uniform&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 128\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      
}\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_4&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd&quot;\\n  op: &quot;BiasAdd&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul&quot;\\n 
 input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;data_format&quot;\\n    value {\\n      s: &quot;NHWC&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Const&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 4\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split/split_dim&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split&quot;\\n  op: &quot;Split&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split/split_dim&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;num_split&quot;\\n    value {\\n      i: 4\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add/y&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split:2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid&quot;\\n  op: &quot;Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1&quot;\\n  op: &quot;Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split&quot;\\n  attr {\\n    
key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh&quot;\\n  op: &quot;Tanh&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_2&quot;\\n  op: &quot;Sigmoid&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/split:3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_1&quot;\\n  op: &quot;Tanh&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  
}\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayWrite/TensorArrayWriteV3&quot;\\n  op: &quot;TensorArrayWriteV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayWrite/TensorArrayWriteV3/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayWrite/TensorArrayWriteV3/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/add_1/y&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^encoding/bidirectional_rnn/bw/bw/while/Identity&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n   
 }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/add_1&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/add_1/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/NextIteration_1&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/NextIteration_2&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayWrite/TensorArrayWriteV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/NextIteration_3&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/NextIteration_4&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    
value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_1&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_2&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_4&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Switch_4&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/TensorArraySizeV3&quot;\\n  op: &quot;TensorArraySizeV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor 
{\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/range/start&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/TensorArraySizeV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/TensorArrayGatherV3&quot;\\n  op: &quot;TensorArrayGatherV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/range&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;element_shape&quot;\\n    value {\\n      
shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 32\\n        }\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 32\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/Rank_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 3\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/range_1/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/range_1/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/range_1&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/range_1/start&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/Rank_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/range_1/delta&quot;\\n  attr 
{\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/concat_2/values_0&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\001\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/concat_2/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/concat_2&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/concat_2/values_0&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/range_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/concat_2/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/bw/transpose_1&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/TensorArrayStack/TensorArrayGatherV3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/concat_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  
name: &quot;encoding/ReverseV2/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/ReverseV2&quot;\\n  op: &quot;ReverseV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/transpose_1&quot;\\n  input: &quot;encoding/ReverseV2/axis&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_3&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_3&quot;\\n  input: &quot;encoding/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/concat_1/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  
}\\n}\\nnode {\\n  name: &quot;encoding/concat_1&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Exit_4&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Exit_4&quot;\\n  input: &quot;encoding/concat_1/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Rank&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 3\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;decoding/rnn/range/start&quot;\\n  input: &quot;decoding/rnn/Rank&quot;\\n  input: &quot;decoding/rnn/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/concat/values_0&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    
key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\001\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;decoding/rnn/concat/values_0&quot;\\n  input: &quot;decoding/rnn/range&quot;\\n  input: &quot;decoding/rnn/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/transpose&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;embedding_lookup_1&quot;\\n  input: &quot;decoding/rnn/concat&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: 
&quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;decoding/rnn/Shape&quot;\\n  input: &quot;decoding/rnn/strided_slice/stack&quot;\\n  input: &quot;decoding/rnn/strided_slice/stack_1&quot;\\n  input: &quot;decoding/rnn/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: 
&quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_1/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_1/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_1/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_1&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;decoding/rnn/Shape_1&quot;\\n  input: &quot;decoding/rnn/strided_slice_1/stack&quot;\\n  input: &quot;decoding/rnn/strided_slice_1/stack_1&quot;\\n  input: &quot;decoding/rnn/strided_slice_1/stack_2&quot;\\n  attr {\\n    key: 
&quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Shape_2&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_2/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_2/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_2/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      
tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/strided_slice_2&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;decoding/rnn/Shape_2&quot;\\n  input: &quot;decoding/rnn/strided_slice_2/stack&quot;\\n  input: &quot;decoding/rnn/strided_slice_2/stack_1&quot;\\n  input: &quot;decoding/rnn/strided_slice_2/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/ExpandDims/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/ExpandDims&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;decoding/rnn/strided_slice_2&quot;\\n  input: &quot;decoding/rnn/ExpandDims/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    
}\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 64\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/concat_1/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/concat_1&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;decoding/rnn/ExpandDims&quot;\\n  input: &quot;decoding/rnn/Const&quot;\\n  input: &quot;decoding/rnn/concat_1/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;decoding/rnn/concat_1&quot;\\n  input: &quot;decoding/rnn/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/time&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: 
&quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArray&quot;\\n  op: &quot;TensorArrayV3&quot;\\n  input: &quot;decoding/rnn/strided_slice_1&quot;\\n  attr {\\n    key: &quot;clear_after_read&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;dynamic_size&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;element_shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 64\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;identical_element_shapes&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;tensor_array_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/dynamic_rnn/output_0&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArray_1&quot;\\n  op: &quot;TensorArrayV3&quot;\\n  input: &quot;decoding/rnn/strided_slice_1&quot;\\n  attr {\\n    key: &quot;clear_after_read&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;dynamic_size&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;element_shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: -1\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;identical_element_shapes&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;tensor_array_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/dynamic_rnn/input_0&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: 
&quot;decoding/rnn/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/Shape&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/strided_slice/stack&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/strided_slice/stack_1&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      
type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/range/start&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/strided_slice&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  op: &quot;TensorArrayScatterV3&quot;\\n  input: &quot;decoding/rnn/TensorArray_1&quot;\\n  input: &quot;decoding/rnn/TensorArrayUnstack/range&quot;\\n  input: 
&quot;decoding/rnn/transpose&quot;\\n  input: &quot;decoding/rnn/TensorArray_1:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/transpose&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Maximum/x&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Maximum&quot;\\n  op: &quot;Maximum&quot;\\n  input: &quot;decoding/rnn/Maximum/x&quot;\\n  input: &quot;decoding/rnn/strided_slice_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/Minimum&quot;\\n  op: &quot;Minimum&quot;\\n  input: &quot;decoding/rnn/strided_slice_1&quot;\\n  input: &quot;decoding/rnn/Maximum&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/while/iteration_counter&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/while/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;decoding/rnn/while/iteration_counter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value 
&quot;dense/kernel/Initializer/random_uniform/max&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.2791452705860138\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/Initializer/random_uniform/RandomUniform&quot;\\n  op: &quot;RandomUniform&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;seed&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;seed2&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/Initializer/random_uniform/sub&quot;\\n  op: &quot;Sub&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/max&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/Initializer/random_uniform/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/RandomUniform&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/sub&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/Initializer/random_uniform&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/mul&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform/min&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 64\\n        }\\n        dim {\\n          size: 13\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dense/kernel&quot;\\n  input: &quot;dense/kernel/Initializer/random_uniform&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dense/kernel&quot;\\n  attr {\\n    key: 
&quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 13\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dense/bias/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;dense/bias/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n    
  }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 13\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dense/bias&quot;\\n  input: &quot;dense/bias/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dense/bias&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/transpose_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Rank&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 3\\n     
 }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/axes&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/GreaterEqual/y&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/GreaterEqual&quot;\\n  op: &quot;GreaterEqual&quot;\\n  input: &quot;dense/Tensordot/axes&quot;\\n  input: &quot;dense/Tensordot/GreaterEqual/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Cast&quot;\\n  op: &quot;Cast&quot;\\n  input: &quot;dense/Tensordot/GreaterEqual&quot;\\n  attr {\\n    key: &quot;DstT&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;SrcT&quot;\\n    value {\\n      type: DT_BOOL\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;dense/Tensordot/Cast&quot;\\n  input: &quot;dense/Tensordot/axes&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Less/y&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode 
{\\n  name: &quot;dense/Tensordot/Less&quot;\\n  op: &quot;Less&quot;\\n  input: &quot;dense/Tensordot/axes&quot;\\n  input: &quot;dense/Tensordot/Less/y&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Cast_1&quot;\\n  op: &quot;Cast&quot;\\n  input: &quot;dense/Tensordot/Less&quot;\\n  attr {\\n    key: &quot;DstT&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;SrcT&quot;\\n    value {\\n      type: DT_BOOL\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;dense/Tensordot/axes&quot;\\n  input: &quot;dense/Tensordot/Rank&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;dense/Tensordot/Cast_1&quot;\\n  input: &quot;dense/Tensordot/add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/add_1&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;dense/Tensordot/mul&quot;\\n  input: &quot;dense/Tensordot/mul_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/range/start&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/range/delta&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n     
   int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/range&quot;\\n  op: &quot;Range&quot;\\n  input: &quot;dense/Tensordot/range/start&quot;\\n  input: &quot;dense/Tensordot/Rank&quot;\\n  input: &quot;dense/Tensordot/range/delta&quot;\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/ListDiff&quot;\\n  op: &quot;ListDiff&quot;\\n  input: &quot;dense/Tensordot/range&quot;\\n  input: &quot;dense/Tensordot/add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;out_idx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Gather&quot;\\n  op: &quot;Gather&quot;\\n  input: &quot;dense/Tensordot/Shape&quot;\\n  input: &quot;dense/Tensordot/ListDiff&quot;\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tparams&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_indices&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Gather_1&quot;\\n  op: &quot;Gather&quot;\\n  input: &quot;dense/Tensordot/Shape&quot;\\n  input: &quot;dense/Tensordot/add_1&quot;\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tparams&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_indices&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n 
       int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Prod&quot;\\n  op: &quot;Prod&quot;\\n  input: &quot;dense/Tensordot/Gather&quot;\\n  input: &quot;dense/Tensordot/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Prod_1&quot;\\n  op: &quot;Prod&quot;\\n  input: &quot;dense/Tensordot/Gather_1&quot;\\n  input: &quot;dense/Tensordot/Const_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;dense/Tensordot/Gather_1&quot;\\n  input: &quot;dense/Tensordot/Gather&quot;\\n  input: &quot;dense/Tensordot/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  
}\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/concat_1/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/concat_1&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;dense/Tensordot/ListDiff&quot;\\n  input: &quot;dense/Tensordot/add_1&quot;\\n  input: &quot;dense/Tensordot/concat_1/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/stack&quot;\\n  op: &quot;Pack&quot;\\n  input: &quot;dense/Tensordot/Prod&quot;\\n  input: &quot;dense/Tensordot/Prod_1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;axis&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/transpose&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;decoding/rnn/transpose_1&quot;\\n  input: &quot;dense/Tensordot/concat_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;dense/Tensordot/transpose&quot;\\n  input: &quot;dense/Tensordot/stack&quot;\\n  attr 
{\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/transpose_1/perm&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/transpose_1&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;dense/kernel/read&quot;\\n  input: &quot;dense/Tensordot/transpose_1/perm&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Reshape_1/shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;@\\\\000\\\\000\\\\000\\\\r\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;dense/Tensordot/transpose_1&quot;\\n  input: &quot;dense/Tensordot/Reshape_1/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/MatMul&quot;\\n  op: &quot;MatMul&quot;\\n  input: 
&quot;dense/Tensordot/Reshape&quot;\\n  input: &quot;dense/Tensordot/Reshape_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/Const_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 13\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/concat_2/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot/concat_2&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;dense/Tensordot/Gather&quot;\\n  input: &quot;dense/Tensordot/Const_2&quot;\\n  input: &quot;dense/Tensordot/concat_2/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/Tensordot&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;dense/Tensordot/MatMul&quot;\\n  input: &quot;dense/Tensordot/concat_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;dense/BiasAdd&quot;\\n  op: &quot;BiasAdd&quot;\\n  input: &quot;dense/Tensordot&quot;\\n  input: &quot;dense/bias/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;data_format&quot;\\n    value {\\n      s: &quot;NHWC&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\200\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;optimization/ones/shape_as_tensor&quot;\\n  input: &quot;optimization/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/sequence_loss/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;dense/BiasAdd&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/sequence_loss/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n  
  value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/sequence_loss/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 3\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/sequence_loss/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/sequence_loss/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;optimization/sequence_loss/Shape&quot;\\n  input: &quot;optimization/sequence_loss/strided_slice/stack&quot;\\n  input: &quot;optimization/sequence_loss/strided_slice/stack_1&quot;\\n  input: &quot;optimization/sequence_loss/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: 
&quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/truediv_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Reshape/shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Tile/multiples&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1280\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Tile&quot;\\n  op: &quot;Tile&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Reshape&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Tile/multiples&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tmultiples&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: 
&quot;optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Shape_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1280\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Shape&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Shape_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Tile&quot;\\n  input: &quot;optimization/sequence_loss/Reshape_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/mul&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: 
DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Sum&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Sum_grad/Tile&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Shape_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  
attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/optimization/sequence_loss/mul_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/optimization/sequence_loss/mul_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/optimization/sequence_loss/mul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/optimization/sequence_loss/mul_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/optimization/sequence_loss/mul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/optimization/sequence_loss/mul_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/zeros_like&quot;\\n  op: &quot;ZerosLike&quot;\\n  input: &quot;optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/PreventGradient&quot;\\n  op: &quot;PreventGradient&quot;\\n  input: &quot;optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;message&quot;\\n    value {\\n      s: &quot;Currently there is no way to take the second derivative of sparse_softmax_cross_entropy_with_logits due to the fused implementation\\\\\\'s interaction with tf.gradients()&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/ExpandDims/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/ExpandDims&quot;\\n  op: &quot;ExpandDims&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/mul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/ExpandDims/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: 
&quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/ExpandDims&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/PreventGradient&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/Reshape_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;dense/BiasAdd&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/optimization/sequence_loss/Reshape_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits_grad/mul&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Reshape_grad/Shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/BiasAdd_grad/BiasAddGrad&quot;\\n  op: &quot;BiasAddGrad&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Reshape_grad/Reshape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;data_format&quot;\\n    value {\\n      s: &quot;NHWC&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/BiasAdd_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/optimization/sequence_loss/Reshape_grad/Reshape&quot;\\n  input: 
&quot;^optimization/gradients/dense/BiasAdd_grad/BiasAddGrad&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/BiasAdd_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/optimization/sequence_loss/Reshape_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/dense/BiasAdd_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/optimization/sequence_loss/Reshape_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/BiasAdd_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/dense/BiasAdd_grad/BiasAddGrad&quot;\\n  input: &quot;^optimization/gradients/dense/BiasAdd_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/dense/BiasAdd_grad/BiasAddGrad&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;dense/Tensordot/MatMul&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/dense/BiasAdd_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot_grad/Shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n  
  }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/MatMul&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot_grad/Reshape&quot;\\n  input: &quot;dense/Tensordot/Reshape_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/MatMul_1&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;dense/Tensordot/Reshape&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot_grad/Reshape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/dense/Tensordot/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/dense/Tensordot/MatMul_grad/MatMul_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/dense/Tensordot/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/dense/Tensordot/MatMul_grad/MatMul&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/dense/Tensordot/MatMul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/MatMul_1&quot;\\n  input: &quot;^optimization/gradients/dense/Tensordot/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/dense/Tensordot/MatMul_grad/MatMul_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/Reshape_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;dense/Tensordot/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/Reshape_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/Reshape_grad/Shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/Reshape_1_grad/Shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;@\\\\000\\\\000\\\\000\\\\r\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/dense/Tensordot/Reshape_1_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/MatMul_grad/tuple/control_dependency_1&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/Reshape_1_grad/Shape&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/transpose_grad/InvertPermutation&quot;\\n  op: &quot;InvertPermutation&quot;\\n  input: &quot;dense/Tensordot/concat_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/transpose_grad/transpose&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/Reshape_grad/Reshape&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/transpose_grad/InvertPermutation&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/transpose_1_grad/InvertPermutation&quot;\\n  op: &quot;InvertPermutation&quot;\\n  input: &quot;dense/Tensordot/transpose_1/perm&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/dense/Tensordot/transpose_1_grad/transpose&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/Reshape_1_grad/Reshape&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/transpose_1_grad/InvertPermutation&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n 
   }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/transpose_1_grad/InvertPermutation&quot;\\n  op: &quot;InvertPermutation&quot;\\n  input: &quot;decoding/rnn/concat_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/transpose_1_grad/transpose&quot;\\n  op: &quot;Transpose&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/transpose_grad/transpose&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/transpose_1_grad/InvertPermutation&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tperm&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3&quot;\\n  op: &quot;TensorArrayGradV3&quot;\\n  input: &quot;decoding/rnn/TensorArray&quot;\\n  input: &quot;decoding/rnn/while/Exit_2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/TensorArray&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;source&quot;\\n    value {\\n      s: &quot;optimization/gradients&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/gradient_flow&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;decoding/rnn/while/Exit_2&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/TensorArray&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  op: &quot;TensorArrayScatterV3&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3&quot;\\n  input: &quot;decoding/rnn/TensorArrayStack/range&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/transpose_1_grad/transpose&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/gradient_flow&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/zeros_like_1&quot;\\n  op: &quot;ZerosLike&quot;\\n  input: &quot;decoding/rnn/while/Exit_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/zeros_like_2&quot;\\n  op: &quot;ZerosLike&quot;\\n  input: &quot;decoding/rnn/while/Exit_4&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Exit_2_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Exit_3_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;optimization/gradients/zeros_like_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Exit_4_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/zeros_like_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Switch_2_grad/b_switch&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Exit_2_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Switch_2_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Switch_3_grad/b_switch&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Exit_3_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Switch_3_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: 
DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Switch_4_grad/b_switch&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Exit_4_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Switch_4_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Merge_2_grad/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Switch_2_grad/b_switch&quot;\\n  input: &quot;optimization/gradients/b_count_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/Switch_2_grad/b_switch&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Merge_2_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/Merge_2_grad/Switch&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Merge_2_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Merge_2_grad/Switch&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/Merge_2_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/Switch_2_grad/b_switch&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Merge_2_grad/tuple/control_dependency_1&quot;\\n  op: 
&quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Sum&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/AddN&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/Sigmoid_2&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/Sigmoid_2&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: 
&quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/Enter&quot;\\n  input: &quot;decoding/rnn/while/lstm_cell/Sigmoid_2&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1&quot;\\n  
input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/control_dependency_1&quot;\\n  op: 
&quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Exit_3_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/concat_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Exit_4_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/concat_1_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Exit_2_grad/b_exit&quot;\\n  op: 
&quot;Enter&quot;\\n  input: &quot;optimization/gradients/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Exit_3_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/concat_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Exit_4_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/concat_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Exit_2_grad/b_exit&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/zeros_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/Sigmoid_2_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/Tanh_1_grad/TanhGrad&quot;\\n  op: &quot;TanhGrad&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_2_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/Switch_2_grad_1/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/TensorArrayWrite/TensorArrayWriteV3_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Switch_3_grad/b_switch&quot;\\n  op: 
&quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Exit_3_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Switch_3_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Switch_4_grad/b_switch&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Exit_4_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Switch_4_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Switch_3_grad/b_switch&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Exit_3_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Switch_3_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Switch_4_grad/b_switch&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Exit_4_grad/b_exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Switch_4_grad_1/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode 
{\\n  name: &quot;optimization/gradients/AddN_1&quot;\\n  op: &quot;AddN&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/Merge_3_grad/tuple/control_dependency_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/Tanh_1_grad/TanhGrad&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/Switch_3_grad/b_switch&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/while/lstm_cell/mul&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/while/lstm_cell/mul_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Const&quot;\\n  op: 
&quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  
}\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n     
 type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/AddN_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/Tanh&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/Enter&quot;\\n  input: &quot;decoding/rnn/while/lstm_cell/Tanh&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/StackPopV2/Enter&quot;\\n  input: 
&quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Sum&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/add_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/Sigmoid_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/Sigmoid_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: 
&quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/Enter&quot;\\n  input: &quot;decoding/rnn/while/lstm_cell/Sigmoid_1&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Enter_3_grad/Exit&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Merge_3_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Enter_4_grad/Exit&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Merge_4_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Enter_3_grad/Exit&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Merge_3_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Enter_4_grad/Exit&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Merge_4_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: 
DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/Sigmoid_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/Sigmoid_1_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/Tanh_grad/TanhGrad&quot;\\n  op: &quot;TanhGrad&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/mul_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_grad/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Enter_3_grad/Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_grad/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1_grad/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Enter_4_grad/Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/LSTMCellZeroState/zeros_1_grad/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr 
{\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/b_sync&quot;\\n  op: &quot;ControlTrigger&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayWrite/TensorArrayWriteV3/StackPopV2&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: 
&quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  
attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Merge_4_grad/tuple/control_dependency_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_1&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  attr {\\n   
 key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/Merge_4_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_2&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_2&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: 
&quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_2&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_2_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/LSTMCellZeroState/zeros_grad/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/LSTMCellZeroState/zeros_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Enter_3_grad/Exit&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/LSTMCellZeroState/zeros_grad/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/LSTMCellZeroState/zeros_1_grad/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/LSTMCellZeroState/zeros_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Enter_4_grad/Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/LSTMCellZeroState/zeros_1_grad/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: 
&quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/b_sync&quot;\\n  op: &quot;ControlTrigger&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3_grad/TensorArrayWrite/TensorArrayWriteV3/StackPopV2&quot;\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  
attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    
}\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/Merge_4_grad/tuple/control_dependency_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_1&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_2_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/AddN_3&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/AddN_3&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: 
&quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/BiasAddGrad&quot;\\n  op: &quot;BiasAddGrad&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/split_grad/concat&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;data_format&quot;\\n    value {\\n      s: &quot;NHWC&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/split_grad/concat&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/BiasAddGrad&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/split_grad/concat&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/split_grad/concat&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/BiasAddGrad&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/BiasAddGrad&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: 
&quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Identity_3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Identity_3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_3&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Sum&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    
value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    
value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1_grad/tuple/control_dependency_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: 
&quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/Enter&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      
type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/add_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Sum_1&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: 
&quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: 
&quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value 
{\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/StackPopV2&quot;\\n  attr {\\n    key: 
&quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Identity_3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Identity_3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode 
{\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_3&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        
}\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  
name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  
}\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs&quot;\\n  op: &quot;BroadcastGradientArgs&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: 
&quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: 
&quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/Enter_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Shape_1&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/control_dependency_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  attr {\\n    
key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    
key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Sum&quot;\\n  op: &quot;Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Sum&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1&quot;\\n  op: &quot;Mul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/add_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Sum_1&quot;\\n  op: &quot;Sum&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs:1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;keep_dims&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Sum_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/BroadcastGradientArgs/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/Reshape_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: 
&quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/concat&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/lstm_cell/concat&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/Enter&quot;\\n  input: &quot;decoding/rnn/while/lstm_cell/concat&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: 
&quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc&quot;\\n  op: 
&quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n          dim {\\n            size: 256\\n          }\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc_2&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc_1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc_2&quot;\\n  input: &quot;optimization/gradients/b_count_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/Add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/Switch:1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Sigmoid_1_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/Tanh_grad/TanhGrad&quot;\\n  op: &quot;TanhGrad&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/mul_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Sigmoid_1_grad/SigmoidGrad&quot;\\n  op: &quot;SigmoidGrad&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/Tanh_grad/TanhGrad&quot;\\n  op: &quot;TanhGrad&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/mul/StackPopV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/mul_1_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Rank&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/mod&quot;\\n  op: &quot;FloorMod&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/mod/Const&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Rank&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/mod/Const&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;decoding/rnn/while/TensorArrayReadV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN&quot;\\n  op: &quot;ShapeN&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/TensorArrayReadV3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/TensorArrayReadV3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  attr {\\n    
key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Enter&quot;\\n  input: &quot;decoding/rnn/while/TensorArrayReadV3&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n   
 }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/Identity_4&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/while/Identity_4&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/Enter_1&quot;\\n  input: &quot;decoding/rnn/while/Identity_4&quot;\\n  input: &quot;^optimization/gradients/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ConcatOffset&quot;\\n  op: &quot;ConcatOffset&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/mod&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN:1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice&quot;\\n  op: &quot;Slice&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ConcatOffset&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice_1&quot;\\n  op: &quot;Slice&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ConcatOffset:1&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/ShapeN:1&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    
key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice_1&quot;\\n  input: &quot;^optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/decoding/rnn/while/lstm_cell/concat_grad/Slice_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul/Enter_grad/b_acc&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n          dim {\\n            size: 74\\n          }\\n          dim {\\n            size: 256\\n          }\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul/Enter_grad/b_acc_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul/Enter_grad/b_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/decoding/rnn/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  
}\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: 
&quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n 
     type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n 
 input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n          dim {\\n            size: 128\\n          }\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  
}\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_2&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_2&quot;\\n  input: &quot;optimization/gradients/b_count_6&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/Add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/Switch:1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul/Enter&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/read&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    
key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n  op: &quot;MatMul&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd_grad/tuple/control_dependency&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_a&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;transpose_b&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      
s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/MatMul_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n          dim {\\n            size: 128\\n          }\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_2&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_1&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_2&quot;\\n  input: &quot;optimization/gradients/b_count_10&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/Add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/Switch:1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Rank&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/mod&quot;\\n  op: &quot;FloorMod&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/mod/Const&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Rank&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/mod/Const&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN&quot;\\n  op: &quot;ShapeN&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: 
&quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Identity_4&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Identity_4&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: 
&quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/Enter_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/while/Identity_4&quot;\\n  input: &quot;^optimization/gradients/Add_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    
}\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ConcatOffset&quot;\\n  op: &quot;ConcatOffset&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/mod&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN:1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice&quot;\\n  op: &quot;Slice&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ConcatOffset&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice_1&quot;\\n  op: &quot;Slice&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ConcatOffset:1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/ShapeN:1&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  
attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice_1&quot;\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/Slice_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n          dim {\\n            size: 42\\n          }\\n          dim {\\n            size: 128\\n          }\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc_2&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc_2&quot;\\n  input: &quot;optimization/gradients/b_count_6&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/Add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/Switch:1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Rank&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 2\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/mod&quot;\\n  op: &quot;FloorMod&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/mod/Const&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Rank&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/mod/Const&quot;\\n  op: &quot;Const&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN&quot;\\n  op: &quot;ShapeN&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  
}\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPushV2&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/TensorArrayReadV3&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/f_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Const_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n      
  s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Identity_4&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  op: &quot;StackV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Const_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/bw/while/Identity_4&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;stack_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPushV2_1&quot;\\n  op: &quot;StackPushV2&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/Enter_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/bw/while/Identity_4&quot;\\n  input: &quot;^optimization/gradients/Add_2&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;swap_memory&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1&quot;\\n  op: &quot;StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1/Enter&quot;\\n  input: &quot;^optimization/gradients/Sub_2&quot;\\n  attr {\\n    key: &quot;elem_type&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/StackPopV2_1/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN/f_acc_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ConcatOffset&quot;\\n  op: &quot;ConcatOffset&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/mod&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN&quot;\\n 
 input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN:1&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice&quot;\\n  op: &quot;Slice&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ConcatOffset&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice_1&quot;\\n  op: &quot;Slice&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ConcatOffset:1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/ShapeN:1&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice_1&quot;\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/tuple/control_dependency&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/tuple/control_dependency_1&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice_1&quot;\\n  input: &quot;^optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/tuple/group_deps&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/concat_grad/Slice_1&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n          dim {\\n            size: 42\\n          }\\n          dim {\\n            size: 128\\n          }\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc_2&quot;\\n  op: &quot;Merge&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc_1&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/NextIteration&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/Switch&quot;\\n  op: &quot;Switch&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc_2&quot;\\n  input: &quot;optimization/gradients/b_count_10&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/Add&quot;\\n  op: &quot;Add&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/Switch:1&quot;\\n  input: 
&quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/NextIteration&quot;\\n  op: &quot;NextIteration&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/Add&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc_3&quot;\\n  op: &quot;Exit&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/Switch&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3&quot;\\n  op: &quot;TensorArrayGradV3&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter_1&quot;\\n  input: &quot;^optimization/gradients/Sub_1&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;source&quot;\\n    value {\\n      s: &quot;optimization/gradients&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter&quot;\\n  op: &quot;Enter&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/fw/TensorArray_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_RESOURCE\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter_1&quot;\\n  op: &quot;Enter&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/fw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;frame_name&quot;\\n    value {\\n      s: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/while_context&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;is_constant&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;parallel_iterations&quot;\\n    value {\\n      i: 32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/gradient_flow&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter_1&quot;\\n  input: 
&quot;^optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3/Enter&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayWrite/TensorArrayWriteV3&quot;\\n  op: &quot;TensorArrayWriteV3&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/TensorArrayGradV3&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayWrite/TensorArrayWriteV3/StackPopV2&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/concat_grad/tuple/control_dependency&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayGrad/gradient_flow&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/TensorArrayReadV3_grad/TensorArrayWrite/TensorArrayWriteV3/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/fw/while/Identity_1&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: -1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/gradients/embedding_lookup_grad/Shape&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT64\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT64\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;<\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/ToInt32&quot;\\n  op: &quot;Cast&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/Shape&quot;\\n  attr {\\n    key: &quot;DstT&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;SrcT&quot;\\n    value {\\n      type: DT_INT64\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/Size&quot;\\n  op: &quot;Size&quot;\\n  input: &quot;inputs&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/ExpandDims/dim&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/ExpandDims&quot;\\n  op: 
&quot;ExpandDims&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/Size&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/ExpandDims/dim&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tdim&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/ToInt32&quot;\\n  input: 
&quot;optimization/gradients/embedding_lookup_grad/strided_slice/stack&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/strided_slice/stack_1&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/concat/axis&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/concat&quot;\\n  op: &quot;ConcatV2&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/ExpandDims&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/strided_slice&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/concat/axis&quot;\\n  attr {\\n    key: &quot;N&quot;\\n    value {\\n      i: 2\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tidx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/Reshape&quot;\\n  op: &quot;Reshape&quot;\\n  
input: &quot;optimization/gradients/AddN_4&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/concat&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/gradients/embedding_lookup_grad/Reshape_1&quot;\\n  op: &quot;Reshape&quot;\\n  input: &quot;inputs&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/ExpandDims&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tshape&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;<\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: 
&quot;enc_embedding/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;enc_embedding/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 60\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;enc_embedding/RMSProp&quot;\\n  input: &quot;enc_embedding/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;enc_embedding/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr 
{\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;<\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;enc_embedding/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;enc_embedding/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 60\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;enc_embedding/RMSProp_1&quot;\\n  input: &quot;enc_embedding/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;enc_embedding/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;enc_embedding/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        
dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\r\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dec_embedding/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;dec_embedding/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 13\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n   
 }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dec_embedding/RMSProp&quot;\\n  input: &quot;dec_embedding/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dec_embedding/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;\\\\r\\\\000\\\\000\\\\000\\\\n\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: 
&quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dec_embedding/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;dec_embedding/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 13\\n        }\\n        dim {\\n          size: 10\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dec_embedding/RMSProp_1&quot;\\n  input: &quot;dec_embedding/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: 
&quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dec_embedding/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dec_embedding/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;*\\\\000\\\\000\\\\000\\\\200\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 42\\n        }\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;*\\\\000\\\\000\\\\000\\\\200\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 42\\n        }\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 128\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: 
&quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp/read&quot;\\n  op: 
&quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 128\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: 
DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n  
    type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;*\\\\000\\\\000\\\\000\\\\200\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list 
{\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 42\\n        }\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  
attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;*\\\\000\\\\000\\\\000\\\\200\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: 
&quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 42\\n        }\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  
attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 128\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    
key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  
name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 128\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 128\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n 
 op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;J\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    
}\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 74\\n        }\\n        dim {\\n          size: 256\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value 
{\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;J\\\\000\\\\000\\\\000\\\\000\\\\001\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 74\\n   
     }\\n        dim {\\n          size: 256\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 256\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n 
 attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 256\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: 
&quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 256\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 256\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;decoding/rnn/lstm_cell/bias/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;@\\\\000\\\\000\\\\000\\\\r\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dense/kernel/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;dense/kernel/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    
}\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 64\\n        }\\n        dim {\\n          size: 13\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dense/kernel/RMSProp&quot;\\n  input: &quot;dense/kernel/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dense/kernel/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n  
    }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 2\\n          }\\n        }\\n        tensor_content: &quot;@\\\\000\\\\000\\\\000\\\\r\\\\000\\\\000\\\\000&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dense/kernel/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;dense/kernel/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim 
{\\n          size: 64\\n        }\\n        dim {\\n          size: 13\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dense/kernel/RMSProp_1&quot;\\n  input: &quot;dense/kernel/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/kernel/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dense/kernel/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 13\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp/Initializer/ones/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr 
{\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp/Initializer/ones&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dense/bias/RMSProp/Initializer/ones/shape_as_tensor&quot;\\n  input: &quot;dense/bias/RMSProp/Initializer/ones/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 13\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dense/bias/RMSProp&quot;\\n  input: &quot;dense/bias/RMSProp/Initializer/ones&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: 
&quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dense/bias/RMSProp&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 13\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp_1/Initializer/zeros&quot;\\n  op: &quot;Fill&quot;\\n  input: &quot;dense/bias/RMSProp_1/Initializer/zeros/shape_as_tensor&quot;\\n  input: &quot;dense/bias/RMSProp_1/Initializer/zeros/Const&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    
key: &quot;index_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp_1&quot;\\n  op: &quot;VariableV2&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;container&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;shape&quot;\\n    value {\\n      shape {\\n        dim {\\n          size: 13\\n        }\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;shared_name&quot;\\n    value {\\n      s: &quot;&quot;\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp_1/Assign&quot;\\n  op: &quot;Assign&quot;\\n  input: &quot;dense/bias/RMSProp_1&quot;\\n  input: &quot;dense/bias/RMSProp_1/Initializer/zeros&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n  attr {\\n    key: &quot;validate_shape&quot;\\n    value {\\n      b: true\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;dense/bias/RMSProp_1/read&quot;\\n  op: &quot;Identity&quot;\\n  input: &quot;dense/bias/RMSProp_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/learning_rate&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n   
     }\\n        float_val: 0.0010000000474974513\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/decay&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.8999999761581421\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/momentum&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 0.0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/epsilon&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_FLOAT\\n        tensor_shape {\\n        }\\n        float_val: 1.000000013351432e-10\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_enc_embedding/Unique&quot;\\n  op: &quot;Unique&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/Reshape_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;out_idx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_enc_embedding/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/Unique&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: 
&quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_enc_embedding/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_enc_embedding/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_enc_embedding/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/RMSProp/update_enc_embedding/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/Shape&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/strided_slice/stack&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/strided_slice/stack_1&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_enc_embedding/UnsortedSegmentSum&quot;\\n  op: &quot;UnsortedSegmentSum&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_grad/Reshape&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/Unique:1&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/strided_slice&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tnumsegments&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/RMSProp/update_enc_embedding/SparseApplyRMSProp&quot;\\n  op: &quot;SparseApplyRMSProp&quot;\\n  input: &quot;enc_embedding&quot;\\n  input: &quot;enc_embedding/RMSProp&quot;\\n  input: &quot;enc_embedding/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/UnsortedSegmentSum&quot;\\n  input: &quot;optimization/RMSProp/update_enc_embedding/Unique&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@enc_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/Unique&quot;\\n  op: &quot;Unique&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_1_grad/Reshape_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;out_idx&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/Shape&quot;\\n  op: &quot;Shape&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/Unique&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;out_type&quot;\\n    value {\\n      
type: DT_INT32\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/strided_slice/stack&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 0\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/strided_slice/stack_1&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/strided_slice/stack_2&quot;\\n  op: &quot;Const&quot;\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;dtype&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;value&quot;\\n    value {\\n      tensor {\\n        dtype: DT_INT32\\n        tensor_shape {\\n          dim {\\n            size: 1\\n          }\\n        }\\n        int_val: 1\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/strided_slice&quot;\\n  op: &quot;StridedSlice&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/Shape&quot;\\n  input: 
&quot;optimization/RMSProp/update_dec_embedding/strided_slice/stack&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/strided_slice/stack_1&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/strided_slice/stack_2&quot;\\n  attr {\\n    key: &quot;Index&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;begin_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;ellipsis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;end_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;new_axis_mask&quot;\\n    value {\\n      i: 0\\n    }\\n  }\\n  attr {\\n    key: &quot;shrink_axis_mask&quot;\\n    value {\\n      i: 1\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/UnsortedSegmentSum&quot;\\n  op: &quot;UnsortedSegmentSum&quot;\\n  input: &quot;optimization/gradients/embedding_lookup_1_grad/Reshape&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/Unique:1&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/strided_slice&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;Tnumsegments&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dec_embedding/SparseApplyRMSProp&quot;\\n  op: &quot;SparseApplyRMSProp&quot;\\n  input: &quot;dec_embedding&quot;\\n  input: 
&quot;dec_embedding/RMSProp&quot;\\n  input: &quot;dec_embedding/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/UnsortedSegmentSum&quot;\\n  input: &quot;optimization/RMSProp/update_dec_embedding/Unique&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;Tindices&quot;\\n    value {\\n      type: DT_INT32\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dec_embedding&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_encoding/bidirectional_rnn/fw/lstm_cell/kernel/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/kernel/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/MatMul/Enter_grad/b_acc_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: 
&quot;optimization/RMSProp/update_encoding/bidirectional_rnn/fw/lstm_cell/bias/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/fw/lstm_cell/bias/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/fw/fw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/fw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_encoding/bidirectional_rnn/bw/lstm_cell/kernel/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/kernel/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/MatMul/Enter_grad/b_acc_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/kernel&quot;\\n      }\\n 
   }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_encoding/bidirectional_rnn/bw/lstm_cell/bias/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp&quot;\\n  input: &quot;encoding/bidirectional_rnn/bw/lstm_cell/bias/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/encoding/bidirectional_rnn/bw/bw/while/lstm_cell/BiasAdd/Enter_grad/b_acc_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@encoding/bidirectional_rnn/bw/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_decoding/rnn/lstm_cell/kernel/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp&quot;\\n  input: &quot;decoding/rnn/lstm_cell/kernel/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/MatMul/Enter_grad/b_acc_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: 
&quot;loc:@decoding/rnn/lstm_cell/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_decoding/rnn/lstm_cell/bias/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp&quot;\\n  input: &quot;decoding/rnn/lstm_cell/bias/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/decoding/rnn/while/lstm_cell/BiasAdd/Enter_grad/b_acc_3&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@decoding/rnn/lstm_cell/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp/update_dense/kernel/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;dense/kernel&quot;\\n  input: &quot;dense/kernel/RMSProp&quot;\\n  input: &quot;dense/kernel/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/dense/Tensordot/transpose_1_grad/transpose&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/kernel&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  
name: &quot;optimization/RMSProp/update_dense/bias/ApplyRMSProp&quot;\\n  op: &quot;ApplyRMSProp&quot;\\n  input: &quot;dense/bias&quot;\\n  input: &quot;dense/bias/RMSProp&quot;\\n  input: &quot;dense/bias/RMSProp_1&quot;\\n  input: &quot;optimization/RMSProp/learning_rate&quot;\\n  input: &quot;optimization/RMSProp/decay&quot;\\n  input: &quot;optimization/RMSProp/momentum&quot;\\n  input: &quot;optimization/RMSProp/epsilon&quot;\\n  input: &quot;optimization/gradients/dense/BiasAdd_grad/tuple/control_dependency_1&quot;\\n  attr {\\n    key: &quot;T&quot;\\n    value {\\n      type: DT_FLOAT\\n    }\\n  }\\n  attr {\\n    key: &quot;_class&quot;\\n    value {\\n      list {\\n        s: &quot;loc:@dense/bias&quot;\\n      }\\n    }\\n  }\\n  attr {\\n    key: &quot;use_locking&quot;\\n    value {\\n      b: false\\n    }\\n  }\\n}\\nnode {\\n  name: &quot;optimization/RMSProp&quot;\\n  op: &quot;NoOp&quot;\\n  input: &quot;^optimization/RMSProp/update_enc_embedding/SparseApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_dec_embedding/SparseApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_encoding/bidirectional_rnn/fw/lstm_cell/kernel/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_encoding/bidirectional_rnn/fw/lstm_cell/bias/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_encoding/bidirectional_rnn/bw/lstm_cell/kernel/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_encoding/bidirectional_rnn/bw/lstm_cell/bias/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_decoding/rnn/lstm_cell/kernel/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_decoding/rnn/lstm_cell/bias/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_dense/kernel/ApplyRMSProp&quot;\\n  input: &quot;^optimization/RMSProp/update_dense/bias/ApplyRMSProp&quot;\\n}\\n';\n",
       "          }\n",
       "        </script>\n",
       "        <link rel=&quot;import&quot; href=&quot;https://tensorboard.appspot.com/tf-graph-basic.build.html&quot; onload=load()>\n",
       "        <div style=&quot;height:600px&quot;>\n",
       "          <tf-graph-basic id=&quot;graph0.553986137258213&quot;></tf-graph-basic>\n",
       "        </div>\n",
       "    \"></iframe>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "show_graph(tf.get_default_graph().as_graph_def())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch   0 Loss:  1.039 Accuracy: 0.6031 Epoch duration:  5.724s\n",
      "Epoch   1 Loss:  0.701 Accuracy: 0.7328 Epoch duration:  5.365s\n",
      "Epoch   2 Loss:  0.580 Accuracy: 0.7844 Epoch duration:  5.459s\n",
      "Epoch   3 Loss:  0.439 Accuracy: 0.8422 Epoch duration:  5.431s\n",
      "Epoch   4 Loss:  0.344 Accuracy: 0.8766 Epoch duration:  5.362s\n",
      "Epoch   5 Loss:  0.346 Accuracy: 0.8750 Epoch duration:  5.380s\n",
      "Epoch   6 Loss:  0.262 Accuracy: 0.9008 Epoch duration:  5.407s\n",
      "Epoch   7 Loss:  0.232 Accuracy: 0.9047 Epoch duration:  5.367s\n",
      "Epoch   8 Loss:  0.145 Accuracy: 0.9563 Epoch duration:  5.410s\n",
      "Epoch   9 Loss:  0.089 Accuracy: 0.9789 Epoch duration:  5.364s\n"
     ]
    }
   ],
   "source": [
    "sess.run(tf.global_variables_initializer())\n",
    "epochs = 10\n",
    "for epoch_i in range(epochs):\n",
    "    start_time = time.time()\n",
    "    for batch_i, (source_batch, target_batch) in enumerate(batch_data(X_train, y_train, batch_size)):\n",
    "        _, batch_loss, batch_logits = sess.run([optimizer, loss, logits],\n",
    "            feed_dict = {inputs: source_batch,\n",
    "                         outputs: target_batch[:, :-1],  # decoder input: target shifted right\n",
    "                         targets: target_batch[:, 1:]})  # decoder target: target shifted left\n",
    "    # Note: accuracy is computed on the final batch of the epoch only\n",
    "    accuracy = np.mean(batch_logits.argmax(axis=-1) == target_batch[:, 1:])\n",
    "    print('Epoch {:3} Loss: {:>6.3f} Accuracy: {:>6.4f} Epoch duration: {:>6.3f}s'.format(\n",
    "        epoch_i, batch_loss, accuracy, time.time() - start_time))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Translate on the test set by decoding greedily: start from `<GO>` and feed each prediction back in as the next decoder input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy on test set is:  0.923\n"
     ]
    }
   ],
   "source": [
    "# Greedy decoding: seed every sequence with <GO>, then repeatedly feed the\n",
    "# model's own predictions back in as the next decoder input\n",
    "source_batch, target_batch = next(batch_data(X_test, y_test, batch_size))\n",
    "\n",
    "dec_input = np.zeros((len(source_batch), 1)) + char2numY['<GO>']\n",
    "for i in range(y_seq_length):\n",
    "    batch_logits = sess.run(logits,\n",
    "                feed_dict = {inputs: source_batch,\n",
    "                             outputs: dec_input})\n",
    "    # Most likely character at the last time step, appended for the next pass\n",
    "    prediction = batch_logits[:, -1].argmax(axis=-1)\n",
    "    dec_input = np.hstack([dec_input, prediction[:, None]])\n",
    "\n",
    "print('Accuracy on test set is: {:>6.3f}'.format(np.mean(dec_input == target_batch)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's take the first two examples from this test batch and see what the model produces:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_preds = 2\n",
    "# Map the integer indices back to characters, dropping input padding\n",
    "source_chars = [[num2charX[l] for l in sent if num2charX[l] != \"<PAD>\"] for sent in source_batch[:num_preds]]\n",
    "dest_chars = [[num2charY[l] for l in sent] for sent in dec_input[:num_preds, 1:]]\n",
    "\n",
    "for date_in, date_out in zip(source_chars, dest_chars):\n",
    "    print(''.join(date_in) + ' => ' + ''.join(date_out))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "source_batch[0]"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
