{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We wish to find the Minimum Edit Distance (MED) between two strings. That is, given two strings, align them and find the minimum number of operations from {Insert, Delete, Substitute} needed to transform the first string into the second.\n",
    "Then, we want to recover the actual sequence of operations that achieves this MED, e.g. \"Insert 'A' at position 3\".\n",
    "\n",
    "<img src=\"./MinimumEditDistance1.jpg\" />\n",
    "\n",
    "We can achieve this goal with optimal complexity using Dynamic Programming (DP), as follows:\n",
    "Define:\n",
    "* String 1: $X$ of length $n$\n",
    "* String 2: $Y$ of length $m$\n",
    "* $D[i,j]$: Edit Distance between substrings $X[1 \\rightarrow i]$ and $Y[1 \\rightarrow j]$\n",
    "\n",
    "Using a \"bottom-up\" approach, the MED between $X$ and $Y$ is $D[n,m]$.\n",
    "\n",
    "We assume that the distance between a string of length 0 and a string of length $k$ is $k$, since we need to insert $k$ characters in order to create the second string.\n",
    "\n",
    "The matrix initially looks like this:\n",
    "\n",
    "<img src=\"./MinimumEditDistance2.jpg\" />\n",
    "\n",
    "In order to recover the actual operations, we need to keep track of which operation produced each cell, that is, build a \"Backtrace\":\n",
    "\n",
    "<img src=\"./MinimumEditDistance3.jpg\" />\n",
    "\n",
    "Complexity:\n",
    "\n",
    "* Time: $O(nm)$\n",
    "* Space: $O(nm)$\n",
    "* Backtrace: $O(n+m)$\n"
   ]
  },
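  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Concretely, the recurrence that fills the matrix is, for $i, j \\geq 1$:\n",
    "\n",
    "$$D[i,j] = \\min \\begin{cases} D[i-1,j] + 1 & \\text{(delete)} \\\\\\\\ D[i,j-1] + 1 & \\text{(insert)} \\\\\\\\ D[i-1,j-1] + [\\, X_i \\neq Y_j \\,] & \\text{(copy / substitute)} \\end{cases}$$\n",
    "\n",
    "where $[\\, X_i \\neq Y_j \\,]$ is 1 if the characters differ and 0 if they match.\n"
   ]
  },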
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Imports:\n",
    "\n",
    "import numpy as np\n",
    "import string\n",
    "import json\n",
    "import csv\n",
    "import itertools\n",
    "import time\n",
    "from word2keypress import distance, Keyboard\n",
    "from ast import literal_eval"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def find_med_backtrace(str1, str2):\n",
    "    '''\n",
    "    This function calculates the Minimum Edit Distance between two words using\n",
    "    Dynamic Programming, and recovers the optimal transition path by backtracing.\n",
    "    Input parameters: original word, target word\n",
    "    Output: minimum edit distance, path\n",
    "    Example: ('password', 'Passw0rd') -> 2.0, [('s', 'P', 0), ('s', '0', 5)]\n",
    "    '''\n",
    "    # Definitions:\n",
    "    n = len(str1)\n",
    "    m = len(str2)\n",
    "    D = np.full((n + 1, m + 1), np.inf)\n",
    "    trace = np.full((n + 1, m + 1), None)\n",
    "    for i in range(1, n + 1):\n",
    "        trace[i, 0] = (i - 1, 0)\n",
    "    for j in range(1, m + 1):\n",
    "        trace[0, j] = (0, j - 1)\n",
    "    # Initialization:\n",
    "    for i in range(n + 1):\n",
    "        D[i, 0] = i\n",
    "    for j in range(m + 1):\n",
    "        D[0, j] = j\n",
    "    # Fill the matrices:\n",
    "    for i in range(1, n + 1):\n",
    "        for j in range(1, m + 1):\n",
    "            delete = D[i - 1, j] + 1\n",
    "            insert = D[i, j - 1] + 1\n",
    "            if str1[i - 1] == str2[j - 1]:\n",
    "                sub = np.inf\n",
    "                copy = D[i - 1, j - 1]\n",
    "            else:\n",
    "                sub = D[i - 1, j - 1] + 1\n",
    "                copy = np.inf\n",
    "            op_arr = [delete, insert, copy, sub]\n",
    "            D[i, j] = np.min(op_arr)\n",
    "            op = np.argmin(op_arr)\n",
    "            if op == 0:\n",
    "                # delete, go down\n",
    "                trace[i, j] = (i - 1, j)\n",
    "            elif op == 1:\n",
    "                # insert, go left\n",
    "                trace[i, j] = (i, j - 1)\n",
    "            else:\n",
    "                # copy or substitute, go diagonal\n",
    "                trace[i, j] = (i - 1, j - 1)\n",
    "    # Find the path of transitions:\n",
    "    i = n\n",
    "    j = m\n",
    "    cursor = trace[i, j]\n",
    "    path = []\n",
    "    while cursor is not None:\n",
    "        # 3 possible directions:\n",
    "        if cursor[0] == i - 1 and cursor[1] == j - 1:\n",
    "            # diagonal - substitute or copy\n",
    "            if str1[cursor[0]] != str2[cursor[1]]:\n",
    "                # substitute\n",
    "                path.append((\"s\", str2[cursor[1]], cursor[0]))\n",
    "            i = i - 1\n",
    "            j = j - 1\n",
    "        elif cursor[0] == i and cursor[1] == j - 1:\n",
    "            # go left - insert\n",
    "            path.append((\"i\", str2[cursor[1]], cursor[0]))\n",
    "            j = j - 1\n",
    "        else:\n",
    "            # cursor[0] == i - 1 and cursor[1] == j\n",
    "            # go down - delete\n",
    "            path.append((\"d\", None, cursor[0]))\n",
    "            i = i - 1\n",
    "        cursor = trace[cursor[0], cursor[1]]\n",
    "    return D[n, m], list(reversed(path))"
   ]
  },
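  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, we can compare the distances against a minimal, self-contained Levenshtein implementation (the helper name `levenshtein_ref` is ours, not part of any library):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def levenshtein_ref(a, b):\n",
    "    # Plain-Python Wagner-Fischer reference: distances only, no backtrace.\n",
    "    prev = list(range(len(b) + 1))\n",
    "    for i, ca in enumerate(a, 1):\n",
    "        curr = [i]\n",
    "        for j, cb in enumerate(b, 1):\n",
    "            curr.append(min(prev[j] + 1,                 # delete\n",
    "                            curr[j - 1] + 1,             # insert\n",
    "                            prev[j - 1] + (ca != cb)))   # copy / substitute\n",
    "        prev = curr\n",
    "    return prev[-1]\n",
    "\n",
    "assert levenshtein_ref('password', 'Passw0rd') == 2\n",
    "assert levenshtein_ref('', 'abc') == 3\n",
    "assert levenshtein_ref('kitten', 'sitting') == 3"
   ]
  },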
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimum Edit Distance with Backtrace\n",
    "def find_med_backtrace_kb(str1, str2):\n",
    "    '''\n",
    "    This function calculates the Minimum Edit Distance between two words using\n",
    "    Dynamic Programming, and recovers the optimal transition path by backtracing.\n",
    "    This version uses the KeyPress representation.\n",
    "    Input parameters: original word, target word\n",
    "    Output: minimum edit distance, path\n",
    "    Example:\n",
    "    ('password', 'PASSword') -> 2.0 , [('i', '\\x04', 0), ('i', '\\x04', 4)]\n",
    "    '''\n",
    "    # Transform to keyboard representation:\n",
    "    kb = Keyboard()\n",
    "    str1 = kb.word_to_keyseq(str1)\n",
    "    str2 = kb.word_to_keyseq(str2)\n",
    "    # Definitions:\n",
    "    n = len(str1)\n",
    "    m = len(str2)\n",
    "    D = np.full((n + 1, m + 1), np.inf)\n",
    "    trace = np.full((n + 1, m + 1), None)\n",
    "    for i in range(1, n + 1):\n",
    "        trace[i, 0] = (i - 1, 0)\n",
    "    for j in range(1, m + 1):\n",
    "        trace[0, j] = (0, j - 1)\n",
    "    # Initialization:\n",
    "    for i in range(n + 1):\n",
    "        D[i, 0] = i\n",
    "    for j in range(m + 1):\n",
    "        D[0, j] = j\n",
    "    # Fill the matrices:\n",
    "    for i in range(1, n + 1):\n",
    "        for j in range(1, m + 1):\n",
    "            delete = D[i - 1, j] + 1\n",
    "            insert = D[i, j - 1] + 1\n",
    "            if str1[i - 1] == str2[j - 1]:\n",
    "                sub = np.inf\n",
    "                copy = D[i - 1, j - 1]\n",
    "            else:\n",
    "                sub = D[i - 1, j - 1] + 1\n",
    "                copy = np.inf\n",
    "            op_arr = [delete, insert, copy, sub]\n",
    "            D[i, j] = np.min(op_arr)\n",
    "            op = np.argmin(op_arr)\n",
    "            if op == 0:\n",
    "                # delete, go down\n",
    "                trace[i, j] = (i - 1, j)\n",
    "            elif op == 1:\n",
    "                # insert, go left\n",
    "                trace[i, j] = (i, j - 1)\n",
    "            else:\n",
    "                # copy or substitute, go diagonal\n",
    "                trace[i, j] = (i - 1, j - 1)\n",
    "    # Find the path of transitions:\n",
    "    i = n\n",
    "    j = m\n",
    "    cursor = trace[i, j]\n",
    "    path = []\n",
    "    while cursor is not None:\n",
    "        # 3 possible directions:\n",
    "        if cursor[0] == i - 1 and cursor[1] == j - 1:\n",
    "            # diagonal - substitute or copy\n",
    "            if str1[cursor[0]] != str2[cursor[1]]:\n",
    "                # substitute\n",
    "                path.append((\"s\", str2[cursor[1]], cursor[0]))\n",
    "            i = i - 1\n",
    "            j = j - 1\n",
    "        elif cursor[0] == i and cursor[1] == j - 1:\n",
    "            # go left - insert\n",
    "            path.append((\"i\", str2[cursor[1]], cursor[0]))\n",
    "            j = j - 1\n",
    "        else:\n",
    "            # cursor[0] == i - 1 and cursor[1] == j\n",
    "            # go down - delete\n",
    "            path.append((\"d\", None, cursor[0]))\n",
    "            i = i - 1\n",
    "        cursor = trace[cursor[0], cursor[1]]\n",
    "    return D[n, m], list(reversed(path))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.0\n",
      "[('i', '\\x03', 0), ('s', '0', 5)]\n"
     ]
    }
   ],
   "source": [
    "med, path = find_med_backtrace_kb('password', 'Passw0rd')\n",
    "print(med)\n",
    "print(path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Decoder - given a word and a path of transition, recover the final word:\n",
    "def path2word(word, path):\n",
    "    '''\n",
    "    This function applies the given transition path to the input word and returns the decoded word.\n",
    "    Input parameters: original word, transition path\n",
    "    Output: decoded word\n",
    "    '''\n",
    "    if not path:\n",
    "        return word\n",
    "    final_word = []\n",
    "    word_len = len(word)\n",
    "    path_len = len(path)\n",
    "    i = 0\n",
    "    j = 0\n",
    "    while (i < word_len or j < path_len):\n",
    "        if (j < path_len and path[j][2] == i):\n",
    "            if (path[j][0] == \"s\"):\n",
    "                # substitute\n",
    "                final_word.append(path[j][1])\n",
    "                i += 1\n",
    "                j += 1\n",
    "            elif (path[j][0] == \"d\"):\n",
    "                # delete\n",
    "                i += 1\n",
    "                j += 1\n",
    "            else:\n",
    "                # \"i\", insert\n",
    "                final_word.append(path[j][1])\n",
    "                j += 1\n",
    "        else:\n",
    "            final_word.append(word[i])\n",
    "            i += 1\n",
    "    return ''.join(final_word)"
   ]
  },
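  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, the decoding logic can be sketched standalone (`apply_path_simple` is a hypothetical helper mirroring `path2word` for plain strings):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def apply_path_simple(word, path):\n",
    "    # Mirrors path2word: 's' replaces word[i], 'i' inserts before position i, 'd' drops word[i].\n",
    "    out, i, j = [], 0, 0\n",
    "    while i < len(word) or j < len(path):\n",
    "        if j < len(path) and path[j][2] == i:\n",
    "            op, ch, _ = path[j]\n",
    "            j += 1\n",
    "            if op == 's':\n",
    "                out.append(ch)\n",
    "                i += 1\n",
    "            elif op == 'd':\n",
    "                i += 1\n",
    "            else:  # 'i'\n",
    "                out.append(ch)\n",
    "        else:\n",
    "            out.append(word[i])\n",
    "            i += 1\n",
    "    return ''.join(out)\n",
    "\n",
    "assert apply_path_simple('password', [('s', 'P', 0), ('s', '0', 5)]) == 'Passw0rd'\n",
    "assert apply_path_simple('abc', [('d', None, 1)]) == 'ac'\n",
    "assert apply_path_simple('abc', [('i', 'X', 3)]) == 'abcX'"
   ]
  },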
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Decoder - given a word and a path of transition, recover the final word: KEYPRESS Version\n",
    "def path2word_kb(word, path):\n",
    "    '''\n",
    "    This function applies the given transition path to the input word and returns the decoded word.\n",
    "    This is the KeyPress version, which handles the keyboard representation.\n",
    "    Input parameters: original word, transition path\n",
    "    Output: decoded word\n",
    "    '''\n",
    "    kb = Keyboard()\n",
    "    word = kb.word_to_keyseq(word)\n",
    "    if not path:\n",
    "        return kb.keyseq_to_word(word)\n",
    "    final_word = []\n",
    "    word_len = len(word)\n",
    "    path_len = len(path)\n",
    "    i = 0\n",
    "    j = 0\n",
    "    while (i < word_len or j < path_len):\n",
    "        if (j < path_len and path[j][2] == i):\n",
    "            if (path[j][0] == \"s\"):\n",
    "                # substitute\n",
    "                final_word.append(path[j][1])\n",
    "                i += 1\n",
    "                j += 1\n",
    "            elif (path[j][0] == \"d\"):\n",
    "                # delete\n",
    "                i += 1\n",
    "                j += 1\n",
    "            else:\n",
    "                # \"i\", insert\n",
    "                final_word.append(path[j][1])\n",
    "                j += 1\n",
    "        else:\n",
    "            final_word.append(word[i])\n",
    "            i += 1\n",
    "    return (kb.keyseq_to_word(''.join(final_word)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4.0\n",
      "[('i', '\\x04', 0), ('s', '\\x03', 1), ('i', '2', 2), ('i', '\\x04', 4)]\n",
      "P@SSword\n"
     ]
    }
   ],
   "source": [
    "# Simple test:\n",
    "pair = (\"password\", \"P@SSword\")\n",
    "med, path = find_med_backtrace_kb(pair[0], pair[1])\n",
    "print(med)\n",
    "print(path)\n",
    "decoded_word = path2word_kb(pair[0], path)\n",
    "print(decoded_word)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_transition_dict():\n",
    "    '''\n",
    "    Generate a dictionary of all possible transitions and save it in JSON format.\n",
    "    Assumptions: positions range over 0-30 and words are comprised of 98 available characters.\n",
    "    'd' - ('d', None, 0-30) -> 31 options\n",
    "    's' - ('s', char, 0-30) -> 98x31 = 3038 options\n",
    "    'i' - ('i', char, 0-30) -> 98x31 = 3038 options\n",
    "    Size of table: 31 + 3038 + 3038 = 6107\n",
    "    '''\n",
    "    max_len = 31\n",
    "    d_list = [('d', None, i) for i in range(max_len)]\n",
    "    asci = list(string.ascii_letters)\n",
    "    punc = list(string.punctuation)\n",
    "    dig = list(string.digits)\n",
    "    chars = asci + punc + dig + [\" \", \"\\t\", \"\\x03\", \"\\x04\"]\n",
    "    s_list = [('s', c, i) for c in chars for i in range(max_len)]\n",
    "    i_list = [('i', c, i) for c in chars for i in range(max_len)]\n",
    "\n",
    "    transition_table = d_list + s_list + i_list\n",
    "    transition_dict_2idx = {}\n",
    "    transition_dict_2path = {}\n",
    "    for i in range(len(transition_table)):\n",
    "        transition_dict_2idx[str(transition_table[i])] = i\n",
    "        transition_dict_2path[i] = str(transition_table[i])\n",
    "    with open('trans_dict_2idx.json', 'w') as outfile:  \n",
    "        json.dump(transition_dict_2idx, outfile)\n",
    "    with open('trans_dict_2path.json', 'w') as outfile:  \n",
    "        json.dump(transition_dict_2path, outfile)\n",
    "    print(\"Transitions dictionary created as trans_dict_2idx.json & trans_dict_2path.json\")\n",
    "    # To read the dictionaries back:\n",
    "    # with open('trans_dict_2idx.json', 'r') as f:\n",
    "    #     transition_dict = json.load(f)"
   ]
  },
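  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sizes quoted in the docstring can be checked directly:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import string\n",
    "\n",
    "chars_check = (list(string.ascii_letters) + list(string.punctuation)\n",
    "               + list(string.digits) + [\" \", \"\\t\", \"\\x03\", \"\\x04\"])\n",
    "assert len(chars_check) == 52 + 32 + 10 + 4 == 98\n",
    "max_len = 31  # positions 0-30\n",
    "assert max_len + 2 * len(chars_check) * max_len == 6107"
   ]
  },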
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Transitions dictionary created as trans_dict_2idx.json & trans_dict_2path.json\n"
     ]
    }
   ],
   "source": [
    "generate_transition_dict()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "def path2idx(path, dictionary):\n",
    "    '''\n",
    "    This function converts a human-readable transition path to a\n",
    "    dictionary-indices path (for future use in RNNs).\n",
    "    Input parameters: human-readable path, dictionary\n",
    "    Output: dictionary-indices path\n",
    "    [('i', '\\x04', 0), ('s', '\\x03', 1), ('i', '2', 2), ('i', '\\x04', 4)] ->\n",
    "    [6076, 3008, 5737, 6080]\n",
    "    '''\n",
    "    idx_path = []\n",
    "    for p in path:\n",
    "        idx_path.append(dictionary[str(p)])\n",
    "    return idx_path"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "def idx2path(path, dictionary):\n",
    "    '''\n",
    "    This function converts a dictionary-indices transition path to a\n",
    "    human-readable path (for future use in RNNs).\n",
    "    Input parameters: dictionary-indices path, dictionary\n",
    "    Output: human-readable path\n",
    "    [6076, 3008, 5737, 6080] ->\n",
    "    [('i', '\\x04', 0), ('s', '\\x03', 1), ('i', '2', 2), ('i', '\\x04', 4)]\n",
    "    '''\n",
    "    str_path = []\n",
    "    for i in path:\n",
    "        str_path.append(literal_eval(dictionary[str(i)]))\n",
    "    return str_path"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "metadata": {},
   "outputs": [],
   "source": [
    "def idx2path_no_json(path, dictionary):\n",
    "    '''\n",
    "    This function converts a dictionary-indices transition path, given as a\n",
    "    string (e.g. as read from a CSV), to a path of string-form transitions.\n",
    "    Input parameters: stringified dictionary-indices path, dictionary\n",
    "    Output: path as a list of transition strings\n",
    "    [6076, 3008, 5737, 6080] ->\n",
    "    [('i', '\\x04', 0), ('s', '\\x03', 1), ('i', '2', 2), ('i', '\\x04', 4)]\n",
    "    '''\n",
    "    str_path = []\n",
    "    path = literal_eval(path)\n",
    "    for i in path:\n",
    "        str_path.append(dictionary[str(i)])\n",
    "    return str_path"
   ]
  },
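  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy round-trip through the two dictionaries (standalone, with a 3-entry table in place of the full 6107-entry one):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from ast import literal_eval\n",
    "\n",
    "toy_table = [('d', None, 0), ('i', 'a', 0), ('s', 'b', 1)]\n",
    "toy_2idx = {str(t): i for i, t in enumerate(toy_table)}\n",
    "toy_2path = {str(i): str(t) for i, t in enumerate(toy_table)}\n",
    "\n",
    "toy_path = [('i', 'a', 0), ('s', 'b', 1)]\n",
    "idxs = [toy_2idx[str(p)] for p in toy_path]             # as in path2idx\n",
    "back = [literal_eval(toy_2path[str(i)]) for i in idxs]  # as in idx2path\n",
    "assert idxs == [1, 2]\n",
    "assert back == toy_path"
   ]
  },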
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[6076, 3008, 5737, 6080]\n"
     ]
    }
   ],
   "source": [
    "with open('trans_dict_2idx.json', 'r') as f:\n",
    "    tran_dict = json.load(f)\n",
    "idx_path = path2idx(path, tran_dict)\n",
    "print(idx_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('i', '\\x04', 0), ('s', '\\x03', 1), ('i', '2', 2), ('i', '\\x04', 4)]\n"
     ]
    }
   ],
   "source": [
    "with open('trans_dict_2path.json', 'r') as f:\n",
    "    tran_dict = json.load(f)\n",
    "str_path = idx2path(idx_path, tran_dict)\n",
    "print(str_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2pws_pairs(csv_fpath):\n",
    "    '''\n",
    "    Parse the csv file, where every row is a username and a string encoding a list of passwords.\n",
    "    Using itertools, collect all ordered pairs (permutations) of passwords for every user.\n",
    "    Input parameter: path to original dataset csv\n",
    "    Output: list of password pairs\n",
    "    '''\n",
    "    pws_pairs = []\n",
    "    with open(csv_fpath) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for row in csv_reader:\n",
    "            if (len(row) != 2):\n",
    "                print(\"File format error!\")\n",
    "                break\n",
    "            username = row[0]\n",
    "            pws_string = row[1]\n",
    "            pws_list = json.loads(pws_string)\n",
    "            pws_pairs.extend(list(itertools.permutations(pws_list,2)))\n",
    "    return pws_pairs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2pws_pairs_gen(csv_fpath):\n",
    "    '''\n",
    "    Generator that parses the csv file, where every row is a username and a string encoding a list of passwords.\n",
    "    Yields all ordered pairs (permutations) of passwords for every user, one at a time.\n",
    "    Input parameter: path to original dataset csv\n",
    "    '''\n",
    "    with open(csv_fpath) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for i, row in enumerate(csv_reader):\n",
    "            if (len(row) != 2):\n",
    "                print(\"File format error @ line {}\\n{!r}!\".format(i, row))\n",
    "                break\n",
    "            username, pws_string = row\n",
    "            pws_list = json.loads(pws_string)\n",
    "            for p in itertools.permutations(pws_list, 2):\n",
    "                yield p"
   ]
  },
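  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`itertools.permutations(pws_list, 2)` yields every ordered pair of list positions, so a user with $k$ passwords contributes $k(k-1)$ pairs:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import itertools\n",
    "\n",
    "pws_list = ['hunter2', 'Hunter2', 'hunter2!']  # hypothetical example passwords\n",
    "pairs = list(itertools.permutations(pws_list, 2))\n",
    "assert len(pairs) == 3 * 2  # k * (k - 1)\n",
    "assert ('hunter2', 'Hunter2') in pairs\n",
    "assert ('Hunter2', 'hunter2') in pairs  # ordered: both directions appear"
   ]
  },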
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2pws_pairs_gen_no_json(csv_fpath):\n",
    "    '''\n",
    "    Generator that parses the csv file, where every row is a username and a string encoding a list of passwords.\n",
    "    This version does not use json to parse the password list, and skips the header row.\n",
    "    Yields all ordered pairs (permutations) of passwords for every user.\n",
    "    Input parameter: path to original dataset csv\n",
    "    '''\n",
    "    with open(csv_fpath) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for i, row in enumerate(csv_reader):\n",
    "            if (len(row) != 2):\n",
    "                print(\"File format error @ line {}\\n{!r}!\".format(i, row))\n",
    "                break\n",
    "            if not i:\n",
    "                # first row is a header: use empty placeholder values\n",
    "                username, pws_string = \"\", \"[]\"\n",
    "            else:\n",
    "                username, pws_string = row\n",
    "            pws_list = literal_eval(pws_string)  # safer than eval for untrusted input\n",
    "            for p in itertools.permutations(pws_list, 2):\n",
    "                yield p"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_test(csv_fpath):\n",
    "    '''\n",
    "    This function tests the encoder-decoder functions, in order to make sure\n",
    "    that for every transition path from pass1 to pass2, the password decoded from pass1\n",
    "    and the transition path is the same as pass2.\n",
    "    '''\n",
    "    start = time.perf_counter()  # time.clock() was removed in Python 3.8\n",
    "    pws_pairs_gen = csv2pws_pairs_gen(csv_fpath)\n",
    "    for i, pair in enumerate(pws_pairs_gen):\n",
    "        med, path = find_med_backtrace_kb(pair[0], pair[1])\n",
    "        decoded_word = path2word_kb(pair[0], path)\n",
    "        if (decoded_word != pair[1]):\n",
    "            print(\"Test failed on: {}\".format(pair))\n",
    "            print(\"Path chosen: {}\".format(path))\n",
    "            print(\"Decoded Password: {}\".format(decoded_word))\n",
    "    print(\"Testing done in {} seconds on a total of {} password pairs\"\n",
    "          .format(time.perf_counter() - start, i))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Testing done in 152.82311970842693 seconds on a total of 179657 passwords pairs\n"
     ]
    }
   ],
   "source": [
    "# Testing\n",
    "csv_fpath = './sample_username_list_tr.csv'\n",
    "run_test(csv_fpath)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate new dataset:\n",
    "def csv2dataset(csv_fpath):\n",
    "    '''\n",
    "    This function generates the new dataset format from the original one.\n",
    "    The new dataset is in the form: [pass1, pass2, human-readable transition path].\n",
    "    Input parameter: path to original dataset csv\n",
    "    '''\n",
    "    kb = Keyboard()\n",
    "    print(\"Started building dataset...\")\n",
    "    start = time.perf_counter()\n",
    "    dataset = []\n",
    "    pws_pairs = csv2pws_pairs(csv_fpath)\n",
    "    with open('trans_dataset.csv', 'w', newline='') as csvfile:\n",
    "        # Create the writer once, outside the loop:\n",
    "        csv_writer = csv.writer(csvfile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n",
    "        for i, pair in enumerate(pws_pairs):\n",
    "            med, path = find_med_backtrace_kb(pair[0], pair[1])\n",
    "            decoded_word = path2word_kb(pair[0], path)\n",
    "            str_path = [str(p) for p in path]\n",
    "            if (decoded_word != pair[1]):\n",
    "                print(\"Test failed on: {}\".format(pair))\n",
    "                print(\"Path chosen: {}\".format(path))\n",
    "                print(\"Decoded Password: {}\".format(decoded_word))\n",
    "            else:\n",
    "                dataset.append((pair[0], pair[1], path))\n",
    "            if (i % 50000 == 0):\n",
    "                print(\"Progress: {}%\".format((i / len(pws_pairs)) * 100.0))\n",
    "            csv_writer.writerow([json.dumps(pair[0]), json.dumps(pair[1]), json.dumps(str_path)])\n",
    "    print(\"Dataset created in {} seconds on a total of {} password pairs\".format(time.perf_counter() - start, len(pws_pairs)))\n",
    "    print(\"New Dataset CSV file: trans_dataset.csv\")\n",
    "    return dataset"
   ]
  },
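  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each output row stores the path JSON-encoded, so it survives the CSV quoting and can be recovered with `json.loads` (a standalone sketch using an in-memory buffer):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import csv\n",
    "import io\n",
    "import json\n",
    "\n",
    "str_path = [\"('s', 'P', 0)\", \"('s', '0', 5)\"]\n",
    "buf = io.StringIO()\n",
    "writer = csv.writer(buf, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n",
    "writer.writerow(['password', 'Passw0rd', json.dumps(str_path)])\n",
    "row = next(csv.reader(io.StringIO(buf.getvalue()), delimiter=','))\n",
    "assert row[:2] == ['password', 'Passw0rd']\n",
    "assert json.loads(row[2]) == str_path"
   ]
  },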
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2dataset_gen(csv_fpath):\n",
    "    '''\n",
    "    This (generator) function generates the new dataset format from the original one.\n",
    "    The new dataset is in the form: [pass1, pass2, human-readable transition path].\n",
    "    Input parameter: path to original dataset csv\n",
    "    '''\n",
    "    kb = Keyboard()\n",
    "    print(\"Started building dataset...\")\n",
    "    start = time.perf_counter()\n",
    "    pairs_generator = csv2pws_pairs_gen(csv_fpath)\n",
    "    with open('trans_dataset.csv', 'w', newline='') as csvfile:\n",
    "        csv_writer = csv.writer(\n",
    "            csvfile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL\n",
    "        )\n",
    "        for i, pair in enumerate(pairs_generator):\n",
    "            med, path = find_med_backtrace_kb(pair[0], pair[1])\n",
    "            decoded_word = path2word_kb(pair[0], path)\n",
    "            str_path = [str(p) for p in path]\n",
    "            if (decoded_word != pair[1]):\n",
    "                print(\"Test failed on: {}\".format(pair))\n",
    "                print(\"Path chosen: {}\".format(path))\n",
    "                print(\"Decoded Password: {}\".format(decoded_word))\n",
    "            if (i % 50000 == 0):\n",
    "                print(\"Progress: processed {} pairs so far\".format(i))\n",
    "            csv_writer.writerow([pair[0], pair[1], json.dumps(str_path)])\n",
    "    print(\"Dataset created in {} seconds on a total of {} password pairs\".format(time.perf_counter() - start, i))\n",
    "    print(\"New Dataset CSV file: trans_dataset.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2dataset_dict_gen(csv_fpath, dict_json):\n",
    "    '''\n",
    "    This (generator) function generates the new dataset format from the original one.\n",
    "    The new dataset is in the form: [pass1, pass2, dictionary-indices transition path].\n",
    "    Input parameter: path to original dataset csv, path to the json dictionary file\n",
    "    '''\n",
    "    kb = Keyboard()\n",
    "    print(\"Started building dataset...\")\n",
    "    start = time.perf_counter()\n",
    "    pairs_generator = csv2pws_pairs_gen(csv_fpath)\n",
    "    with open(dict_json, 'r') as f:\n",
    "        trans_dict = json.load(f)\n",
    "    with open('trans_dataset.csv', 'w', newline='') as csvfile:\n",
    "        csv_writer = csv.writer(\n",
    "            csvfile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL\n",
    "        )\n",
    "        for i, pair in enumerate(pairs_generator):\n",
    "            if not i:\n",
    "                # skip first line\n",
    "                continue\n",
    "            if (len(pair[0]) > 30 or len(pair[1]) > 30):\n",
    "                continue\n",
    "            med, path = find_med_backtrace_kb(pair[0], pair[1])\n",
    "            decoded_word = path2word_kb(pair[0], path)\n",
    "            str_path = [str(p) for p in path]\n",
    "            pws_indices = path2idx(str_path, trans_dict)\n",
    "            if (decoded_word != pair[1]):\n",
    "                print(\"Test failed on: {}\".format(pair))\n",
    "                print(\"Path chosen: {}\".format(path))\n",
    "                print(\"Decoded Password: {}\".format(decoded_word))\n",
    "            if (i % 50000 == 0):\n",
    "                print(\"Progress: processed {} pairs so far\".format(i))\n",
    "            csv_writer.writerow([\n",
    "                pair[0], pair[1], json.dumps(pws_indices)\n",
    "            ])\n",
    "    print(\"Dataset created in {} seconds on a total of {} password pairs\".format(time.perf_counter() - start, i))\n",
    "    print(\"New Dataset CSV file: trans_dataset.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2dataset_dict_gen_no_json(csv_fpath, dict_json):\n",
    "    '''\n",
    "    This function builds the new dataset format from the original one and writes it to a csv file.\n",
    "    The new dataset is in the form: [pass1, pass2, dictionary-indices transition path].\n",
    "    Input parameters: path to the original dataset csv, path to the json dictionary file.\n",
    "    '''\n",
    "    kb = Keyboard()\n",
    "    print(\"Started building dataset...\")\n",
    "    start = time.perf_counter()  # time.clock() is deprecated; perf_counter() measures elapsed time\n",
    "    pairs_generator = csv2pws_pairs_gen_no_json(csv_fpath)\n",
    "    with open(dict_json, 'r') as f:\n",
    "        trans_dict = json.load(f)\n",
    "    with open('trans_dataset.csv', 'w', newline='') as csvfile:\n",
    "        csv_writer = csv.writer(\n",
    "            csvfile, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL\n",
    "        )\n",
    "        for i, pair in enumerate(pairs_generator):\n",
    "            if not i:\n",
    "                # skip the header line\n",
    "                continue\n",
    "            if (len(pair[0]) > 30 or len(pair[1]) > 30):\n",
    "                continue\n",
    "            med, path = find_med_backtrace_kb(pair[0], pair[1])\n",
    "            skip = False\n",
    "            for p in path:\n",
    "                if p[2] > 30:\n",
    "                    skip = True\n",
    "                if not trans_dict.get(str(p)):\n",
    "                    # Key not in dictionary\n",
    "                    skip = True\n",
    "            if skip:\n",
    "                continue\n",
    "            decoded_word = path2word_kb(pair[0], path)\n",
    "            str_path = [str(p) for p in path]\n",
    "            pws_indices = path2idx(str_path, trans_dict)\n",
    "            if (decoded_word != pair[1]):\n",
    "                print(\"Test failed on: {}\".format(pair))\n",
    "                print(\"Path chosen: {}\".format(path))\n",
    "                print(\"Decoded Password: {}\".format(decoded_word))\n",
    "            if (i % 50000 == 0):\n",
    "                print(\"Progress: processed {} pairs so far\".format(i))\n",
    "            csv_writer.writerow([\n",
    "                pair[0], pair[1], json.dumps(pws_indices)\n",
    "            ])\n",
    "    print(\"Dataset created in {} seconds on a total of {} password pairs\".format(time.perf_counter() - start, i))\n",
    "    print(\"New Dataset CSV file: trans_dataset.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Started building dataset...\n",
      "Dataset created in 3.17625931085513 seconds on a total of 3707 passwords pairs\n",
      "New Dataset CSV file: trans_dataset.csv\n"
     ]
    }
   ],
   "source": [
    "csv_fpath = './cleaned_user_pass_tr_50.csv'\n",
    "dict2idx_json = 'trans_dict_2idx.json'\n",
    "dict2path_json = 'trans_dict_2path.json'\n",
    "# csv2dataset_gen(csv_fpath)\n",
    "csv2dataset_dict_gen_no_json(csv_fpath, dict2idx_json)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2trans_dataset(dataset_csv, dict_json):\n",
    "    '''\n",
    "    This function reads and processes the new dataset from a csv file,\n",
    "    and parses the human-readable transition path into a dictionary indices transition path,\n",
    "    using an input dictionary file.\n",
    "    A sample is now a tuple in the form of:\n",
    "    [pass1, pass2, human-readable transition path, dictionary indices transition path]\n",
    "    Input parameters: path to the dataset csv file, path to the json dictionary file.\n",
    "    Output: list of tuples in the mentioned form.\n",
    "    '''\n",
    "    with open(dict_json, 'r') as f:\n",
    "        trans_dict = json.load(f)\n",
    "    samples = []  # (pass1, pass2, path, idx_path)\n",
    "    with open(dataset_csv) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for row in csv_reader:\n",
    "            if (len(row) != 3):\n",
    "                print(\"File format error!\")\n",
    "                break\n",
    "            pass_1 = row[0]\n",
    "            pass_2 = row[1]\n",
    "            pws_str = row[2]\n",
    "            pws_list = json.loads(pws_str)\n",
    "            pws_indices = path2idx(pws_list, trans_dict)\n",
    "            samples.append((pass_1, pass_2, pws_list, pws_indices))\n",
    "    return samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2trans_dataset_gen(dataset_csv, dict_json):\n",
    "    '''\n",
    "    This (generator) function reads and processes the new dataset from a csv file,\n",
    "    and parses the human-readable transition path into a dictionary indices transition path,\n",
    "    using an input dictionary file.\n",
    "    A sample is now a tuple in the form of:\n",
    "    [pass1, pass2, human-readable transition path, dictionary indices transition path]\n",
    "    The csv file is in the form:\n",
    "    [pass1, pass2, human-readable transition path]\n",
    "    Input parameters: path to the dataset csv file, path to the json dictionary file.\n",
    "    Output: yields a tuple of the mentioned form.\n",
    "    '''\n",
    "    with open(dict_json, 'r') as f:\n",
    "        trans_dict = json.load(f)\n",
    "    with open(dataset_csv) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for row in csv_reader:\n",
    "            if (len(row) != 3):\n",
    "                print(\"File format error!\")\n",
    "                break\n",
    "            pass_1, pass_2, pws_str = row\n",
    "            pws_list = json.loads(pws_str)\n",
    "            pws_indices = path2idx(pws_list, trans_dict)\n",
    "            yield (pass_1, pass_2, pws_list, pws_indices)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2trans_dataset_dict_gen(dataset_csv, dict2path_json):\n",
    "    '''\n",
    "    This (generator) function reads and processes the new dataset from a csv file,\n",
    "    and parses the dictionary-indices transition path into a human-readable path,\n",
    "    using an input dictionary file.\n",
    "    A sample is now a tuple in the form of:\n",
    "    [pass1, pass2, human-readable transition path, dictionary indices transition path]\n",
    "    The csv file is in the form:\n",
    "    [pass1, pass2, dictionary-indices transition path]\n",
    "    Input parameters: path to the dataset csv file, path to the json dictionary file.\n",
    "    Output: yields a tuple of the mentioned form.\n",
    "    '''\n",
    "    with open(dict2path_json, 'r') as f:\n",
    "        trans_dict = json.load(f)\n",
    "    with open(dataset_csv) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for row in csv_reader:\n",
    "            if (len(row) != 3):\n",
    "                print(\"File format error!\")\n",
    "                break\n",
    "            pass_1, pass_2, pws_indices = row\n",
    "            pws_list = idx2path(pws_indices, trans_dict)\n",
    "            yield (pass_1, pass_2, pws_list, pws_indices)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv2trans_dataset_dict_gen_no_json(dataset_csv, dict2path_json):\n",
    "    '''\n",
    "    This (generator) function reads and processes the new dataset from a csv file,\n",
    "    and parses the dictionary indices transition path into a human-readable path.\n",
    "    using an input dictionary file.\n",
    "    A sample is now a tuple in the form of:\n",
    "    [pass1, pass2, human-readable transition path, dictionary indices transition path]\n",
    "    The csv file is in the form:\n",
    "    [pass1, pass2, human-readable transition path]\n",
    "    Input parameters: path to the dataset csv file, path to the json dictionary file.\n",
    "    Output: yields a tuple of the mentiond form\n",
    "    '''\n",
    "    with open(dict2path_json, 'r') as f:\n",
    "        trans_dict = json.load(f)\n",
    "    with open(dataset_csv) as csv_file:\n",
    "        csv_reader = csv.reader(csv_file, delimiter=',')\n",
    "        for row in csv_reader:\n",
    "            if (len(row) != 3):\n",
    "                print(\"File format error!\")\n",
    "                break\n",
    "            pass_1, pass_2, pws_indices = row\n",
    "            pws_list = idx2path_no_json(pws_indices, trans_dict)\n",
    "            yield (pass_1, pass_2, pws_list, pws_indices)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "('mama20062010', '19101970', [\"('s', '1', 0)\", \"('s', '9', 1)\", \"('s', '1', 2)\", \"('d', None, 3)\", \"('d', None, 4)\", \"('s', '1', 6)\", \"('s', '9', 7)\", \"('s', '7', 8)\", \"('d', None, 10)\", \"('d', None, 11)\"], '[2666, 2915, 2668, 3, 4, 2672, 2921, 2860, 10, 11]')\n"
     ]
    }
   ],
   "source": [
    "# Usage example:\n",
    "\n",
    "dataset_csv = './trans_dataset.csv'\n",
    "dict_json = 'trans_dict_2path.json'\n",
    "# samples = csv2trans_dataset(dataset_csv, dict_json)\n",
    "\n",
    "# # Using an array in memory:\n",
    "# print(\"Password 1: {}, Password 2: {}\".format(samples[4][0], samples[4][1]))\n",
    "# print(\"Readable Transition Path: {}\".format(samples[4][2]))\n",
    "# print(\"Sequential Transition Path: {}\".format(samples[4][3]))\n",
    "\n",
    "# Using the generator:\n",
    "samples_generator = csv2trans_dataset_dict_gen_no_json(dataset_csv, dict_json)\n",
    "for i, sample in enumerate(samples_generator):\n",
    "    if (i % 50000 == 0):\n",
    "        print(sample)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Steps\n",
    "## When running on the server\n",
    "\n",
    "1. Generate the dictionaries with `generate_transition_dict()` (creates 2 dictionaries: path2idx and idx2path)\n",
    "2. Create the new dataset with `csv2dataset_dict_gen(csv_fpath, dict2idx_json)`\n",
    "3. To generate samples: `samples_generator = csv2trans_dataset_dict_gen(dataset_csv, dict2path_json)`\n",
    "4. Samples are in the form [pass1, pass2, human-readable transition path, dictionary-indices transition path]; take the fields you need for training/testing"
   ]
  },
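  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The steps above end with a 4-tuple per sample. The following sketch shows how a training loop might unpack one; the values are hard-coded for illustration (truncated from the sample printed earlier), not read from the csv:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: unpack one sample in the form yielded by csv2trans_dataset_dict_gen.\n",
    "# Hard-coded example values, truncated from the printed sample above.\n",
    "sample = (\n",
    "    'mama20062010',  # pass1\n",
    "    '19101970',  # pass2\n",
    "    [\"('s', '1', 0)\", \"('s', '9', 1)\"],  # human-readable transition path\n",
    "    [2666, 2915],  # dictionary-indices transition path\n",
    ")\n",
    "pass_1, pass_2, readable_path, idx_path = sample\n",
    "# For a sequence model, (pass1, idx_path) is typically all that is needed:\n",
    "training_example = (pass_1, idx_path)\n",
    "print(training_example)"
   ]
  },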
  {
   "cell_type": "code",
   "execution_count": 122,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Key Not Found\n"
     ]
    }
   ],
   "source": [
    "dict_json = 'trans_dict_2idx.json'\n",
    "with open(dict_json, 'r') as f:\n",
    "    trans_dict = json.load(f)\n",
    "p = ('s', '\\x0e', 7)\n",
    "if not trans_dict.get(str(p)):\n",
    "    # Key not in dictionary\n",
    "    print(\"Key Not Found\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "10"
      ]
     },
     "execution_count": 126,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(string.digits)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
