{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's now count how many deleterious and rare de novo variants each trio has. We'll start with GATK hiConf: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from scipy import stats\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# GATK high conf\n",
    "\n",
    "First, create the annotations. We can do all the grepping later:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "\n",
    "module load annovar\n",
    "\n",
    "cd /data/NCR_SBRB/simplex/gatk_refine\n",
    "suffix=hiConfDeNovo\n",
    "\n",
    "while read trio; do\n",
    "    convert2annovar.pl -format vcf4old ${trio}_${suffix}.vcf > ${trio}_${suffix}.avinput;\n",
    "    table_annovar.pl ${trio}_${suffix}.avinput $ANNOVAR_DATA/hg19 \\\n",
    "        -protocol refGene,dbnsfp30a,popfreq_max_20150413 -operation g,f,f \\\n",
    "        -build hg19 -nastring .\n",
    "done < /data/NCR_SBRB/simplex/trio_ids.txt"
   ]
  },
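  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, we can peek at one of the annotated tables to see which columns carry the dbNSFP deleteriousness predictions and the population frequency (a minimal sketch; the trio ID is just one example from trio_ids.txt):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# example trio; any ID from trio_ids.txt works here\n",
    "df = pd.read_table('/data/NCR_SBRB/simplex/gatk_refine/10033_trio1_hiConfDeNovo.avinput.hg19_multianno.txt')\n",
    "# dbNSFP contributes one *_pred column per prediction algorithm\n",
    "print([col for col in df.columns if '_pred' in col])\n",
    "# popfreq_max contributes a single PopFreqMax column ('.' when absent)\n",
    "print('PopFreqMax' in df.columns)"
   ]
  },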
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we have just big matrices that we can subset in whatever way we want. For now, let's count how many deleterious, rare variants they have:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 145,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "10033_trio1: 69 de novo SNVs, 9 rare (MAF<.01), 50 not in PopFreqMax, 1 deleterious, 68 not in dbNSFP\n",
      "10033_trio2: 99 de novo SNVs, 18 rare (MAF<.01), 65 not in PopFreqMax, 1 deleterious, 96 not in dbNSFP\n",
      "10042_trio1: 81 de novo SNVs, 17 rare (MAF<.01), 47 not in PopFreqMax, 5 deleterious, 74 not in dbNSFP\n",
      "10090_trio1: 89 de novo SNVs, 16 rare (MAF<.01), 43 not in PopFreqMax, 3 deleterious, 86 not in dbNSFP\n",
      "10090_trio2: 63 de novo SNVs, 11 rare (MAF<.01), 33 not in PopFreqMax, 0 deleterious, 63 not in dbNSFP\n",
      "10094_trio1: 80 de novo SNVs, 18 rare (MAF<.01), 41 not in PopFreqMax, 2 deleterious, 76 not in dbNSFP\n",
      "10094_trio2: 55 de novo SNVs, 13 rare (MAF<.01), 27 not in PopFreqMax, 2 deleterious, 51 not in dbNSFP\n",
      "10128_trio1: 54 de novo SNVs, 11 rare (MAF<.01), 33 not in PopFreqMax, 2 deleterious, 49 not in dbNSFP\n",
      "10128_trio2: 52 de novo SNVs, 7 rare (MAF<.01), 35 not in PopFreqMax, 1 deleterious, 51 not in dbNSFP\n",
      "10131_trio1: 48 de novo SNVs, 9 rare (MAF<.01), 23 not in PopFreqMax, 2 deleterious, 44 not in dbNSFP\n",
      "10131_trio2: 65 de novo SNVs, 10 rare (MAF<.01), 39 not in PopFreqMax, 1 deleterious, 62 not in dbNSFP\n",
      "10131_trio3: 39 de novo SNVs, 3 rare (MAF<.01), 29 not in PopFreqMax, 0 deleterious, 37 not in dbNSFP\n",
      "10131_trio4: 86 de novo SNVs, 22 rare (MAF<.01), 58 not in PopFreqMax, 6 deleterious, 67 not in dbNSFP\n",
      "10153_trio1: 70 de novo SNVs, 10 rare (MAF<.01), 47 not in PopFreqMax, 1 deleterious, 69 not in dbNSFP\n",
      "10153_trio2: 75 de novo SNVs, 22 rare (MAF<.01), 42 not in PopFreqMax, 1 deleterious, 72 not in dbNSFP\n",
      "10153_trio3: 69 de novo SNVs, 13 rare (MAF<.01), 37 not in PopFreqMax, 2 deleterious, 64 not in dbNSFP\n",
      "10164_trio1: 122 de novo SNVs, 18 rare (MAF<.01), 61 not in PopFreqMax, 2 deleterious, 114 not in dbNSFP\n",
      "10164_trio2: 103 de novo SNVs, 18 rare (MAF<.01), 50 not in PopFreqMax, 3 deleterious, 97 not in dbNSFP\n",
      "10173_trio1: 101 de novo SNVs, 22 rare (MAF<.01), 44 not in PopFreqMax, 1 deleterious, 97 not in dbNSFP\n",
      "10173_trio2: 112 de novo SNVs, 28 rare (MAF<.01), 53 not in PopFreqMax, 6 deleterious, 98 not in dbNSFP\n",
      "10178_trio1: 519 de novo SNVs, 59 rare (MAF<.01), 106 not in PopFreqMax, 5 deleterious, 511 not in dbNSFP\n",
      "10178_trio2: 354 de novo SNVs, 31 rare (MAF<.01), 78 not in PopFreqMax, 6 deleterious, 348 not in dbNSFP\n",
      "10182_trio1: 90 de novo SNVs, 12 rare (MAF<.01), 37 not in PopFreqMax, 0 deleterious, 89 not in dbNSFP\n",
      "10182_trio2: 77 de novo SNVs, 11 rare (MAF<.01), 37 not in PopFreqMax, 1 deleterious, 76 not in dbNSFP\n",
      "10182_trio3: 72 de novo SNVs, 14 rare (MAF<.01), 36 not in PopFreqMax, 1 deleterious, 68 not in dbNSFP\n",
      "10197_trio1: 416 de novo SNVs, 59 rare (MAF<.01), 77 not in PopFreqMax, 2 deleterious, 412 not in dbNSFP\n",
      "10197_trio2: 244 de novo SNVs, 32 rare (MAF<.01), 66 not in PopFreqMax, 1 deleterious, 239 not in dbNSFP\n",
      "10215_trio1: 201 de novo SNVs, 27 rare (MAF<.01), 63 not in PopFreqMax, 5 deleterious, 192 not in dbNSFP\n",
      "10215_trio2: 128 de novo SNVs, 19 rare (MAF<.01), 36 not in PopFreqMax, 4 deleterious, 124 not in dbNSFP\n",
      "10215_trio3: 147 de novo SNVs, 19 rare (MAF<.01), 48 not in PopFreqMax, 2 deleterious, 144 not in dbNSFP\n",
      "10215_trio4: 186 de novo SNVs, 27 rare (MAF<.01), 48 not in PopFreqMax, 2 deleterious, 182 not in dbNSFP\n",
      "10369_trio1: 948 de novo SNVs, 247 rare (MAF<.01), 478 not in PopFreqMax, 46 deleterious, 886 not in dbNSFP\n",
      "10369_trio2: 996 de novo SNVs, 284 rare (MAF<.01), 474 not in PopFreqMax, 43 deleterious, 935 not in dbNSFP\n",
      "10406_trio1: 119 de novo SNVs, 31 rare (MAF<.01), 58 not in PopFreqMax, 2 deleterious, 114 not in dbNSFP\n",
      "10406_trio2: 81 de novo SNVs, 20 rare (MAF<.01), 41 not in PopFreqMax, 2 deleterious, 78 not in dbNSFP\n",
      "10406_trio3: 103 de novo SNVs, 22 rare (MAF<.01), 57 not in PopFreqMax, 1 deleterious, 101 not in dbNSFP\n",
      "10448_trio1: 69 de novo SNVs, 14 rare (MAF<.01), 40 not in PopFreqMax, 0 deleterious, 69 not in dbNSFP\n",
      "10448_trio2: 70 de novo SNVs, 10 rare (MAF<.01), 40 not in PopFreqMax, 0 deleterious, 67 not in dbNSFP\n",
      "10459_trio2: 93 de novo SNVs, 22 rare (MAF<.01), 51 not in PopFreqMax, 3 deleterious, 88 not in dbNSFP\n",
      "1892_trio1: 91 de novo SNVs, 23 rare (MAF<.01), 40 not in PopFreqMax, 2 deleterious, 88 not in dbNSFP\n",
      "1892_trio2: 61 de novo SNVs, 8 rare (MAF<.01), 35 not in PopFreqMax, 2 deleterious, 58 not in dbNSFP\n",
      "1893_trio1: 95 de novo SNVs, 19 rare (MAF<.01), 49 not in PopFreqMax, 2 deleterious, 92 not in dbNSFP\n",
      "1893_trio2: 105 de novo SNVs, 21 rare (MAF<.01), 52 not in PopFreqMax, 2 deleterious, 101 not in dbNSFP\n",
      "1895_trio1: 82 de novo SNVs, 15 rare (MAF<.01), 46 not in PopFreqMax, 3 deleterious, 78 not in dbNSFP\n",
      "1895_trio2: 74 de novo SNVs, 20 rare (MAF<.01), 47 not in PopFreqMax, 0 deleterious, 70 not in dbNSFP\n",
      "1976_trio1: 91 de novo SNVs, 13 rare (MAF<.01), 58 not in PopFreqMax, 2 deleterious, 89 not in dbNSFP\n",
      "1976_trio2: 96 de novo SNVs, 16 rare (MAF<.01), 47 not in PopFreqMax, 2 deleterious, 92 not in dbNSFP\n",
      "1976_trio3: 69 de novo SNVs, 8 rare (MAF<.01), 45 not in PopFreqMax, 2 deleterious, 66 not in dbNSFP\n",
      "855_trio1: 79 de novo SNVs, 13 rare (MAF<.01), 51 not in PopFreqMax, 5 deleterious, 71 not in dbNSFP\n",
      "855_trio2: 104 de novo SNVs, 24 rare (MAF<.01), 55 not in PopFreqMax, 2 deleterious, 101 not in dbNSFP\n"
     ]
    }
   ],
   "source": [
    "fid = open('/data/NCR_SBRB/simplex/trio_ids.txt', 'r')\n",
    "trios = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "var_count = {}\n",
    "for trio in trios:\n",
    "    df = pd.read_table('/data/NCR_SBRB/simplex/gatk_refine/%s_hiConfDeNovo.avinput.hg19_multianno.txt' % trio)\n",
    "    pred_cols = [i for i, col in enumerate(df.columns) if col.find('_pred') > 0]\n",
    "\n",
    "    mask = df[pred_cols] == 'D'\n",
    "    del_idx = mask.any(axis=1)\n",
    "\n",
    "    df.loc[df['PopFreqMax']=='.', 'PopFreqMax'] = np.nan\n",
    "    rare_idx = (df['PopFreqMax'].astype(float) < .01) | pd.isnull(df['PopFreqMax'])\n",
    "\n",
    "    keep_me = del_idx & rare_idx\n",
    "    var_count[trio] = np.sum(keep_me)\n",
    "    print '%s: %d de novo SNVs, ' % (trio, df.shape[0]) + \\\n",
    "        '%d rare (MAF<.01), ' % np.sum((df['PopFreqMax'].astype(float) < .01)) + \\\n",
    "        '%d not in PopFreqMax, ' % np.sum(pd.isnull(df['PopFreqMax'])) + \\\n",
    "        '%d deleterious, ' % np.sum(del_idx) + \\\n",
    "        '%d not in dbNSFP' % np.sum(np.all(df[pred_cols] == '.', axis=1))"
   ]
  },
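  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the same rare/deleterious filter is reused several times below, here is the logic from the cell above wrapped in a helper function (just a sketch of a refactoring, not a different filter):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "def count_rare_deleterious(fname):\n",
    "    # count variants that are rare (MAF < .01, or absent from PopFreqMax)\n",
    "    # AND called deleterious ('D') by at least one dbNSFP algorithm\n",
    "    df = pd.read_table(fname)\n",
    "    pred_cols = [col for col in df.columns if '_pred' in col]\n",
    "    del_idx = (df[pred_cols] == 'D').any(axis=1)\n",
    "    df.loc[df['PopFreqMax'] == '.', 'PopFreqMax'] = np.nan\n",
    "    rare_idx = (df['PopFreqMax'].astype(float) < .01) | pd.isnull(df['PopFreqMax'])\n",
    "    return np.sum(del_idx & rare_idx)"
   ]
  },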
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is for Philip's presentation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 157,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>de novo ADHD</th>\n",
       "      <th>rare ADHD</th>\n",
       "      <th>delet ADHD</th>\n",
       "      <th>de novo NV</th>\n",
       "      <th>rare NV</th>\n",
       "      <th>delet NV</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>10033</th>\n",
       "      <td>69</td>\n",
       "      <td>59</td>\n",
       "      <td>1</td>\n",
       "      <td>99</td>\n",
       "      <td>83</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10090</th>\n",
       "      <td>89</td>\n",
       "      <td>59</td>\n",
       "      <td>2</td>\n",
       "      <td>63</td>\n",
       "      <td>44</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10094</th>\n",
       "      <td>80</td>\n",
       "      <td>59</td>\n",
       "      <td>2</td>\n",
       "      <td>55</td>\n",
       "      <td>40</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10128</th>\n",
       "      <td>54</td>\n",
       "      <td>44</td>\n",
       "      <td>2</td>\n",
       "      <td>52</td>\n",
       "      <td>42</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10131</th>\n",
       "      <td>48</td>\n",
       "      <td>32</td>\n",
       "      <td>2</td>\n",
       "      <td>65</td>\n",
       "      <td>49</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10153</th>\n",
       "      <td>70</td>\n",
       "      <td>57</td>\n",
       "      <td>0</td>\n",
       "      <td>75</td>\n",
       "      <td>64</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10164</th>\n",
       "      <td>122</td>\n",
       "      <td>79</td>\n",
       "      <td>0</td>\n",
       "      <td>103</td>\n",
       "      <td>68</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10173</th>\n",
       "      <td>101</td>\n",
       "      <td>66</td>\n",
       "      <td>1</td>\n",
       "      <td>112</td>\n",
       "      <td>81</td>\n",
       "      <td>6</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10178</th>\n",
       "      <td>519</td>\n",
       "      <td>165</td>\n",
       "      <td>5</td>\n",
       "      <td>354</td>\n",
       "      <td>109</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10182</th>\n",
       "      <td>90</td>\n",
       "      <td>49</td>\n",
       "      <td>0</td>\n",
       "      <td>77</td>\n",
       "      <td>48</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10197</th>\n",
       "      <td>416</td>\n",
       "      <td>136</td>\n",
       "      <td>1</td>\n",
       "      <td>244</td>\n",
       "      <td>98</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10215</th>\n",
       "      <td>201</td>\n",
       "      <td>90</td>\n",
       "      <td>5</td>\n",
       "      <td>128</td>\n",
       "      <td>55</td>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10369</th>\n",
       "      <td>948</td>\n",
       "      <td>725</td>\n",
       "      <td>36</td>\n",
       "      <td>996</td>\n",
       "      <td>758</td>\n",
       "      <td>31</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10406</th>\n",
       "      <td>119</td>\n",
       "      <td>89</td>\n",
       "      <td>2</td>\n",
       "      <td>81</td>\n",
       "      <td>61</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10448</th>\n",
       "      <td>69</td>\n",
       "      <td>54</td>\n",
       "      <td>0</td>\n",
       "      <td>70</td>\n",
       "      <td>50</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1892</th>\n",
       "      <td>91</td>\n",
       "      <td>63</td>\n",
       "      <td>1</td>\n",
       "      <td>61</td>\n",
       "      <td>43</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1893</th>\n",
       "      <td>95</td>\n",
       "      <td>68</td>\n",
       "      <td>0</td>\n",
       "      <td>105</td>\n",
       "      <td>73</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1895</th>\n",
       "      <td>82</td>\n",
       "      <td>61</td>\n",
       "      <td>3</td>\n",
       "      <td>74</td>\n",
       "      <td>67</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1976</th>\n",
       "      <td>91</td>\n",
       "      <td>71</td>\n",
       "      <td>1</td>\n",
       "      <td>96</td>\n",
       "      <td>63</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>855</th>\n",
       "      <td>79</td>\n",
       "      <td>64</td>\n",
       "      <td>4</td>\n",
       "      <td>104</td>\n",
       "      <td>79</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "       de novo ADHD  rare ADHD  delet ADHD  de novo NV  rare NV  delet NV\n",
       "10033            69         59           1          99       83         1\n",
       "10090            89         59           2          63       44         0\n",
       "10094            80         59           2          55       40         0\n",
       "10128            54         44           2          52       42         0\n",
       "10131            48         32           2          65       49         0\n",
       "10153            70         57           0          75       64         1\n",
       "10164           122         79           0         103       68         3\n",
       "10173           101         66           1         112       81         6\n",
       "10178           519        165           5         354      109         2\n",
       "10182            90         49           0          77       48         1\n",
       "10197           416        136           1         244       98         0\n",
       "10215           201         90           5         128       55         4\n",
       "10369           948        725          36         996      758        31\n",
       "10406           119         89           2          81       61         1\n",
       "10448            69         54           0          70       50         0\n",
       "1892             91         63           1          61       43         2\n",
       "1893             95         68           0         105       73         2\n",
       "1895             82         61           3          74       67         0\n",
       "1976             91         71           1          96       63         2\n",
       "855              79         64           4         104       79         2"
      ]
     },
     "execution_count": 157,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "fid = open('/data/NCR_SBRB/simplex/famids.txt', 'r')\n",
    "fams = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "rows = []\n",
    "data = []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        df = pd.read_table('/data/NCR_SBRB/simplex/gatk_refine/%s_trio1_hiConfDeNovo.avinput.hg19_multianno.txt' % fam)\n",
    "        pred_cols = [i for i, col in enumerate(df.columns) if col.find('_pred') > 0]\n",
    "\n",
    "        mask = df[pred_cols] == 'D'\n",
    "        del_idx = mask.any(axis=1)\n",
    "\n",
    "        df.loc[df['PopFreqMax']=='.', 'PopFreqMax'] = np.nan\n",
    "        rare_idx = (df['PopFreqMax'].astype(float) < .01) | pd.isnull(df['PopFreqMax'])\n",
    "\n",
    "        keep_me = del_idx & rare_idx\n",
    "        row = [df.shape[0],\n",
    "               np.sum((df['PopFreqMax'].astype(float) < .01)) + np.sum(pd.isnull(df['PopFreqMax'])),\n",
    "               np.sum(keep_me)]\n",
    "        \n",
    "        df = pd.read_table('/data/NCR_SBRB/simplex/gatk_refine/%s_trio2_hiConfDeNovo.avinput.hg19_multianno.txt' % fam)\n",
    "        pred_cols = [i for i, col in enumerate(df.columns) if col.find('_pred') > 0]\n",
    "\n",
    "        mask = df[pred_cols] == 'D'\n",
    "        del_idx = mask.any(axis=1)\n",
    "\n",
    "        df.loc[df['PopFreqMax']=='.', 'PopFreqMax'] = np.nan\n",
    "        rare_idx = (df['PopFreqMax'].astype(float) < .01) | pd.isnull(df['PopFreqMax'])\n",
    "\n",
    "        keep_me = del_idx & rare_idx\n",
    "        row += [df.shape[0],\n",
    "               np.sum((df['PopFreqMax'].astype(float) < .01)) + np.sum(pd.isnull(df['PopFreqMax'])),\n",
    "               np.sum(keep_me)]\n",
    "        \n",
    "        data.append(row)\n",
    "        rows.append(fam)\n",
    "df = pd.DataFrame(data, index=rows, columns=['de novo ADHD', 'rare ADHD',\n",
    "                                             'delet ADHD', 'de novo NV',\n",
    "                                             'rare NV', 'delet NV'])\n",
    "df.to_csv('/home/sudregp/data/tmp/rare_delet_counts.csv')\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's summarize the results above per family, where the first one is always the affected trio:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 117,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['10033', '1', '1']\n",
      "['10042', '5']\n",
      "['10090', '2', '0']\n",
      "['10094', '2', '0']\n",
      "['10128', '2', '0']\n",
      "['10131', '2', '0', '0', '6']\n",
      "['10153', '0', '1', '1']\n",
      "['10164', '0', '3']\n",
      "['10173', '1', '6']\n",
      "['10178', '5', '2']\n",
      "['10182', '0', '1', '1']\n",
      "['10197', '1', '0']\n",
      "['10215', '5', '4', '1', '2']\n",
      "['10369', '36', '31']\n",
      "['10406', '2', '1', '1']\n",
      "['10448', '0', '0']\n",
      "['1892', '1', '2']\n",
      "['1893', '0', '2']\n",
      "['1895', '3', '0']\n",
      "['1976', '1', '2', '2']\n",
      "['855', '4', '2']\n"
     ]
    }
   ],
   "source": [
    "fid = open('/data/NCR_SBRB/simplex/famids.txt', 'r')\n",
    "fams = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "for fam in fams:\n",
    "    keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "    keys.sort()\n",
    "    f_str = [fam] + [str(var_count[k]) for k in keys]\n",
    "    print f_str"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "8 families where unaffected has more, 2 ties, 10 where affected has more..."
   ]
  },
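  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick way to gauge whether that 10-vs-8 split could be chance is a sign test on the win counts, dropping the ties (a rough sketch; the paired tests below are more informative because they use the actual magnitudes):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy import stats\n",
    "\n",
    "# 10 families where the affected trio has more, 8 where the unaffected does;\n",
    "# the 2 ties are dropped, as in a standard sign test\n",
    "print(stats.binom_test(10, 10 + 8, 0.5))"
   ]
  },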
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's run some stats. First, assuming all pairs are unrelated, then just picking the best pair. We do this parametric and non-parametric t-tests:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Only one pair per family:\n",
      "Wilcoxon p = 0.24\n",
      "Paired t-test p = 0.35\n",
      "All pairs:\n",
      "Wilcoxon p = 0.21\n",
      "Paired t-test p = 0.31\n"
     ]
    }
   ],
   "source": [
    "print 'Only one pair per family:'\n",
    "x, y = [], []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "        keys.sort()\n",
    "        x.append(var_count[keys[0]])\n",
    "        y.append(var_count[keys[1]])\n",
    "stat, pval = stats.wilcoxon(x, y)\n",
    "print 'Wilcoxon p = %.2f' % pval\n",
    "stat, pval = stats.ttest_rel(x, y)\n",
    "print 'Paired t-test p = %.2f' % pval\n",
    "\n",
    "print 'All pairs:'\n",
    "x, y = [], []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "        keys.sort()\n",
    "        for k in range(1, len(keys)):\n",
    "            x.append(var_count[keys[0]])\n",
    "            y.append(var_count[keys[k]])\n",
    "stat, pval = stats.wilcoxon(x, y)\n",
    "print 'Wilcoxon p = %.2f' % pval\n",
    "stat, pval = stats.ttest_rel(x, y)\n",
    "print 'Paired t-test p = %.2f' % pval"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# GATK all"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "\n",
    "module load annovar\n",
    "\n",
    "cd /data/NCR_SBRB/simplex/gatk_refine\n",
    "suffix=allDeNovo\n",
    "\n",
    "while read trio; do\n",
    "    convert2annovar.pl -format vcf4old ${trio}_${suffix}.vcf > ${trio}_${suffix}.avinput;\n",
    "    table_annovar.pl ${trio}_${suffix}.avinput $ANNOVAR_DATA/hg19 \\\n",
    "        -protocol refGene,dbnsfp30a,popfreq_max_20150413 -operation g,f,f \\\n",
    "        -build hg19 -nastring .\n",
    "done < /data/NCR_SBRB/simplex/trio_ids.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['10033', '3', '5']\n",
      "['10042', '16']\n",
      "['10090', '8', '1']\n",
      "['10094', '4', '6']\n",
      "['10128', '3', '0']\n",
      "['10131', '2', '2', '5', '10']\n",
      "['10153', '5', '5', '6']\n",
      "['10164', '10', '9']\n",
      "['10173', '6', '12']\n",
      "['10178', '11', '2']\n",
      "['10182', '10', '4', '4']\n",
      "['10197', '6', '5']\n",
      "['10215', '12', '10', '7', '6']\n",
      "['10369', '87', '83']\n",
      "['10406', '7', '4', '5']\n",
      "['10448', '3', '7']\n",
      "['1892', '5', '11']\n",
      "['1893', '1', '7']\n",
      "['1895', '7', '4']\n",
      "['1976', '7', '3', '4']\n",
      "['855', '6', '3']\n"
     ]
    }
   ],
   "source": [
    "fid = open('/data/NCR_SBRB/simplex/trio_ids.txt', 'r')\n",
    "trios = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "var_count = {}\n",
    "for trio in trios:\n",
    "    df = pd.read_table('/data/NCR_SBRB/simplex/gatk_refine/%s_allDeNovo.avinput.hg19_multianno.txt' % trio)\n",
    "    pred_cols = [i for i, col in enumerate(df.columns) if col.find('_pred') > 0]\n",
    "\n",
    "    mask = df[pred_cols] == 'D'\n",
    "    del_idx = mask.any(axis=1)\n",
    "\n",
    "    df.loc[df['PopFreqMax']=='.', 'PopFreqMax'] = np.nan\n",
    "    rare_idx = (df['PopFreqMax'].astype(float) < .01) | pd.isnull(df['PopFreqMax'])\n",
    "\n",
    "    keep_me = del_idx & rare_idx\n",
    "    var_count[trio] = np.sum(keep_me)\n",
    "\n",
    "fid = open('/data/NCR_SBRB/simplex/famids.txt', 'r')\n",
    "fams = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "for fam in fams:\n",
    "    keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "    keys.sort()\n",
    "    f_str = [fam] + [str(var_count[k]) for k in keys]\n",
    "    print f_str"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "8 families where unaffected has more, 12 where affected has more..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 124,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Only one pair per family:\n",
      "Wilcoxon p = 0.31\n",
      "Paired t-test p = 0.31\n",
      "All pairs:\n",
      "Wilcoxon p = 0.20\n",
      "Paired t-test p = 0.21\n"
     ]
    }
   ],
   "source": [
    "print 'Only one pair per family:'\n",
    "x, y = [], []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "        keys.sort()\n",
    "        x.append(var_count[keys[0]])\n",
    "        y.append(var_count[keys[1]])\n",
    "stat, pval = stats.wilcoxon(x, y)\n",
    "print 'Wilcoxon p = %.2f' % pval\n",
    "stat, pval = stats.ttest_rel(x, y)\n",
    "print 'Paired t-test p = %.2f' % pval\n",
    "\n",
    "print 'All pairs:'\n",
    "x, y = [], []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "        keys.sort()\n",
    "        for k in range(1, len(keys)):\n",
    "            x.append(var_count[keys[0]])\n",
    "            y.append(var_count[keys[k]])\n",
    "stat, pval = stats.wilcoxon(x, y)\n",
    "print 'Wilcoxon p = %.2f' % pval\n",
    "stat, pval = stats.ttest_rel(x, y)\n",
    "print 'Paired t-test p = %.2f' % pval"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Triodenovo"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "\n",
    "module load annovar\n",
    "\n",
    "cd /data/NCR_SBRB/simplex/triodenovo\n",
    "suffix=denovo_v2\n",
    "\n",
    "while read trio; do\n",
    "    convert2annovar.pl -format vcf4old ${trio}_${suffix}.vcf > ${trio}_${suffix}.avinput;\n",
    "    table_annovar.pl ${trio}_${suffix}.avinput $ANNOVAR_DATA/hg19 \\\n",
    "        -protocol refGene,dbnsfp30a,popfreq_max_20150413 -operation g,f,f \\\n",
    "        -build hg19 -nastring .\n",
    "done < /data/NCR_SBRB/simplex/trio_ids.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 125,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['10033', '16', '33']\n",
      "['10042', '20']\n",
      "['10090', '38', '24']\n",
      "['10094', '8', '10']\n",
      "['10128', '17', '20']\n",
      "['10131', '18', '13', '7', '32']\n",
      "['10153', '21', '13', '15']\n",
      "['10164', '25', '24']\n",
      "['10173', '18', '18']\n",
      "['10178', '26', '23']\n",
      "['10182', '18', '17', '19']\n",
      "['10197', '7', '13']\n",
      "['10215', '19', '24', '9', '11']\n",
      "['10369', '17', '9']\n",
      "['10406', '16', '23', '23']\n",
      "['10448', '12', '13']\n",
      "['1892', '10', '17']\n",
      "['1893', '33', '35']\n",
      "['1895', '33', '20']\n",
      "['1976', '29', '24', '10']\n",
      "['855', '24', '14']\n"
     ]
    }
   ],
   "source": [
    "fid = open('/data/NCR_SBRB/simplex/trio_ids.txt', 'r')\n",
    "trios = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "var_count = {}\n",
    "for trio in trios:\n",
    "    df = pd.read_table('/data/NCR_SBRB/simplex/triodenovo/%s_denovo_v2.avinput.hg19_multianno.txt' % trio)\n",
    "    pred_cols = [i for i, col in enumerate(df.columns) if col.find('_pred') > 0]\n",
    "\n",
    "    mask = df[pred_cols] == 'D'\n",
    "    del_idx = mask.any(axis=1)\n",
    "\n",
    "    df.loc[df['PopFreqMax']=='.', 'PopFreqMax'] = np.nan\n",
    "    rare_idx = (df['PopFreqMax'].astype(float) < .01) | pd.isnull(df['PopFreqMax'])\n",
    "\n",
    "    keep_me = del_idx & rare_idx\n",
    "    var_count[trio] = np.sum(keep_me)\n",
    "\n",
    "fid = open('/data/NCR_SBRB/simplex/famids.txt', 'r')\n",
    "fams = [t.rstrip() for t in fid]\n",
    "fid.close()\n",
    "\n",
    "for fam in fams:\n",
    "    keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "    keys.sort()\n",
    "    f_str = [fam] + [str(var_count[k]) for k in keys]\n",
    "    print f_str"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "9 families where unaffected has more, 1 tie, 10 where affected has more..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 126,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Only one pair per family:\n",
      "Wilcoxon p = 0.59\n",
      "Paired t-test p = 0.60\n",
      "All pairs:\n",
      "Wilcoxon p = 0.25\n",
      "Paired t-test p = 0.28\n"
     ]
    }
   ],
   "source": [
    "print 'Only one pair per family:'\n",
    "x, y = [], []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "        keys.sort()\n",
    "        x.append(var_count[keys[0]])\n",
    "        y.append(var_count[keys[1]])\n",
    "stat, pval = stats.wilcoxon(x, y)\n",
    "print 'Wilcoxon p = %.2f' % pval\n",
    "stat, pval = stats.ttest_rel(x, y)\n",
    "print 'Paired t-test p = %.2f' % pval\n",
    "\n",
    "print 'All pairs:'\n",
    "x, y = [], []\n",
    "for fam in fams:\n",
    "    if fam != '10042':\n",
    "        keys = [k for k in var_count.iterkeys() if k.find(fam)==0]\n",
    "        keys.sort()\n",
    "        for k in range(1, len(keys)):\n",
    "            x.append(var_count[keys[0]])\n",
    "            y.append(var_count[keys[k]])\n",
    "stat, pval = stats.wilcoxon(x, y)\n",
    "print 'Wilcoxon p = %.2f' % pval\n",
    "stat, pval = stats.ttest_rel(x, y)\n",
    "print 'Paired t-test p = %.2f' % pval"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Whole genome databases\n",
    "\n",
    "I'm getting a lot of misses, but could it be because I'm only looking at exome databases? It would make sense, as I have WES data, but what happens if I look at WGS databases as well?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "table_annovar.pl tmp1.avinput $ANNOVAR_DATA/hg19 -protocol refGene,gerp++,cadd,dann,fathmm,eigen,gwava,dbscsnv11,spidex,clinvar_20160302,avsnp142 -operation g,f,f,f,f,f,f,f,f,f,f  -build hg19 -nastring ."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 143,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Chr</th>\n",
       "      <th>Start</th>\n",
       "      <th>End</th>\n",
       "      <th>Ref</th>\n",
       "      <th>Alt</th>\n",
       "      <th>Func.refGene</th>\n",
       "      <th>Gene.refGene</th>\n",
       "      <th>GeneDetail.refGene</th>\n",
       "      <th>ExonicFunc.refGene</th>\n",
       "      <th>AAChange.refGene</th>\n",
       "      <th>...</th>\n",
       "      <th>dbscSNV_ADA_SCORE</th>\n",
       "      <th>dbscSNV_RF_SCORE</th>\n",
       "      <th>dpsi_max_tissue</th>\n",
       "      <th>dpsi_zscore</th>\n",
       "      <th>CLINSIG</th>\n",
       "      <th>CLNDBN</th>\n",
       "      <th>CLNACC</th>\n",
       "      <th>CLNDSDB</th>\n",
       "      <th>CLNDSDBID</th>\n",
       "      <th>avsnp142</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>chr1</td>\n",
       "      <td>16954434</td>\n",
       "      <td>16954434</td>\n",
       "      <td>C</td>\n",
       "      <td>A</td>\n",
       "      <td>ncRNA_intronic</td>\n",
       "      <td>CROCCP2</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>...</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>rs186864069</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>chr1</td>\n",
       "      <td>92065492</td>\n",
       "      <td>92065492</td>\n",
       "      <td>T</td>\n",
       "      <td>C</td>\n",
       "      <td>intergenic</td>\n",
       "      <td>CDC7;TGFBR3</td>\n",
       "      <td>dist=74171;dist=80408</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>...</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>rs184420530</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>chr1</td>\n",
       "      <td>142803199</td>\n",
       "      <td>142803199</td>\n",
       "      <td>G</td>\n",
       "      <td>C</td>\n",
       "      <td>intergenic</td>\n",
       "      <td>ANKRD20A12P;LOC102723769</td>\n",
       "      <td>dist=89594;dist=331404</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>...</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>chr1</td>\n",
       "      <td>142803606</td>\n",
       "      <td>142803606</td>\n",
       "      <td>-</td>\n",
       "      <td>ATTAATTAATTAATTAAT</td>\n",
       "      <td>intergenic</td>\n",
       "      <td>ANKRD20A12P;LOC102723769</td>\n",
       "      <td>dist=90001;dist=330997</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>...</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>chr1</td>\n",
       "      <td>142810277</td>\n",
       "      <td>142810277</td>\n",
       "      <td>G</td>\n",
       "      <td>C</td>\n",
       "      <td>intergenic</td>\n",
       "      <td>ANKRD20A12P;LOC102723769</td>\n",
       "      <td>dist=96672;dist=324326</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>...</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "      <td>.</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 30 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "    Chr      Start        End Ref                 Alt    Func.refGene  \\\n",
       "0  chr1   16954434   16954434   C                   A  ncRNA_intronic   \n",
       "1  chr1   92065492   92065492   T                   C      intergenic   \n",
       "2  chr1  142803199  142803199   G                   C      intergenic   \n",
       "3  chr1  142803606  142803606   -  ATTAATTAATTAATTAAT      intergenic   \n",
       "4  chr1  142810277  142810277   G                   C      intergenic   \n",
       "\n",
       "               Gene.refGene      GeneDetail.refGene ExonicFunc.refGene  \\\n",
       "0                   CROCCP2                       .                  .   \n",
       "1               CDC7;TGFBR3   dist=74171;dist=80408                  .   \n",
       "2  ANKRD20A12P;LOC102723769  dist=89594;dist=331404                  .   \n",
       "3  ANKRD20A12P;LOC102723769  dist=90001;dist=330997                  .   \n",
       "4  ANKRD20A12P;LOC102723769  dist=96672;dist=324326                  .   \n",
       "\n",
       "  AAChange.refGene     ...      dbscSNV_ADA_SCORE dbscSNV_RF_SCORE  \\\n",
       "0                .     ...                      .                .   \n",
       "1                .     ...                      .                .   \n",
       "2                .     ...                      .                .   \n",
       "3                .     ...                      .                .   \n",
       "4                .     ...                      .                .   \n",
       "\n",
       "  dpsi_max_tissue dpsi_zscore CLINSIG CLNDBN CLNACC CLNDSDB CLNDSDBID  \\\n",
       "0               .           .       .      .      .       .         .   \n",
       "1               .           .       .      .      .       .         .   \n",
       "2               .           .       .      .      .       .         .   \n",
       "3               .           .       .      .      .       .         .   \n",
       "4               .           .       .      .      .       .         .   \n",
       "\n",
       "      avsnp142  \n",
       "0  rs186864069  \n",
       "1  rs184420530  \n",
       "2            .  \n",
       "3            .  \n",
       "4            .  \n",
       "\n",
       "[5 rows x 30 columns]"
      ]
     },
     "execution_count": 143,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = pd.read_table('/data/NCR_SBRB/simplex/gatk_refine/tmp1.avinput.hg19_multianno.txt')\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, the issue is that most of these annotations don't provide categorical predictions, so I'd need to aggregate these variables per trio (maybe one summary score per annotation) and run some sort of t-test on them. Similar to what we did before, except that instead of counting Ds we would use the average/median value of each annotation... to be continued."
   ]
  }
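  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the aggregation idea above (assuming the multianno table is loaded as `df` and contains a numeric score column; `CADD_phred` below is a hypothetical column name, and ANNOVAR marks missing values with `.`), one could coerce the placeholders to NaN and summarize the annotation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# hypothetical sketch: 'CADD_phred' is an assumed column name, and\n",
    "# ANNOVAR's '.' placeholders become NaN under errors='coerce'\n",
    "scores = pd.to_numeric(df['CADD_phred'], errors='coerce')\n",
    "print 'mean score = %.2f over %d non-missing variants' % (scores.mean(), scores.notnull().sum())"
   ]
  }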
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:py2.7.10]",
   "language": "python",
   "name": "conda-env-py2.7.10-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.10"
  },
  "nav_menu": {},
  "toc": {
   "navigate_menu": true,
   "number_sections": true,
   "sideBar": true,
   "threshold": 6,
   "toc_cell": false,
   "toc_section_display": "block",
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
