{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Extracting the water level"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Downloading the ICESat-2 Data on Tonle Sap Lake"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook is created by Myung Sik Cho. It modified the notebook used in 2020 ICESat 2 Hackweek by Jessica Sheick and Amy Steiker."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Import the package: icepyx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from icepyx import icesat2data as ipd\n",
    "\n",
    "import os\n",
    "import shutil\n",
    "from pprint import pprint\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are several key steps for accessing data from the NSIDC API:\n",
    "   1. Define your parameters (spatial, temporal, dataset, etc.)\n",
    "   2. Query the NSIDC API to find out more information about the dataset\n",
    "   3. Log in to NASA Earthdata\n",
    "   4. Define additional parameters (e.g. subsetting/customization options)\n",
    "   5. Order your data\n",
    "   6. Download your data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fast donwload"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#### Tonlesap Lake (TSL)\n",
    "atl_dt = 'ATL13'\n",
    "sp_ex = [103.643, 12.375, 104.667, 13.287]\n",
    "date_r = ['2018-06-01','2020-06-29']\n",
    "\n",
    "tsl = ipd.Icesat2Data(atl_dt,sp_ex,date_r)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## account info\n",
    "earthdata_uid = 'choms516'\n",
    "email = 'whaudtlr516@gmail.com'\n",
    "\n",
    "## log-in\n",
    "tsl.earthdata_login(earthdata_uid,email)\n",
    "## Then, we can type the password"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": true,
    "jupyter": {
     "outputs_hidden": true
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Total number of data order requests is  7  for  63  granules.\n",
      "Data request  1  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706122\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['177190111:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177189775:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177189205:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177189199:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT3.112S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n",
      "Data request  2  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706123\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['177490022:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177493296:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177691962:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177714420:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT4.195S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n",
      "Data request  3  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706124\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['177731284:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177731269:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177733988:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177808661:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT2.739S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n",
      "Data request  4  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706125\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['177895244:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177730728:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177895373:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177730856:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177895641:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT2.513S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n",
      "Data request  5  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706126\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['177956068:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177956396:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '177956333:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '179096730:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '179206994:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT2.642S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n",
      "Data request  6  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706127\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['179097233:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " '179220756:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT3.554S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n",
      "Data request  7  of  7  is submitting to NSIDC\n",
      "order ID:  5000000706128\n",
      "Initial status of your order request at NSIDC is:  processing\n",
      "Your order status is still  processing  at NSIDC. Please continue waiting... this may take a few moments.\n",
      "Your order is:  complete_with_errors\n",
      "NSIDC provided these error messages:\n",
      "['180696698:NoMatchingData - No data found that matched subset constraints. '\n",
      " 'Exit code 3.',\n",
      " 'PT2.898S',\n",
      " 'ICESAT2']\n",
      "Your order is: complete_with_errors\n"
     ]
    }
   ],
   "source": [
    "tsl.order_vars.append(var_list = ['ht_water_surf','segment_lat','segment_lon','ht_ortho',\n",
    "                                  'stdev_water_surf','water_depth','err_ht_water_surf'])\n",
    "\n",
    "tsl.subsetparams(Coverage=tsl.order_vars.wanted)\n",
    "\n",
    "tsl.order_granules(Coverage = tsl.order_vars.wanted,email=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Beginning download of zipped output...\n",
      "Data request 5000000706106 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706107 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706108 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706109 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706110 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706111 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706112 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706113 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706114 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706115 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706116 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706117 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706118 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706119 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706122 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706123 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706124 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706125 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706126 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706127 of  21  order(s) is downloaded.\n",
      "Beginning download of zipped output...\n",
      "Data request 5000000706128 of  21  order(s) is downloaded.\n",
      "Download complete\n"
     ]
    }
   ],
   "source": [
    "path = './download2'\n",
    "\n",
    "tsl.download_granules(path,Coverage=tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Help on method download_granules in module icepyx.core.icesat2data:\n",
      "\n",
      "download_granules(path, verbose=False, subset=True, restart=False, **kwargs) method of icepyx.core.icesat2data.Icesat2Data instance\n",
      "    Downloads the data ordered using order_granules.\n",
      "    \n",
      "    Parameters\n",
      "    ----------\n",
      "    path : string\n",
      "        String with complete path to desired download location.\n",
      "    verbose : boolean, default False\n",
      "        Print out all feedback available from the order process.\n",
      "        Progress information is automatically printed regardless of the value of verbose.\n",
      "    subset : boolean, default True\n",
      "        Apply subsetting to the data order from the NSIDC, returning only data that meets the\n",
      "        subset parameters. Spatial and temporal subsetting based on the input parameters happens\n",
      "        by default when subset=True, but additional subsetting options are available.\n",
      "        Spatial subsetting returns all data that are within the area of interest (but not complete\n",
      "        granules. This eliminates false-positive granules returned by the metadata-level search)\n",
      "    restart: boolean, default false\n",
      "        If previous download was terminated unexpectedly. Run again with restart set to True to continue. \n",
      "    **kwargs : key-value pairs\n",
      "        Additional parameters to be passed to the subsetter.\n",
      "        By default temporal and spatial subset keys are passed.\n",
      "        Acceptable key values are ['format','projection','projection_parameters','Coverage'].\n",
      "        The variable 'Coverage' list should be constructed using the `order_vars.wanted` attribute of the object.\n",
      "        At this time (2020-05), only variable ('Coverage') parameters will be automatically formatted.\n",
      "    \n",
      "    See Also\n",
      "    --------\n",
      "    granules.download\n",
      "\n"
     ]
    }
   ],
   "source": [
    "help(tsl.download_granules)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\u001b[0;31mSignature:\u001b[0m \u001b[0mtsl\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0morder_granules\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msubset\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0memail\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
       "\u001b[0;31mSource:\u001b[0m   \n",
       "    \u001b[0;32mdef\u001b[0m \u001b[0morder_granules\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msubset\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0memail\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;34m\"\"\"\u001b[0m\n",
       "\u001b[0;34m        Place an order for the available granules for the icesat2data object.\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m        Parameters\u001b[0m\n",
       "\u001b[0;34m        ----------\u001b[0m\n",
       "\u001b[0;34m        verbose : boolean, default False\u001b[0m\n",
       "\u001b[0;34m            Print out all feedback available from the order process.\u001b[0m\n",
       "\u001b[0;34m            Progress information is automatically printed regardless of the value of verbose.\u001b[0m\n",
       "\u001b[0;34m        subset : boolean, default True\u001b[0m\n",
       "\u001b[0;34m            Apply subsetting to the data order from the NSIDC, returning only data that meets the\u001b[0m\n",
       "\u001b[0;34m            subset parameters. Spatial and temporal subsetting based on the input parameters happens\u001b[0m\n",
       "\u001b[0;34m            by default when subset=True, but additional subsetting options are available.\u001b[0m\n",
       "\u001b[0;34m            Spatial subsetting returns all data that are within the area of interest (but not complete\u001b[0m\n",
       "\u001b[0;34m            granules. This eliminates false-positive granules returned by the metadata-level search)\u001b[0m\n",
       "\u001b[0;34m        email: boolean, default True\u001b[0m\n",
       "\u001b[0;34m            Have NSIDC auto-send order status email updates to indicate order status as pending/completed.\u001b[0m\n",
       "\u001b[0;34m        **kwargs : key-value pairs\u001b[0m\n",
       "\u001b[0;34m            Additional parameters to be passed to the subsetter.\u001b[0m\n",
       "\u001b[0;34m            By default temporal and spatial subset keys are passed.\u001b[0m\n",
       "\u001b[0;34m            Acceptable key values are ['format','projection','projection_parameters','Coverage'].\u001b[0m\n",
       "\u001b[0;34m            The variable 'Coverage' list should be constructed using the `order_vars.wanted` attribute of the object.\u001b[0m\n",
       "\u001b[0;34m            At this time (2020-05), only variable ('Coverage') parameters will be automatically formatted.\u001b[0m\n",
       "\u001b[0;34m        \u001b[0m\n",
       "\u001b[0;34m        See Also\u001b[0m\n",
       "\u001b[0;34m        --------\u001b[0m\n",
       "\u001b[0;34m        granules.place_order\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m        Examples\u001b[0m\n",
       "\u001b[0;34m        --------\u001b[0m\n",
       "\u001b[0;34m        >>> reg_a = icepyx.icesat2data.Icesat2Data('ATL06',[-55, 68, -48, 71],['2019-02-20','2019-02-28'])\u001b[0m\n",
       "\u001b[0;34m        >>> reg_a.earthdata_login(user_id,user_email)\u001b[0m\n",
       "\u001b[0;34m        Earthdata Login password:  ········        \u001b[0m\n",
       "\u001b[0;34m        >>> reg_a.order_granules()\u001b[0m\n",
       "\u001b[0;34m        order ID: [###############]\u001b[0m\n",
       "\u001b[0;34m        [order status output]\u001b[0m\n",
       "\u001b[0;34m        error messages:\u001b[0m\n",
       "\u001b[0;34m        [if any were returned from the NSIDC subsetter, e.g. No data found that matched subset constraints.]\u001b[0m\n",
       "\u001b[0;34m        .\u001b[0m\n",
       "\u001b[0;34m        .\u001b[0m\n",
       "\u001b[0;34m        .\u001b[0m\n",
       "\u001b[0;34m        Retry request status is: complete\u001b[0m\n",
       "\u001b[0;34m        \"\"\"\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mhasattr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'reqparams'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreqparams\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqtype\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m'search'\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m            \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqtype\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m'download'\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32mif\u001b[0m \u001b[0;34m'email'\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfmted_keys\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mkeys\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0memail\u001b[0m\u001b[0;34m==\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m            \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbuild_params\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfmted_keys\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m            \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbuild_params\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reqparams\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfmted_keys\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0memail\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_email\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32mif\u001b[0m \u001b[0msubset\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mFalse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m            \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_subsetparams\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32melif\u001b[0m \u001b[0msubset\u001b[0m\u001b[0;34m==\u001b[0m\u001b[0;32mTrue\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mhasattr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'_subsetparams'\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_subsetparams\u001b[0m\u001b[0;34m==\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m            \u001b[0;32mdel\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_subsetparams\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m     \u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;31m#REFACTOR: add checks here to see if the granules object has been created, and also if it already has a list of avail granules (if not, need to create one and add session)\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mhasattr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'_granules'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgranules\u001b[0m\u001b[0;34m\u001b[0m\n",
       "\u001b[0;34m\u001b[0m        \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_granules\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mplace_order\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mCMRparams\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreqparams\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msubsetparams\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mverbose\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msubset\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msession\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_session\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mgeom_filepath\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_geom_filepath\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
       "\u001b[0;31mFile:\u001b[0m      /srv/conda/envs/notebook/lib/python3.7/site-packages/icepyx/core/icesat2data.py\n",
       "\u001b[0;31mType:\u001b[0m      method\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "tsl.order_granules??\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1: Define the parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ipd.Icesat2DataCheck will be used. Check the parameters!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Required parameters are:\n",
    "1. **dataset**: The version of ATL (e.g., ATL13 - inland water)\n",
    "2. **spatial_extent**= [lower-left-longitude, lower-left-latitute, upper-right-longitude, upper-right-latitude] (e.g., [103.643, 12.375, 104.667, 13.287])\n",
    "3. **date_range** = e.g., [2018-06-01,2020-06-21]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "#### Tonlesap Lake (TSL)\n",
    "atl_dt = 'ATL13'\n",
    "sp_ex = [103.643, 12.375, 104.667, 13.287]\n",
    "date_r = ['2018-06-01','2020-06-21']"
   ]
  },
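  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before building the data object, we can sanity-check that the bounding box follows the [lower-left-longitude, lower-left-latitude, upper-right-longitude, upper-right-latitude] convention. This is a minimal sketch of our own; the helper `check_bbox` is not part of icepyx."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Hypothetical helper (not an icepyx function): validate bounding-box order and ranges\n",
    "def check_bbox(bbox):\n",
    "    min_lon, min_lat, max_lon, max_lat = bbox\n",
    "    assert -180 <= min_lon < max_lon <= 180, 'longitudes out of range or order'\n",
    "    assert -90 <= min_lat < max_lat <= 90, 'latitudes out of range or order'\n",
    "    return True\n",
    "\n",
    "check_bbox(sp_ex)"
   ]
  },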
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create the data obejct, and then map it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl = ipd.Icesat2Data(atl_dt,sp_ex,date_r)\n",
    "\n",
    "### test whether it work well\n",
    "print(tsl.dates)\n",
    "\n",
    "### visulaize them\n",
    "tsl.visualize_spatial_extent()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's make an interative map"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from ipyleaflet import Map, Marker, basemaps, Rectangle"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "center = ((sp_ex[1]+sp_ex[3])/2,(sp_ex[0]+sp_ex[2])/2)\n",
    "basemap = basemaps.Stamen.Terrain\n",
    "m = Map(center=center, zoom=8, basemap=basemap)\n",
    "#label=rgi_gdf_conus_aea.loc[label]['Name']\n",
    "marker = Marker(location=center, draggable=True)\n",
    "rectangle = Rectangle(bounds=((sp_ex[1], sp_ex[0]), (sp_ex[3], sp_ex[2])))\n",
    "m.add_layer(rectangle);\n",
    "display(m)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the information on our dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.dataset_summary_info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 2: Querying a dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`region_a.avail_granules()` or `region_a.CMRparams` is using for building the search parameters for searching available dataset collection. Its formate is a dictonary (i.e., key:value).\n",
    "\n",
    "* CMR = Common Metadata Repository\n",
    "* Granules: essential data files with specific bounds"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## CMR parameters\n",
    "tsl.CMRparams\n",
    "\n",
    "## search for available granules and their basic summary info\n",
    "tsl.avail_granules()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Get a list of the available granule IDs\n",
    "tsl.avail_granules(ids=True)\n",
    "\n",
    "## detailed information\n",
    "#tsl.granules.avail"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Finding the available ICESat-2 via Openaltimetry. It is more intuitive way"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can find the history of ICESat-2 using https://openaltimetry.org/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 3: Log in to NASA Earthdata"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## account info\n",
    "earthdata_uid = 'choms516'\n",
    "email = 'whaudtlr516@gmail.com'\n",
    "\n",
    "## log-in\n",
    "tsl.earthdata_login(earthdata_uid,email)\n",
    "## Then, we can type the password\n",
    "\n",
    "tsl.subsetparams(Coverage=tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 4: Additional parameters and subsetting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Configuration parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once we have generated our session, we must define the required configuration parameters for downloading data. These information is already built after running `tsl.avail_granules()` (the detailed information is ran through `tsl.reqparams`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(tsl.reqparams)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Subsetting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is a process for cropping the ROI and other variables, such as temporal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.subsetparams()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## See subsetting options\n",
    "tsl.show_custom_options(dictview=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## See the variables\n",
    "tsl.order_vars.avail(options=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I will extract these variables:\n",
    "* ht_water_surf (m): water surface height for each short segment (app. 100 signal photons)\n",
    "* segment_lat (degrees) & segment_lon\n",
    "* ht_ortho (m): orthometric height EGM2008 converted from ellipsoidal height\n",
    "* stdev_water_surf (m): derived SD of water surface, calculated over long segments with result reported at each short segment\n",
    "* water_depth (m): depth from the mean water surface to detected botton\n",
    "* err_ht_water_surf (m): precisions per 100 inland water photons"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.order_vars.append(var_list = ['ht_water_surf','segment_lat','segment_lon','ht_ortho',\n",
    "                                  'stdev_water_surf','water_depth','err_ht_water_surf'])\n",
    "## Check the orders\n",
    "pprint(tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Apply the subsetting criteria in ordering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.subsetparams(Coverage=tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 5: Place the Data Order"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## with variable subsetting\n",
    "tsl.order_granules(Coverage = tsl.order_vars.wanted)\n",
    "## without mail notification\n",
    "#tsl.order_granules(Coverage = tsl.order_vars.wanted, email=False)\n",
    "\n",
    "## without variable\n",
    "#tsl.order_granules()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## view lists\n",
    "tsl.granules.orderIDs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 6: Download the Order"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "path = './download'\n",
    "\n",
    "tsl.download_granules(path,Coverage=tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plot the ICESat-2 in individual level"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Check the files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from pathlib import Path\n",
    "import h5py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "List up the downloaded files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_home = Path('/home/jovyan/ICESat_water_level/extraction/download/')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "38\n"
     ]
    }
   ],
   "source": [
    "## list them up and check them\n",
    "files= list(data_home.glob('*.h5'))\n",
    "print(len(files))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Look around the structure. I will select the latest one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/                        Group\n",
      "/METADATA                Group\n",
      "/METADATA/AcquisitionInformation Group\n",
      "/METADATA/AcquisitionInformation/lidar Group\n",
      "/METADATA/AcquisitionInformation/lidarDocument Group\n",
      "/METADATA/AcquisitionInformation/platform Group\n",
      "/METADATA/AcquisitionInformation/platformDocument Group\n",
      "/METADATA/DataQuality    Group\n",
      "/METADATA/DataQuality/CompletenessOmission Group\n",
      "/METADATA/DataQuality/DomainConsistency Group\n",
      "/METADATA/DatasetIdentification Group\n",
      "/METADATA/Extent         Group\n",
      "/METADATA/Lineage        Group\n",
      "/METADATA/Lineage/ANC19  Group\n",
      "/METADATA/Lineage/ANC20-01 Group\n",
      "/METADATA/Lineage/ANC20-02 Group\n",
      "/METADATA/Lineage/ANC25-13 Group\n",
      "/METADATA/Lineage/ANC26-13 Group\n",
      "/METADATA/Lineage/ANC36-13 Group\n",
      "/METADATA/Lineage/ANC38-13 Group\n",
      "/METADATA/Lineage/ATL03  Group\n",
      "/METADATA/Lineage/ATL09  Group\n",
      "/METADATA/Lineage/Control Group\n",
      "/METADATA/ProcessStep    Group\n",
      "/METADATA/ProcessStep/Browse Group\n",
      "/METADATA/ProcessStep/Metadata Group\n",
      "/METADATA/ProcessStep/PGE Group\n",
      "/METADATA/ProcessStep/QA Group\n",
      "/METADATA/ProductSpecificationDocument Group\n",
      "/METADATA/QADatasetIdentification Group\n",
      "/METADATA/SeriesIdentification Group\n",
      "/ancillary_data          Group\n",
      "/ancillary_data/atlas_sdp_gps_epoch Dataset {1}\n",
      "/ancillary_data/data_end_utc Dataset {1}\n",
      "/ancillary_data/data_start_utc Dataset {1}\n",
      "/ancillary_data/end_delta_time Dataset {1}\n",
      "/ancillary_data/granule_end_utc Dataset {1}\n",
      "/ancillary_data/granule_start_utc Dataset {1}\n",
      "/ancillary_data/start_delta_time Dataset {1}\n",
      "/gt1l                    Group\n",
      "/gt1l/err_ht_water_surf  Dataset {472/Inf}\n",
      "/gt1l/ht_ortho           Dataset {472/Inf}\n",
      "/gt1l/ht_water_surf      Dataset {472/Inf}\n",
      "/gt1l/segment_lat        Dataset {472/Inf}\n",
      "/gt1l/segment_lon        Dataset {472/Inf}\n",
      "/gt1l/stdev_water_surf   Dataset {472/Inf}\n",
      "/gt1l/water_depth        Dataset {472/Inf}\n",
      "/gt1r                    Group\n",
      "/gt1r/err_ht_water_surf  Dataset {90/Inf}\n",
      "/gt1r/ht_ortho           Dataset {90/Inf}\n",
      "/gt1r/ht_water_surf      Dataset {90/Inf}\n",
      "/gt1r/segment_lat        Dataset {90/Inf}\n",
      "/gt1r/segment_lon        Dataset {90/Inf}\n",
      "/gt1r/stdev_water_surf   Dataset {90/Inf}\n",
      "/gt1r/water_depth        Dataset {90/Inf}\n",
      "/gt2l                    Group\n",
      "/gt2l/err_ht_water_surf  Dataset {665/Inf}\n",
      "/gt2l/ht_ortho           Dataset {665/Inf}\n",
      "/gt2l/ht_water_surf      Dataset {665/Inf}\n",
      "/gt2l/segment_lat        Dataset {665/Inf}\n",
      "/gt2l/segment_lon        Dataset {665/Inf}\n",
      "/gt2l/stdev_water_surf   Dataset {665/Inf}\n",
      "/gt2l/water_depth        Dataset {665/Inf}\n",
      "/gt2r                    Group\n",
      "/gt2r/err_ht_water_surf  Dataset {342/Inf}\n",
      "/gt2r/ht_ortho           Dataset {342/Inf}\n",
      "/gt2r/ht_water_surf      Dataset {342/Inf}\n",
      "/gt2r/segment_lat        Dataset {342/Inf}\n",
      "/gt2r/segment_lon        Dataset {342/Inf}\n",
      "/gt2r/stdev_water_surf   Dataset {342/Inf}\n",
      "/gt2r/water_depth        Dataset {342/Inf}\n",
      "/gt3l                    Group\n",
      "/gt3l/err_ht_water_surf  Dataset {598/Inf}\n",
      "/gt3l/ht_ortho           Dataset {598/Inf}\n",
      "/gt3l/ht_water_surf      Dataset {598/Inf}\n",
      "/gt3l/segment_lat        Dataset {598/Inf}\n",
      "/gt3l/segment_lon        Dataset {598/Inf}\n",
      "/gt3l/stdev_water_surf   Dataset {598/Inf}\n",
      "/gt3l/water_depth        Dataset {598/Inf}\n",
      "/gt3r                    Group\n",
      "/gt3r/err_ht_water_surf  Dataset {188/Inf}\n",
      "/gt3r/ht_ortho           Dataset {188/Inf}\n",
      "/gt3r/ht_water_surf      Dataset {188/Inf}\n",
      "/gt3r/segment_lat        Dataset {188/Inf}\n",
      "/gt3r/segment_lon        Dataset {188/Inf}\n",
      "/gt3r/stdev_water_surf   Dataset {188/Inf}\n",
      "/gt3r/water_depth        Dataset {188/Inf}\n",
      "/orbit_info              Group\n",
      "/orbit_info/sc_orient    Dataset {1/Inf}\n",
      "/orbit_info/sc_orient_time Dataset {1/Inf}\n"
     ]
    }
   ],
   "source": [
    "file_latest = files[37]\n",
    "!h5ls -r {file_latest}"
   ]
  },
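  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the `h5ls` command-line tool is unavailable, an equivalent listing can be produced in pure Python with `h5py` (a sketch; `visititems` walks every group and dataset in the file):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import h5py\n",
    "\n",
    "## Pure-Python alternative to `h5ls -r`: print each group/dataset path\n",
    "def print_h5_tree(filename):\n",
    "    with h5py.File(filename, 'r') as f:\n",
    "        def visitor(name, obj):\n",
    "            kind = 'Dataset' if isinstance(obj, h5py.Dataset) else 'Group'\n",
    "            print(f'/{name:<40} {kind}')\n",
    "        f.visititems(visitor)\n",
    "\n",
    "#print_h5_tree(file_latest)"
   ]
  },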
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Read it.\n",
    "with h5py.File(file_latest, 'r') as f:\n",
    "    lat = f['gt1l']['segment_lat'][:]\n",
    "    long = f['gt1l']['segment_lon'][:]\n",
    "    pairs = [1,2,3]\n",
    "    beams = ['l','r']\n",
    "    #print(lat)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Mapping"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See the segments on the map."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import geopandas as gpd\n",
    "import matplotlib.pyplot as plt\n",
    "from ipyleaflet import Map, GeoData, basemaps, basemaps, basemap_to_tiles, CircleMarker, LayerGroup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "## create it to the dataframe\n",
    "test_df = pd.DataFrame({'Latitude':lat,'Longitude':long})\n",
    "## geodataframe\n",
    "#test_gdf = gpd.GeoDataFrame(test_df,geometry=gpd.points_from_xy(test_df.Longitude,"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "1a6d2cc4ab8242eea3562dec6b6388c8",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map(center=[12.831, 104.155], controls=(ZoomControl(options=['position', 'zoom_in_text', 'zoom_in_title', 'zoo…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "### Mapping process\n",
    "center = ((sp_ex[1]+sp_ex[3])/2,(sp_ex[0]+sp_ex[2])/2)\n",
    "basemap = basemaps.Stamen.Terrain\n",
    "m = Map(center=center, zoom=9, basemap=basemap)\n",
    "#map_gdp = GeoData(geo_dataframe = test_gdf, icon_size=[5,5])\n",
    "\n",
    "### Ipyleaflet does not provide points, so I made a function for making circles\n",
    "def create_marker(row):\n",
    "    lat_lon = (row[\"Latitude\"], row[\"Longitude\"])\n",
    "    return CircleMarker(location=lat_lon,\n",
    "                    radius = 3,\n",
    "                    color = \"red\",\n",
    "                    fill_color = \"black\",\n",
    "                    weight=1)\n",
    "\n",
    "markers = test_df.apply(create_marker,axis=1)\n",
    "layer_group = LayerGroup(layers=tuple(markers.values))\n",
    "m.add_layer(layer_group)\n",
    "m"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ploting the photons\n",
    "I will make histogram of water level. Before it, ATL13 reader is necessary for convenient coding. It will read the file and store it in a dictionary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib widget\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def alt13_to_df(filename, beam):    \n",
    "    f = h5py.File(filename, 'r')\n",
    "    f_beam = f[beam]\n",
    "    lat = f_beam['segment_lat'][:]\n",
    "    long = f_beam['segment_lon'][:]\n",
    "    ws = f_beam['ht_water_surf'][:]\n",
    "    ws_sd = f_beam['stdev_water_surf'][:]\n",
    "    ws_err = f_beam['err_ht_water_surf'][:]\n",
    "    ortho = f_beam['ht_ortho'][:]\n",
    "    wd = f_beam['water_depth'][:]\n",
    "    alt13_df = pd.DataFrame({'Latitude':lat,'Longitude':long,'SurfaceH':ws,\n",
    "                            'SH_SD':ws_sd, 'SH_error':ws_err,'OrthoH':ortho,\n",
    "                            'WaterD':wd})\n",
    "    return alt13_df"
   ]
  },
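  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the reader in place, all six ground tracks can be combined into a single DataFrame for later analysis (a sketch; it assumes `file_latest` from above and skips any beam group missing from the granule):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Combine the six beams (pairs 1-3, left/right) into one DataFrame,\n",
    "## tagging each row with its beam name\n",
    "beam_dfs = []\n",
    "for pair in [1, 2, 3]:\n",
    "    for side in ['l', 'r']:\n",
    "        beam = f'gt{pair}{side}'\n",
    "        try:\n",
    "            df = alt13_to_df(file_latest, beam)\n",
    "        except KeyError:\n",
    "            continue  ## beam group absent from this granule\n",
    "        df['beam'] = beam\n",
    "        beam_dfs.append(df)\n",
    "all_beams = pd.concat(beam_dfs, ignore_index=True)\n",
    "all_beams.head()"
   ]
  },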
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gt21 = alt13_to_df(file_latest,'gt2r')\n",
    "\n",
    "# plot the water surface heights\n",
    "fig = plt.figure(figsize=(6,4))\n",
    "ax = fig.add_subplot(111)\n",
    "ax.plot(gt21['Longitude'], gt21['SurfaceH'], '.', markersize=0.25, label='all segments')\n",
    "ax.legend()\n",
    "plt.title('Water Surface')\n",
    "ax.set_xlabel('Longitude')\n",
    "ax.set_ylabel('Water Surface, m')\n",
    "plt.show()\n",
    "\n",
    "# plot the orthometric heights\n",
    "fig = plt.figure(figsize=(6,4))\n",
    "ax = fig.add_subplot(111)\n",
    "ax.plot(gt21['Longitude'], gt21['OrthoH'], '.', markersize=0.25, label='all segments')\n",
    "ax.legend()\n",
    "plt.title('Orthometric Heights')\n",
    "ax.set_xlabel('Longitude')\n",
    "ax.set_ylabel('Orthometric Heights, m')\n",
    "plt.show()\n",
    "\n",
    "# plot the water depth\n",
    "fig = plt.figure(figsize=(6,4))\n",
    "ax = fig.add_subplot(111)\n",
    "ax.plot(gt21['Longitude'], gt21['WaterD'], '.', markersize=0.25, label='all segments')\n",
    "ax.legend()\n",
    "plt.title('Water Depth')\n",
    "ax.set_xlabel('Longitude')\n",
    "ax.set_ylabel('Water Depth, m')\n",
    "plt.show()"
   ]
  },
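  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on the two height references, the geoid undulation implied by the data is simply the ellipsoidal water surface height minus the orthometric height (a sketch using the `gt21` DataFrame from above; over a lake this difference should be roughly constant):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Implied EGM2008 geoid undulation: N = ellipsoidal height - orthometric height\n",
    "geoid_n = gt21['SurfaceH'] - gt21['OrthoH']\n",
    "print('mean undulation (m):', geoid_n.mean())\n",
    "print('spread (m):', geoid_n.std())"
   ]
  },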
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Through mapping three types of water level, the `water depth` is insignificant. Since the `water depth` was calculated from differences between the mean water surface to detected bottom, difficulties in detecting bottom might lead to inaccurate water depth. The Tonle Sap is large lake so the water depth might be useless, but other small water bodies (e.g., wetlands) might show accurate water depth.\n",
    "We need to figure out what `water surface height` and `orthometric height` indicate. According to the document, `water surface height` is height reported for each short segment with reference to WGS84 ellipsoid. `orthometric height` is height EGM2008 converted from ellipsoidal height."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:notebook] *",
   "language": "python",
   "name": "conda-env-notebook-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
