{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Extracting the water level"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Downloading the ICESat-2 Data on Tonle Sap Lake"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook is created by Myung Sik Cho. It modified the notebook used in 2020 ICESat 2 Hackweek by Jessica Sheick and Amy Steiker."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Import the package: icepyx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from icepyx import icesat2data as ipd\n",
    "\n",
    "import os\n",
    "import shutil\n",
    "from pprint import pprint\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are several key steps for accessing data from the NSIDC API:\n",
    "   1. Define your parameters (spatial, temporal, dataset, etc.)\n",
    "   2. Query the NSIDC API to find out more information about the dataset\n",
    "   3. Log in to NASA Earthdata\n",
    "   4. Define additional parameters (e.g. subsetting/customization options)\n",
    "   5. Order your data\n",
    "   6. Download your data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1: Define the parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ipd.Icesat2DataCheck will be used. Check the parameters!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Required parameters are:\n",
    "1. **dataset**: The version of ATL (e.g., ATL13 - inland water)\n",
    "2. **spatial_extent**= [lower-left-longitude, lower-left-latitute, upper-right-longitude, upper-right-latitude] (e.g., [103.643, 12.375, 104.667, 13.287])\n",
    "3. **date_range** = e.g., [2018-06-01,2020-06-21]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#### Tonlesap Lake (TSL)\n",
    "atl_dt = 'ATL13'\n",
    "sp_ex = [103.643, 12.375, 104.667, 13.287]\n",
    "date_r = ['2018-06-01','2020-06-21']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create the data obejct, and then map it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl = ipd.Icesat2Data(atl_dt,sp_ex,date_r)\n",
    "\n",
    "### test whether it work well\n",
    "print(tsl.dates)\n",
    "\n",
    "### visulaize them\n",
    "tsl.visualize_spatial_extent()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's make an interative map"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from ipyleaflet import Map, Marker, basemaps, Rectangle"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "center = ((sp_ex[1]+sp_ex[3])/2,(sp_ex[0]+sp_ex[2])/2)\n",
    "basemap = basemaps.Stamen.Terrain\n",
    "m = Map(center=center, zoom=8, basemap=basemap)\n",
    "#label=rgi_gdf_conus_aea.loc[label]['Name']\n",
    "marker = Marker(location=center, draggable=True)\n",
    "rectangle = Rectangle(bounds=((sp_ex[1], sp_ex[0]), (sp_ex[3], sp_ex[2])))\n",
    "m.add_layer(rectangle);\n",
    "display(m)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the information on our dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.dataset_summary_info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 2: Querying a dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`region_a.avail_granules()` or `region_a.CMRparams` is using for building the search parameters for searching available dataset collection. Its formate is a dictonary (i.e., key:value).\n",
    "\n",
    "* CMR = Common Metadata Repository\n",
    "* Granules: essential data files with specific bounds"
   ]
  },
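  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, a CMR search dictionary looks roughly like the following. This is only an illustrative sketch with made-up formatting; the real dictionary is generated by `tsl.CMRparams`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Illustrative sketch only -- not actual icepyx output\n",
    "cmr_example = {'short_name': 'ATL13',\n",
    "               'bounding_box': '103.643,12.375,104.667,13.287',\n",
    "               'temporal': '2018-06-01T00:00:00Z,2020-06-21T23:59:59Z'}\n",
    "print(cmr_example['short_name'])"
   ]
  },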
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## CMR parameters\n",
    "tsl.CMRparams\n",
    "\n",
    "## search for available granules and their basic summary info\n",
    "tsl.avail_granules()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Get a list of the available granule IDs\n",
    "tsl.avail_granules(ids=True)\n",
    "\n",
    "## detailed information\n",
    "#tsl.granules.avail"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Finding the available ICESat-2 via Openaltimetry. It is more intuitive way"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can find the history of ICESat-2 using https://openaltimetry.org/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 3: Log in to NASA Earthdata"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## account info\n",
    "earthdata_uid = 'choms516'\n",
    "email = 'whaudtlr516@gmail.com'\n",
    "\n",
    "## log-in\n",
    "tsl.earthdata_login(earthdata_uid,email)\n",
    "## Then, we can type the password"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 4: Additional parameters and subsetting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Configuration parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once we have generated our session, we must define the required configuration parameters for downloading data. These information is already built after running `tsl.avail_granules()` (the detailed information is ran through `tsl.reqparams`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(tsl.reqparams)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Subsetting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is a process for cropping the ROI and other variables, such as temporal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.subsetparams()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## See subsetting options\n",
    "tsl.show_custom_options(dictview=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## See the variables\n",
    "tsl.order_vars.avail(options=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I will extract these variables:\n",
    "* ht_water_surf (m): water surface height for each short segment (app. 100 signal photons)\n",
    "* segment_lat (degrees) & segment_lon\n",
    "* ht_ortho (m): orthometric height EGM2008 converted from ellipsoidal height\n",
    "* stdev_water_surf (m): derived SD of water surface, calculated over long segments with result reported at each short segment\n",
    "* water_depth (m): depth from the mean water surface to detected botton\n",
    "* err_ht_water_surf (m): precisions per 100 inland water photons"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.order_vars.append(var_list = ['ht_water_surf','segment_lat','segment_lon','ht_ortho',\n",
    "                                  'stdev_water_surf','water_depth','err_ht_water_surf'])\n",
    "## Check the orders\n",
    "pprint(tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Apply the subsetting criteria in ordering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tsl.subsetparams(Coverage=tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 5: Place the Data Order"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## with variable subsetting\n",
    "tsl.order_granules(Coverage = tsl.order_vars.wanted)\n",
    "## without mail notification\n",
    "#tsl.order_granules(Coverage = tsl.order_vars.wanted, email=False)\n",
    "\n",
    "## without variable\n",
    "#tsl.order_granules()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## view lists\n",
    "tsl.granules.orderIDs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 6: Download the Order"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "path = './download'\n",
    "\n",
    "tsl.download_granules(path,Coverage=tsl.order_vars.wanted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plot the ICESat-2 in individual level"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Check the files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from pathlib import Path\n",
    "import h5py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "List up the downloaded files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_home = Path('/home/jovyan/ICESat_water_level/extraction/download/')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## list them up and check them\n",
    "files= list(data_home.glob('*.h5'))\n",
    "print(len(files))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Look around the structure. I will select the latest one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "file_latest = files[37]\n",
    "!h5ls -r {file_latest}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Read it.\n",
    "with h5py.File(file_latest, 'r') as f:\n",
    "    lat = f['gt1l']['segment_lat'][:]\n",
    "    long = f['gt1l']['segment_lon'][:]\n",
    "    pairs = [1,2,3]\n",
    "    beams = ['l','r']\n",
    "    #print(lat)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Mapping"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See the segments on the map."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import geopandas as gpd\n",
    "import matplotlib.pyplot as plt\n",
    "from ipyleaflet import Map, GeoData, basemaps, basemaps, basemap_to_tiles, CircleMarker, LayerGroup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## create it to the dataframe\n",
    "test_df = pd.DataFrame({'Latitude':lat,'Longitude':long})\n",
    "## geodataframe\n",
    "test_gdf = gpd.GeoDataFrame(test_df,geometry=gpd.points_from_xy(test_df.Longitude,"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Mapping process\n",
    "center = ((sp_ex[1]+sp_ex[3])/2,(sp_ex[0]+sp_ex[2])/2)\n",
    "basemap = basemaps.Stamen.Terrain\n",
    "m = Map(center=center, zoom=9, basemap=basemap)\n",
    "#map_gdp = GeoData(geo_dataframe = test_gdf, icon_size=[5,5])\n",
    "\n",
    "### Ipyleaflet does not provide points, so I made a function for making circles\n",
    "def create_marker(row):\n",
    "    lat_lon = (row[\"Latitude\"], row[\"Longitude\"])\n",
    "    return CircleMarker(location=lat_lon,\n",
    "                    radius = 3,\n",
    "                    color = \"red\",\n",
    "                    fill_color = \"black\",\n",
    "                    weight=1)\n",
    "\n",
    "markers = test_df.apply(create_marker,axis=1)\n",
    "layer_group = LayerGroup(layers=tuple(markers.values))\n",
    "m.add_layer(layer_group)\n",
    "m"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ploting the photons\n",
    "I will make histogram of water level. Before it, ATL13 reader is necessary for convenient coding. It will read the file and store it in a dictionary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib widget\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def alt13_to_df(filename, beam):    \n",
    "    f = h5py.File(filename, 'r')\n",
    "    f_beam = f[beam]\n",
    "    lat = f_beam['segment_lat'][:]\n",
    "    long = f_beam['segment_lon'][:]\n",
    "    ws = f_beam['ht_water_surf'][:]\n",
    "    ws_sd = f_beam['stdev_water_surf'][:]\n",
    "    ws_err = f_beam['err_ht_water_surf'][:]\n",
    "    ortho = f_beam['ht_ortho'][:]\n",
    "    wd = f_beam['water_depth'][:]\n",
    "    alt13_df = pd.DataFrame({'Latitude':lat,'Longitude':long,'SurfaceH':ws,\n",
    "                            'SH_SD':ws_sd, 'SH_error':ws_err,'OrthoH':ortho,\n",
    "                            'WaterD':wd})\n",
    "    return alt13_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gt21 = alt13_to_df(file_latest,'gt2r')\n",
    "\n",
    "# mapping the water surface\n",
    "fig=plt.figure(figsize=(6,4))\n",
    "ax = fig.add_subplot(111)\n",
    "ax.plot(gt21['Longitude'],gt21['SurfaceH'],markersize=0.25, label='all segements')\n",
    "h_leg=ax.legend()\n",
    "plt.title('Water Surface')\n",
    "ax.set_xlabel('Longitude')\n",
    "ax.set_ylabel('Water Surface, m')\n",
    "plt.show()\n",
    "\n",
    "# mapping the orthometri heights\n",
    "fig=plt.figure(figsize=(6,4))\n",
    "ax = fig.add_subplot(111)\n",
    "ax.plot(gt21['Longitude'],gt21['OrthoH'],markersize=0.25, label='all segements')\n",
    "h_leg=ax.legend()\n",
    "plt.title('Orthometric Heights')\n",
    "ax.set_xlabel('Longitude')\n",
    "ax.set_ylabel('Orthometric Heights, m')\n",
    "plt.show()\n",
    "\n",
    "# mapping the water depth\n",
    "fig=plt.figure(figsize=(6,4))\n",
    "ax = fig.add_subplot(111)\n",
    "ax.plot(gt21['Longitude'],gt21['WaterD'],markersize=0.25, label='all segements')\n",
    "h_leg=ax.legend()\n",
    "plt.title('Water Depth')\n",
    "ax.set_xlabel('Longitude')\n",
    "ax.set_ylabel('Water Depth, m')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Through mapping three types of water level, the `water depth` is insignificant. Since the `water depth` was calculated from differences between the mean water surface to detected bottom, difficulties in detecting bottom might lead to inaccurate water depth. The Tonle Sap is large lake so the water depth might be useless, but other small water bodies (e.g., wetlands) might show accurate water depth.\n",
    "We need to figure out what `water surface height` and `orthometric height` indicate. According to the document, `water surface height` is height reported for each short segment with reference to WGS84 ellipsoid. `orthometric height` is height EGM2008 converted from ellipsoidal height."
   ]
  }
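  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the relationship between the two height references (a simplified sketch with hypothetical numbers, not the ATL13 algorithm): an orthometric height is the ellipsoidal height minus the geoid undulation at that location."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Simplified sketch: orthometric = ellipsoidal - geoid undulation\n",
    "## Both values below are hypothetical, not an EGM2008 lookup\n",
    "ellipsoidal_h = 5.2       # m above the WGS84 ellipsoid\n",
    "geoid_undulation = -20.0  # m, geoid height relative to the ellipsoid\n",
    "orthometric_h = ellipsoidal_h - geoid_undulation\n",
    "print(orthometric_h)      # m above the EGM2008 geoid"
   ]
  }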
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:notebook] *",
   "language": "python",
   "name": "conda-env-notebook-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
