{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Data wrangling\n",
    "\n",
     "This notebook loads the datasets used in the analysis into a single NetCDF4 file, preserving descriptive attributes for each dataset. The datasets used in this notebook are listed below. The output file is available in the Google Cloud Storage bucket for this Jupyter Book and is loaded in each notebook. \n",
    " \n",
    "**Input**:\n",
    " - [ICESat-2 monthly gridded sea ice data](https://icesat-2.gsfc.nasa.gov/sea-ice-thickness-data)\n",
    " - [Monthly NSIDC sea ice concentration data](https://nsidc.org/data/g02202)\n",
    " - [NSIDC region mask](https://nsidc.org/data/polar-stereo/tools_masks.html#region_masks) and [coordinate](https://nsidc.org/data/polar-stereo/tools_geo_pixel.html) tools (psn25 v3)\n",
     " - [ERA5 climate reanalysis data](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels-monthly-means?tab=overview)\n",
    " - [PIOMAS mean monthly ice thickness](http://psc.apl.uw.edu/research/projects/arctic-sea-ice-volume-anomaly/)\n",
    " - [NSIDC sea ice motion vectors](https://nsidc.org/data/nsidc-0116)\n",
    "     \n",
    "\n",
    "**Output**: \n",
     " - NetCDF4 file "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```{note}\n",
     "This notebook is **NOT** configured to run in Google Colab. The file generated by this notebook is also provided in the Google Cloud Storage bucket for this book. [Click here](https://storage.googleapis.com/icesat2-book-data/icesat2-book-dataset.nc) to download the dataset.\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import notebook dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "remove-output"
    ]
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "import numpy.ma as ma\n",
    "import xarray as xr\n",
    "import pandas as pd\n",
    "import scipy.interpolate\n",
    "import pyproj\n",
    "from datetime import date\n",
    "from glob import glob\n",
    "import sys\n",
    "\n",
    "# Ignore warnings in the notebook to improve display\n",
    "# You might want to remove this when debugging/writing new code\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define filepaths\n",
    "Define filepaths to data on your local machine"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#path to monthly gridded ICESat-2 data\n",
    "IS2_path = ''\n",
    "\n",
     "#path to NSIDC monthly sea ice concentration data\n",
    "SIC_path = ''\n",
    "\n",
    "#path to NSIDC region mask for the Arctic\n",
    "regionMask_path = ''\n",
    "\n",
    "#path to ERA5 climate reanalysis data \n",
    "ERA5_path = ''\n",
    "\n",
    "#path to PIOMAS data\n",
    "PIOMAS_path = ''\n",
    "\n",
    "#path to NSIDC sea ice drift data \n",
    "drift_path = ''"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "remove-cell"
    ]
   },
   "source": [
    "### Path on Nicole's computer\n",
    "Cells removed in jupyter book"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "remove-cell"
    ]
   },
   "outputs": [],
   "source": [
    "#path to directory of data\n",
    "localDirectory = '/Users/nicolekeeney/github_repos/icesat2-book-data/'\n",
    "\n",
    "#path to monthly gridded ICESat-2 data\n",
    "IS2_path = localDirectory + 'cpom_thickness/' \n",
    "\n",
     "#path to NSIDC monthly sea ice concentration data\n",
    "SIC_path = localDirectory + 'SIC/'\n",
    "\n",
    "#path to NSIDC region mask for the Arctic\n",
    "regionMask_path = localDirectory + 'regionMask/'\n",
    "\n",
    "#path to ERA5 climate reanalysis data \n",
    "ERA5_filename = 'ERA5-temp-and-rad.nc'\n",
    "ERA5_path = localDirectory  + 'ERA5/' + ERA5_filename #path to a file, not a directory\n",
    "\n",
    "#path to PIOMAS data\n",
    "PIOMAS_path = localDirectory + 'PIOMAS_monthly_thickness/'\n",
    "\n",
    "#path to NSIDC sea ice drift data \n",
    "drift_path = localDirectory + 'drifts/'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load in data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Set desired date range for data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define a function to get winter date range \n",
     "Since we are interested in winter sea ice growth, we restrict the datasets to winter months only. Winter is defined here as November, December, January, February, March, and April. \n",
    " - This function is also defined in the mapping notebook "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def getWinterDateRange(start_year, end_year): \n",
    "    \"\"\" Gets date range for winter season/s\n",
    "    Args: \n",
    "        start_year (int): start year \n",
    "        end_year (int): end year \n",
    "        \n",
    "    Returns: \n",
     "        winters (list): list of dates for all winter seasons in the input range (i.e. ['1980-11','1980-12','1981-01',\n",
     "         '1981-02','1981-03','1981-04'])\n",
    "    \"\"\"\n",
    "    winters = []\n",
    "    for year in range(start_year, end_year, 1):\n",
    "        winters += pd.date_range(start = str(year) + '-11', end = str(year + 1) + '-04', freq = 'MS')\n",
    "    return winters"
   ]
  },
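  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the function above; the snippet repeats the function definition so it runs standalone:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def getWinterDateRange(start_year, end_year):\n",
    "    winters = []\n",
    "    for year in range(start_year, end_year):\n",
    "        winters += pd.date_range(start = str(year) + '-11', end = str(year + 1) + '-04', freq = 'MS')\n",
    "    return winters\n",
    "\n",
    "winters = getWinterDateRange(2018, 2020)\n",
    "print(len(winters))                  # 12: two winters of six months each\n",
    "print(winters[0].strftime('%Y-%m'))  # 2018-11\n",
    "```"
   ]
  },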
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Call function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "startYear = 2018\n",
    "endYear = 2020\n",
    "winters = getWinterDateRange(startYear, endYear) #get date range for winter 18-19 and winter 19-20"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### ICESat-2 monthly gridded sea ice data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define function to load data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def getIS2Data(dataPath, dates): \n",
    "    \"\"\"Gets ICESat-2 data for provided date range\n",
    "    \n",
    "    Args: \n",
    "        dataPath (str): path to local directory of ICESat-2 data \n",
    "        dates (list): pandas Timestamp objects generated by getWinterDateRange\n",
    "        \n",
    "    Returns: \n",
     "        is2 (xarray dataset): ICESat-2 data, or None if no file exists for the input date range\n",
    "    \"\"\"\n",
    "    is2List = [] #empty list for compiling xarray DataArray objects\n",
    "    for date in dates: \n",
    "        try:\n",
    "            filename = glob(dataPath+ 'IS2*' + date.strftime('%y') + date.strftime('%m') + '*.nc')[0]\n",
     "        except IndexError: \n",
     "            print('Cannot find file for ' + date.strftime('%Y-%m') + '; check date range and filepath')\n",
    "            return None\n",
    "        is2 = xr.open_dataset(filename)\n",
    "        is2 = is2.assign_coords({'time': date})\n",
    "        is2List.append(is2)\n",
    "    is2Data = xr.concat(is2List, dim = 'time') #concatenate all DataArray objects into a single DataArray\n",
    "    \n",
    "    return is2Data"
   ]
  },
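  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the glob pattern built inside the loop looks like this (a sketch with a hypothetical date and a made-up filename; real filenames depend on your local download):\n",
    "\n",
    "```python\n",
    "import fnmatch\n",
    "import pandas as pd\n",
    "\n",
    "date = pd.Timestamp('2018-11-01')\n",
    "pattern = 'IS2*' + date.strftime('%y') + date.strftime('%m') + '*.nc'\n",
    "print(pattern)  # IS2*1811*.nc\n",
    "\n",
    "# hypothetical filename matching the assumed naming convention\n",
    "print(fnmatch.fnmatch('IS2_gridded_201811.nc', pattern))  # True\n",
    "```"
   ]
  },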
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Call function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "is2 = getIS2Data(IS2_path, winters)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Clean data and extract useful variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#drop projection variable \n",
    "is2 = is2.drop('projection')\n",
    "\n",
    "#get lat and lon\n",
    "is2Lats = is2.latitude.isel(time = 0).values\n",
    "is2Lons = is2.longitude.isel(time = 0).values\n",
    "is2LonsAttrs = is2.longitude.attrs \n",
    "is2LatsAttrs = is2.latitude.attrs\n",
    "\n",
    "#assign lat and lon as coordinates to dataset\n",
    "is2 = is2.assign_coords(coords = {'latitude': (('x','y'), is2Lats), 'longitude': (('x','y'), is2Lons)})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### NSIDC sea ice concentration data "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define function to load data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def getSICData(dataPath, dates): \n",
    "    \"\"\"Gets sea ice concentration data for provided date range\n",
    "    \n",
    "    Args: \n",
    "        dataPath (str): path to local directory of sea ice concentration data \n",
    "        dates (list): pandas Timestamp objects generated by getWinterDateRange\n",
    "        \n",
    "    Returns: \n",
     "        sicData (xarray dataset): sea ice concentration data, or None if no file exists for the input date range\n",
    "    \"\"\"\n",
    "    sicList = [] #empty list for compiling xarray DataArray objects\n",
    "    for date in dates: \n",
    "        try:\n",
    "            filename = glob(dataPath+ 'seaice_conc_monthly*' + date.strftime('%Y') + date.strftime('%m') + '*.nc')[0]\n",
     "        except IndexError: \n",
     "            print('Cannot find file for ' + date.strftime('%Y-%m') + '; check date range and filepath')\n",
    "            return None\n",
    "        dropVariables = ['goddard_merged_seaice_conc_monthly','goddard_bt_seaice_conc_monthly', 'goddard_nt_seaice_conc_monthly']\n",
    "        sicList.append(xr.open_dataset(filename, drop_variables = dropVariables))\n",
    "    sicData = xr.concat(sicList , dim = 'time', combine_attrs = 'override', data_vars = 'minimal') #concatenate all DataArray objects into a single DataArray\n",
    "    \n",
    "    return sicData"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Call function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sic = getSICData(SIC_path, winters)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Clean data\n",
    "Clean data by removing flagged data and filling the pole hole. We assume concentration is 100% within the pole hole because we are only looking at winter data. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "#remove flagged data (flag values are coded above 1) \n",
     "sic_monthly_cdr = sic['seaice_conc_monthly_cdr'].where(sic['seaice_conc_monthly_cdr'] <= 1)\n",
     "\n",
     "#fill pole hole as 100% concentration\n",
     "sic_monthly_cdr = sic_monthly_cdr.where(sic['latitude'] < 88, 1)\n",
    "\n",
    "#reassign variable\n",
    "sic = sic.assign(seaice_conc_monthly_cdr = sic_monthly_cdr)\n",
    "\n",
    "#reassign dimensions \n",
    "seaice_conc_monthly_cdr = xr.DataArray(data = sic['seaice_conc_monthly_cdr'], dims = ['time', 'x', 'y'], coords = {'time': winters, 'latitude': (('x','y'), is2Lats), 'longitude': (('x','y'), is2Lons)}) \n",
    "\n",
    "#attributes from entire NSIDC data to maintain\n",
    "desiredSICAttrs = ['title', 'references', 'contributor_name', 'license', 'summary'] #attributes to maintain from entire sea ice concentration dataset\n",
    "SICDatasetAttrs = {x:sic.attrs[x] for x in desiredSICAttrs}\n",
    "seaice_conc_monthly_cdr = seaice_conc_monthly_cdr.assign_attrs(SICDatasetAttrs)\n",
    "\n",
    "#add to is2 dataset \n",
    "is2['seaice_conc_monthly_cdr'] = seaice_conc_monthly_cdr"
   ]
  },
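  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two `where` calls above follow numpy/xarray masking semantics: `where(cond)` keeps values where the condition holds and sets the rest to NaN, while `where(cond, other)` substitutes `other` instead. A minimal numpy sketch with made-up values:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "conc = np.array([0.8, 2.51, 0.3, 0.6])  # 2.51 stands in for a coded flag value\n",
    "lat = np.array([70.0, 60.0, 89.0, 75.0])\n",
    "\n",
    "conc = np.where(conc <= 1, conc, np.nan)  # mask flagged values (> 1) as NaN\n",
    "conc = np.where(lat < 88, conc, 1.0)      # fill the pole hole (>= 88N) with 100%\n",
    "\n",
    "print(conc)  # 0.8, NaN (flag masked), 1.0 (pole hole filled), 0.6\n",
    "```"
   ]
  },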
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### NSIDC region mask for the Arctic data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define function to load data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def getRegionMask(dataPath): \n",
    "    \"\"\"Gets NSIDC region mask for map projection\n",
    "    \n",
    "     Args:\n",
    "         dataPath (str): path to NSIDC region mask\n",
    "         \n",
    "     Returns: \n",
    "         shapedMask (numpy array): NSIDC arctic region mask gridded to shape [448, 304]\n",
    "         shapedLons (numpy array): longitudes gridded to shape [448, 304]\n",
    "         shapedLats (numpy array): latitudes gridded to shape [448, 304] \n",
    "    \"\"\" \n",
    "    gridShape = [448, 304] #shape of grid to reshape data to \n",
    "    \n",
    "    regionMask = open(dataPath + '/sect_fixed_n.msk', 'rb') #open region mask \n",
    "    shapedMask = np.reshape(np.fromfile(file = regionMask, dtype='uint8'), gridShape) #reshape mask to grid shape\n",
    "    \n",
    "    maskLons = open(dataPath + '/psn25lons_v3.dat', 'rb') #open region mask longitudes\n",
    "    maskLats = open(dataPath + '/psn25lats_v3.dat', 'rb') #open region mask latitudes\n",
    "    shapedLons = np.reshape(np.fromfile(file = maskLons, dtype='<i4')/100000., gridShape) #reshape longitudes to grid shape\n",
    "    shapedLats = np.reshape(np.fromfile(file = maskLats, dtype='<i4')/100000., gridShape) #reshape latitudes to grid shape\n",
    "\n",
    "    return shapedMask, shapedLons, shapedLats"
   ]
  },
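  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The mask and coordinate files are flat binary arrays, so the function simply reads and reshapes them. A standalone sketch using synthetic bytes in place of the real files:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "gridShape = [448, 304]\n",
    "\n",
    "# synthetic stand-in for sect_fixed_n.msk: one uint8 per grid cell\n",
    "raw = np.zeros(448 * 304, dtype = 'uint8').tobytes()\n",
    "shapedMask = np.frombuffer(raw, dtype = 'uint8').reshape(gridShape)\n",
    "print(shapedMask.shape)  # (448, 304)\n",
    "\n",
    "# the psn25 lat/lon files store little-endian int32 in units of 1e-5 degrees\n",
    "rawLats = np.full(448 * 304, 7050000, dtype = '<i4').tobytes()\n",
    "shapedLats = np.frombuffer(rawLats, dtype = '<i4').reshape(gridShape) / 100000.\n",
    "print(shapedLats[0, 0])  # 70.5\n",
    "```"
   ]
  },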
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Call function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "regionMask, maskLons, maskLats = getRegionMask(regionMask_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define descriptive information\n",
    "These variables will be used later for creating the dataset. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#coords and attributes for Region Mask\n",
    "regionMaskCoords = {'region_mask': (('x','y'), regionMask)}\n",
    "regionMaskKeys = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 21])\n",
    "regionMaskLabels = np.array(['non-region oceans', 'Sea of Okhotsk and Japan','Bering Sea','Hudson Bay','Gulf of St. Lawrence',\n",
    "                    'Baffin Bay, Davis Strait & Labrador Sea','Greenland Sea', 'Barents Seas','Kara Sea','Laptev Sea','East Siberian Sea',\n",
    "                    'Chukchi Sea','Beaufort Sea','Canadian Archipelago','Arctic Ocean','Land','Coast'])\n",
    "regionMaskAttrs = {'description': 'NSIDC region mask for the Arctic', 'keys': regionMaskKeys, 'labels' : regionMaskLabels, 'note': 'keys and labels ordered to match by index'}"
   ]
  },
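  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because keys and labels are matched by index, a small dictionary makes the translation from mask value to region name explicit (arrays repeated here so the snippet runs standalone):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "regionMaskKeys = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 21])\n",
    "regionMaskLabels = np.array(['non-region oceans', 'Sea of Okhotsk and Japan', 'Bering Sea', 'Hudson Bay',\n",
    "                             'Gulf of St. Lawrence', 'Baffin Bay, Davis Strait & Labrador Sea', 'Greenland Sea',\n",
    "                             'Barents Seas', 'Kara Sea', 'Laptev Sea', 'East Siberian Sea', 'Chukchi Sea',\n",
    "                             'Beaufort Sea', 'Canadian Archipelago', 'Arctic Ocean', 'Land', 'Coast'])\n",
    "\n",
    "keyToLabel = dict(zip(regionMaskKeys, regionMaskLabels))\n",
    "print(keyToLabel[13])  # Beaufort Sea\n",
    "print(keyToLabel[15])  # Arctic Ocean\n",
    "```"
   ]
  },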
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### ERA5 climate reanalysis data "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ERA5 = xr.open_dataset(ERA5_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Clean data\n",
     "Clean data by removing unnecessary variables and converting temperature to Celsius"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "#remove unneeded expver variable \n",
     "#for more info on the expver variable, see https://confluence.ecmwf.int/pages/viewpage.action?pageId=173385064\n",
     "ERA5 = ERA5.sel(expver = 1)\n",
     "ERA5 = ERA5.drop('expver')\n",
     "\n",
     "#select data from past two winters \n",
     "ERA5 = ERA5.sel(time = getWinterDateRange(2018, 2020))\n",
     "ERA5 = ERA5.assign_coords(time = getWinterDateRange(2018, 2020))\n",
     "\n",
     "#convert t2m temp from Kelvin to Celsius \n",
     "tempCelsius = ERA5['t2m'] - 273.15\n",
     "tempCelsius.attrs['units'] = 'C' #change units attribute to C (Celsius)\n",
     "tempCelsius.attrs['long_name'] = '2 meter temperature'\n",
     "ERA5 = ERA5.assign(t2m = tempCelsius) #add to dataset as a new data variable\n",
    "\n",
    "#add descriptive attributes \n",
    "ERA5.attrs = {'description': 'era5 monthly averaged data on single levels from 1979 to present', \n",
    "              'website': 'https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels-monthly-means?tab=overview', \n",
    "              'contact': 'copernicus-support@ecmwf.int',\n",
    "             'citation': 'Copernicus Climate Change Service (C3S) (2017): ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate . Copernicus Climate Change Service Climate Data Store (CDS), July 2020. https://cds.climate.copernicus.eu/cdsapp#!/home'}\n",
    "\n",
    "#restrict ERA5 data to the Arctic \n",
    "ERA5 = ERA5.where(ERA5.latitude > 50)"
   ]
  },
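  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The temperature conversion above is a constant offset from Kelvin; a one-line sketch with sample values:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "t2m_kelvin = np.array([273.15, 263.15, 253.15])  # example 2 m temperatures in K\n",
    "t2m_celsius = t2m_kelvin - 273.15                # Kelvin to Celsius\n",
    "print(t2m_celsius)  # approximately 0, -10, -20 degrees C\n",
    "```"
   ]
  },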
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### PIOMAS sea ice thickness data "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define function to load the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def getPIOMASData(dataPath, startYear = 1978, endYear = 2020): \n",
    "    \"\"\"Gets PIOMAS mean monthly sea ice thickness data for an input time range. \n",
    "    \n",
    "    Args: \n",
    "        dataPath (str): path to local directory of PIOMAS data \n",
     "        startYear (int): year to start loading data from (default 1978)\n",
     "        endYear (int): year to stop loading data from (default 2020)\n",
    "    \n",
    "    Returns: \n",
    "        PIOMAS_data (xarray DataArray): PIOMAS data with descriptive coordinates and attributes\n",
    "        \n",
    "    Note: last available PIOMAS data was from July 2020 at the time of creation of this function\n",
    "    \"\"\"\n",
    "    dataList = [] #empty list for compiling data\n",
    "\n",
    "    for year in range(startYear, endYear + 1, 1):     \n",
    "        data = open(dataPath + '/heff.H' + str(year), 'rb') \n",
     "        if year == 2020: #special reshaping for 2020 since we don't have a full year of data\n",
    "            period = 7\n",
    "        else:\n",
    "            period = 12\n",
    "        dataList += list(np.fromfile(file = data, dtype='f').reshape([period, 120, 360]))\n",
    "    \n",
    "    #add latitude and longitude \n",
    "    gridP = np.loadtxt(dataPath + 'grid.dat.txt')\n",
    "    lonsP = gridP[0:4320, :].flatten()\n",
    "    lonsP = np.reshape(lonsP, [120,360])\n",
    "    latsP = gridP[4320:, :].flatten()\n",
    "    latsP = np.reshape(latsP, [120,360])\n",
    "    \n",
    "    #load dataList as an xarray DataArray with descriptive attributes and coordinates\n",
    "    time = pd.date_range(start = str(startYear), end = str(endYear) + '-07', freq = 'MS')\n",
    "    PIOMAS_attrs = {'units': 'meters', 'long_name': 'PIOMAS sea ice thickness', 'data description': 'PIOMAS monthly mean sea ice thickness', 'citation': 'Zhang, J.L. and D.A. Rothrock, “Modeling global sea ice with a thickness and enthalpy distribution model in generalized curvilinear coordinates“, Mon. Weather Rev., 131, 845-861, 2003'}\n",
    "    PIOMAS_data = xr.DataArray(dataList, dims = ['time','x','y'], coords = {'time': time, 'longitude': (('x','y'), lonsP), 'latitude': (('x','y'), latsP)}, attrs = PIOMAS_attrs)\n",
    "    \n",
    "    return PIOMAS_data"
   ]
  },
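  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the reshape and time-index logic with synthetic data standing in for the heff binaries:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# synthetic stand-in for one full year of heff data: 12 monthly 120x360 grids\n",
    "flat = np.zeros(12 * 120 * 360, dtype = 'f')\n",
    "monthly = flat.reshape([12, 120, 360])\n",
    "print(monthly.shape)  # (12, 120, 360)\n",
    "\n",
    "# matching time axis for 1978 through July 2020, as in the function above\n",
    "time = pd.date_range(start = '1978', end = '2020-07', freq = 'MS')\n",
    "print(len(time))  # 511 months: 42 full years plus January-July 2020\n",
    "```"
   ]
  },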
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Call function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "piomasData = getPIOMASData(PIOMAS_path, startYear = 1978, endYear = 2020)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### NSIDC sea ice drift data "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define function to load the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def getDriftData(dataPath):\n",
    "    \"\"\"Gets weekly NSIDC sea ice drift data for the Arctic, resamples monthly, and returns a dataset\n",
    "    \n",
    "    Args: \n",
    "        dataPath (str): path to local directory of weekly drift data \n",
    "\n",
    "    Returns: \n",
    "        monthlyDrifts (xarray DataArray): sea ice drift data, resampled to monthly means, with descriptive coordinates and attributes\n",
    "        \n",
     "    Note: This function combines two NSIDC drift datasets in order to get the most recent data: the 1978-2018 dataset and the 2019-present quicklook dataset, maintaining attributes of the 1978-2018 dataset\n",
    "    \"\"\"\n",
    "    def get_uv_from_xy(xdrift, ydrift, lon):\n",
    "        \"\"\"convert the drift vectors to zonal/meridional, and calculate magnitude\n",
    "        \"\"\"\n",
    "        alpha = lon*np.pi/180. #convert longitudes to radians \n",
    "        uvelT = ydrift*np.sin(alpha) + xdrift*np.cos(alpha)\n",
    "        vvelT = ydrift*np.cos(alpha) - xdrift*np.sin(alpha)\n",
    "        vectorMag = np.sqrt(uvelT**2 + vvelT**2)\n",
    "        return uvelT, vvelT, vectorMag\n",
    "    \n",
    "    #combine nc weekly files into a single xarray dataset\n",
    "    files = [xr.open_dataset(dataPath + f) for f in os.listdir(dataPath) if os.path.isfile(dataPath + f) and f.endswith('.nc')]\n",
    "    weeklyDrifts = xr.concat(files, dim = 'time')\n",
    "\n",
    "    #get transformed u,v variables and add to dataset\n",
    "    uvelT, vvelT, vectorMag = get_uv_from_xy(weeklyDrifts.u, weeklyDrifts.v, weeklyDrifts.longitude)\n",
    "    weeklyDrifts = weeklyDrifts.assign(drifts_uT = uvelT, drifts_vT = vvelT, drifts_magnitude = vectorMag)\n",
    "    \n",
    "    #resample to get monthly data \n",
    "    monthlyDrifts = weeklyDrifts.resample(time='MS', keep_attrs = True).mean()\n",
    "\n",
    "    #convert to same time format as ICESat-2 \n",
    "    monthlyDrifts = monthlyDrifts.assign_coords(time = [pd.to_datetime(date.strftime('%m-%d-%Y')) for date in monthlyDrifts.time.values])\n",
    "    \n",
    "    #add attributes\n",
    "    monthlyDrifts.drifts_uT.attrs = {'description':'along-x component of the ice motion (u variable) converted to zonal/meridional', 'units':'cm/s', 'long_name':'sea ice x velocity'}\n",
    "    monthlyDrifts.drifts_vT.attrs = {'description':'along-y component of the ice motion (v variable) converted to zonal/meridional', 'units':'cm/s', 'long_name':'sea ice y velocity'}\n",
    "    monthlyDrifts.drifts_magnitude.attrs = {'long_name': 'drift vector magnitude', 'units':'cm/s'}\n",
     "    monthlyDrifts.attrs['njkeeney comment'] = 'drift dataset from 1978-2018 was combined with quicklook drift dataset from 2019-present, maintaining attributes of the 1978-2018 dataset'\n",
    "    monthlyDrifts.attrs['citation'] = 'Tschudi, M., W. N. Meier, J. S. Stewart, C. Fowler, and J. Maslanik. 2019. Polar Pathfinder Daily 25 km EASE-Grid Sea Ice Motion Vectors, Version 4. Weekly sea ice motion. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. doi: https://doi.org/10.5067/INAWUWO7QH7B. July 2020.'\n",
    "    for var in monthlyDrifts.data_vars: \n",
    "        if var not in ['drifts_magnitude','drifts_uT', 'drifts_vT']: \n",
    "            monthlyDrifts[var].attrs = weeklyDrifts[var].attrs #resampling function causes attributes to be lost, so add them back here\n",
    "    \n",
    "    return monthlyDrifts"
   ]
  },
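  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The x/y to zonal/meridional rotation inside the function preserves drift speed; a numpy check at a single hypothetical grid point:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "xdrift, ydrift, lon = 3.0, 4.0, 90.0  # cm/s components and a sample longitude\n",
    "\n",
    "alpha = lon * np.pi / 180.  # convert longitude to radians\n",
    "uvelT = ydrift * np.sin(alpha) + xdrift * np.cos(alpha)\n",
    "vvelT = ydrift * np.cos(alpha) - xdrift * np.sin(alpha)\n",
    "vectorMag = np.sqrt(uvelT ** 2 + vvelT ** 2)\n",
    "\n",
    "# the rotation leaves the speed unchanged\n",
    "print(round(float(vectorMag), 6))  # 5.0\n",
    "```"
   ]
  },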
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Call function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "drifts = getDriftData(drift_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Restrict data to winter months"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "drifts = drifts.sel(time = winters)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Interpolate missing ICESat-2 data \n",
     "Interpolate missing ICESat-2 data using the [scipy.interpolate.griddata](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html) function and add the results as data variables to the dataset. Because ICESat-2 doesn't provide full monthly coverage, interpolating fills missing grid cells with a best guess based on surrounding data. This helps avoid sampling biases when performing time series analyses, with the caveat that this interpolation method is subjective. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#list of variables to interpolate\n",
    "IS2VarList = ['ice_type','ice_thickness','snow_depth','freeboard','ice_thickness_unc','snow_density','ice_density']\n",
    "\n",
    "#go through variables in list and add as new data variables to ICESat-2 dataset\n",
    "for varStr in IS2VarList:\n",
    "    \n",
    "    #empty list to store monthly interpolated data \n",
    "    varFilled = []\n",
    "    \n",
    "    for month in range(len(is2.time)): \n",
    "        #current month of data \n",
    "        monthlyVar = is2[varStr].values[month]\n",
    "        \n",
    "        #additional condition for interpolating ice thickness\n",
    "        if varStr == 'ice_thickness': \n",
     "            #set ice_thickness to zero where sea ice concentration is at or below 15%\n",
    "            monthlyVar[seaice_conc_monthly_cdr.values[month] <= 0.15] = 0\n",
    "        \n",
    "        #conditions for cells to interpolate\n",
    "        monthlyVar = ma.masked_where((np.isnan(monthlyVar)) & (regionMask != 20) & (regionMask != 14) & (seaice_conc_monthly_cdr.values[month] > 0.15), monthlyVar)\n",
    "        \n",
    "        #append interpolated data to list \n",
    "        varFilled.append(scipy.interpolate.griddata((is2Lons[~monthlyVar.mask], is2Lats[~monthlyVar.mask]), \n",
    "                    monthlyVar[~monthlyVar.mask].flatten(),(is2Lons, is2Lats), method = 'nearest'))\n",
    "    \n",
    "    #convert varFilled to a DataArray object \n",
    "    varFilledDataArray = xr.DataArray(data = varFilled, dims = ['time', 'x', 'y'], attrs = is2[varStr].attrs)\n",
    "    varFilledDataArray.attrs['note'] = 'interpolated from original data'\n",
    "    \n",
    "    #add as new data variable to ICESat-2 dataset\n",
    "    is2[varStr + '_filled'] = varFilledDataArray"
   ]
  },
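  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy example of the `scipy.interpolate.griddata` nearest-neighbor call used above, on a handful of made-up points:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import scipy.interpolate\n",
    "\n",
    "# known sample locations and their values\n",
    "points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])\n",
    "values = np.array([10.0, 20.0, 30.0])\n",
    "\n",
    "# fill two query locations with the value of the nearest known sample\n",
    "queries = np.array([[0.1, 0.1], [0.9, 0.2]])\n",
    "filled = scipy.interpolate.griddata(points, values, queries, method = 'nearest')\n",
    "print(filled)  # nearest-neighbor values: 10.0 and 20.0\n",
    "```"
   ]
  },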
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Add descriptive attributes & coordinates to all ICESat-2 variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#assign ICESat-2 dataset attributes to all variables \n",
    "for var in is2.data_vars:\n",
    "    is2[var] = is2[var].assign_attrs(is2.attrs)\n",
    "\n",
    "#drop lat and lon as data variables and add as coordinate values \n",
    "is2 = is2.assign_coords(coords = {'latitude': (('x','y'), is2Lats), 'longitude': (('x','y'), is2Lons)})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Regrid additional datasets to ICESat-2 grid \n",
     "In order to merge the ERA5 and PIOMAS data into the same dataset as ICESat-2, they need to be on the same grid. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define regridding function \n",
    "This function will grid data to the ICESat-2 grid using the scipy.interpolate.griddata function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "hide-cell"
    ]
   },
   "outputs": [],
   "source": [
     "#used in code development; progress bar output hidden in the jupyter book \n",
    "def progressBar(i, tot): \n",
    "    \"\"\"Display a progress bar inside a for loop \n",
    "    Args: \n",
    "        i (int): iteration number\n",
    "        tot (int): total number of iterations\n",
    "    \"\"\"\n",
    "    j = (i + 1) / tot\n",
    "    sys.stdout.write('\\r [%-20s] %d%% complete' % ('='*int(20*j), 100*j))\n",
    "    sys.stdout.flush()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def regridToICESat2(dataArrayNEW, xptsNEW, yptsNEW, xptsIS2, yptsIS2):  \n",
    "    \"\"\" Regrid new data to ICESat-2 grid \n",
    "    \n",
    "    Args: \n",
    "        dataArrayNEW (xarray DataArray): DataArray to be gridded to ICESat-2 grid \n",
    "        xptsNEW (numpy array): x-values of dataArrayNEW projected to ICESat-2 map projection \n",
    "        yptsNEW (numpy array): y-values of dataArrayNEW projected to ICESat-2 map projection \n",
    "        xptsIS2 (numpy array): ICESat-2 longitude projected to ICESat-2 map projection\n",
    "        yptsIS2 (numpy array): ICESat-2 latitude projected to ICESat-2 map projection\n",
    "    \n",
    "    Returns: \n",
    "        gridded (numpy array): data regridded to ICESat-2 map projection\n",
    "    \n",
    "    \"\"\"\n",
    "    gridded = []\n",
    "    for i in range(len(dataArrayNEW.values)): \n",
    "        monthlyGridded = scipy.interpolate.griddata((xptsNEW.flatten(),yptsNEW.flatten()), dataArrayNEW.values[i].flatten(), (xptsIS2, yptsIS2), method = 'nearest')\n",
    "        gridded.append(monthlyGridded)\n",
    "        progressBar(i, len(dataArrayNEW.values))\n",
    "    return np.array(gridded)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Regrid ERA5 data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Choose data variables of interest \n",
     "ERA5 provides climate reanalysis data for many different variables. Here, choose the data variables to maintain in the final gridded data product. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ERA5Vars = ['t2m','msdwlwrf']\n",
    "print('chosen variables: ' + '%s' % ', '.join(map(str, [ERA5[var].attrs['long_name'] for var in ERA5Vars])))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Regrid ERA5 data & add to ICESat-2 dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "remove-output"
    ]
   },
   "outputs": [],
   "source": [
    "#map projection from ICESat-2 data (can be viewed in is2.projection.attrs['srid'])\n",
    "IS2_proj = 'EPSG:3411'\n",
    "\n",
    "#initialize map projection and project data to it\n",
    "mapProj = pyproj.Proj(\"+init=\" + IS2_proj)\n",
    "xptsERA, yptsERA = mapProj(*np.meshgrid(ERA5.longitude.values, ERA5.latitude.values))\n",
    "xptsIS2, yptsIS2 = mapProj(is2Lons, is2Lats)\n",
    "\n",
    "#grid data\n",
    "for var in ERA5Vars: \n",
    "    #regrid data by calling function\n",
    "    ERA5gridded = regridToICESat2(ERA5[var], xptsERA, yptsERA, xptsIS2, yptsIS2)\n",
    "    \n",
    "    #create xarray DataArray object with descriptive coordinates \n",
    "    ERAArray = xr.DataArray(data = ERA5gridded, dims = ['time', 'x', 'y'], coords = {'latitude': (('x','y'), is2Lats), 'longitude': (('x','y'), is2Lons)})\n",
    "    ERAArray.attrs = ERA5[var].attrs\n",
    "    ERAArray = ERAArray.assign_attrs(ERA5.attrs)\n",
    "\n",
    "    #add to ICESat-2 dataset\n",
    "    is2[var] = ERAArray"
   ]
  },
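  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the projection step above: pyproj maps longitude/latitude points into the EPSG:3411 polar stereographic grid. This sketch uses the modern `Transformer` API rather than the `Proj('+init=...')` call in the cell above, which is deprecated in pyproj 2+:\n",
    "\n",
    "```python\n",
    "import pyproj\n",
    "\n",
    "#transform geographic coordinates (EPSG:4326) to the NSIDC polar stereographic projection (EPSG:3411)\n",
    "transformer = pyproj.Transformer.from_crs('EPSG:4326', 'EPSG:3411', always_xy=True)\n",
    "\n",
    "#the North Pole projects to the grid origin (0, 0)\n",
    "x, y = transformer.transform(-45., 90.)\n",
    "print(x, y)\n",
    "```"
   ]
  },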
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Regrid PIOMAS data "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "remove-output"
    ]
   },
   "outputs": [],
   "source": [
    "#project data to ICESat-2 map projection\n",
    "xptsPIO, yptsPIO = mapProj(piomasData.longitude.values, piomasData.latitude.values)\n",
    "\n",
    "#regrid data by calling function\n",
    "PIOgridded = regridToICESat2(piomasData, xptsPIO, yptsPIO, xptsIS2, yptsIS2)\n",
    "\n",
    "#create xarray DataArray object with descriptive coordinates \n",
    "PIOArray = xr.DataArray(data = PIOgridded, dims = ['time', 'x', 'y'], coords = {'latitude': (('x','y'), is2Lats), 'longitude': (('x','y'), is2Lons)})\n",
    "PIOArray = PIOArray.assign_coords(time = piomasData.time.values)\n",
    "PIOArray = PIOArray.assign_attrs(piomasData.attrs)"
   ]
  },
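  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pattern used above, wrapping each regridded array in an `xarray.DataArray` with two-dimensional latitude/longitude coordinates, can be sketched with made-up 2x2 data:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import xarray as xr\n",
    "\n",
    "#made-up single-month stack of 2x2 gridded data\n",
    "data = np.arange(4.).reshape(1, 2, 2)\n",
    "lats = np.array([[80., 80.], [81., 81.]])\n",
    "lons = np.array([[-45., 0.], [-45., 0.]])\n",
    "\n",
    "#2D coordinates are assigned to the (x, y) dims, as with is2Lats/is2Lons\n",
    "da = xr.DataArray(data = data, dims = ['time', 'x', 'y'], coords = {'latitude': (('x','y'), lats), 'longitude': (('x','y'), lons)})\n",
    "da.attrs['units'] = 'meters' #hypothetical attribute, standing in for e.g. piomasData.attrs\n",
    "print(da.sizes)\n",
    "```"
   ]
  },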
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Regrid NSIDC sea ice drift data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "remove-output"
    ]
   },
   "outputs": [],
   "source": [
    "#project data to ICESat-2 map projection\n",
    "xptsDRIFTS, yptsDRIFTS = mapProj(drifts.longitude.values[0], drifts.latitude.values[0])\n",
    "\n",
    "for var in ['drifts_uT', 'drifts_vT', 'drifts_magnitude']: \n",
    "    #regrid data by calling function\n",
    "    driftsGridded = regridToICESat2(drifts[var], xptsDRIFTS, yptsDRIFTS, xptsIS2, yptsIS2)\n",
    "    \n",
    "    #create xarray DataArray object with descriptive coordinates \n",
    "    driftsArray = xr.DataArray(data = driftsGridded, dims = ['time', 'x', 'y'], coords = {'latitude': (('x','y'), is2Lats), 'longitude': (('x','y'), is2Lons)})\n",
    "    driftsArray.attrs = drifts[var].attrs\n",
    "    driftsArray = driftsArray.assign_attrs(drifts.attrs)\n",
    "\n",
    "    #add to ICESat-2 dataset\n",
    "    is2[var] = driftsArray"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Save regridded PIOMAS file to local directory\n",
    "Add region mask as descriptive coordinates to the PIOMAS regridded DataArray and save file to local directory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#add region mask as coordinate to dataset\n",
    "piomas_to_save = PIOArray.assign_coords(coords = regionMaskCoords)\n",
    "\n",
    "#add descriptive attributes \n",
    "piomas_to_save.region_mask.attrs = regionMaskAttrs\n",
    "\n",
    "#create a dataset\n",
    "piomas_to_save = xr.Dataset(data_vars = {'PIOMAS_ice_thickness': piomas_to_save})\n",
    "\n",
    "#save to local directory as NETCDF4 file \n",
    "filename = 'piomas-regridded-data.nc'\n",
    "piomas_to_save.to_netcdf(path = localDirectory + filename, format = 'NETCDF4', mode = 'w')\n",
    "print(f'File \"{filename}\" saved to directory \"{localDirectory}\"')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Add winter regridded PIOMAS data to ICESat-2 dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#restrict data to same time period as ICESat-2\n",
    "PIOArray = PIOArray.sel(time = winters)\n",
    "\n",
    "#add to ICESat-2 dataset\n",
    "is2['PIOMAS_ice_thickness'] = PIOArray"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compile datasets into a single Dataset "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#add region mask as coordinate to dataset\n",
    "is2 = is2.assign_coords(coords = regionMaskCoords)\n",
    "\n",
    "#add descriptive attributes \n",
    "is2.region_mask.attrs = regionMaskAttrs\n",
    "is2.longitude.attrs = is2LonsAttrs\n",
    "is2.latitude.attrs = is2LatsAttrs\n",
    "is2.attrs = {'description':'data used in nicolejkeeney ICESat-2 jupyter book', 'note': 'see individual data variables for references', 'creation_date': str(date.today())}\n",
    "\n",
    "print(is2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Save dataset to your local machine \n",
    "We will use this dataset in other notebooks to plot and analyze the data. "
   ]
  },
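  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal round-trip sketch of the `to_netcdf` save used below, on a toy dataset (the scipy engine is used here so the sketch runs without the netCDF4 library; the notebook itself writes NETCDF4):\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "import numpy as np\n",
    "import xarray as xr\n",
    "\n",
    "#toy dataset standing in for the compiled is2 dataset\n",
    "ds = xr.Dataset(data_vars = {'ice_thickness': (('x',), np.array([1.5, 2.0]))})\n",
    "ds.attrs['description'] = 'toy example'\n",
    "\n",
    "#write to a temporary file and re-open it\n",
    "path = os.path.join(tempfile.gettempdir(), 'toy-example.nc')\n",
    "ds.to_netcdf(path = path, engine = 'scipy')\n",
    "reopened = xr.open_dataset(path, engine = 'scipy')\n",
    "print(reopened.ice_thickness.values) #[1.5 2. ]\n",
    "```"
   ]
  },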
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "filename = 'icesat2-book-data.nc'\n",
    "is2.to_netcdf(path = localDirectory + filename, format = 'NETCDF4', mode = 'w')\n",
    "print(f'File \"{filename}\" saved to directory \"{localDirectory}\"')\n",
    "is2.close()"
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "Tags",
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
