{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-24T14:13:48.731159Z",
     "start_time": "2020-04-24T14:13:48.725142Z"
    }
   },
   "source": [
    "<style>\n",
    "pre {\n",
    " white-space: pre-wrap !important;\n",
    "}\n",
    ".table-striped > tbody > tr:nth-of-type(odd) {\n",
    "    background-color: #f9f9f9;\n",
    "}\n",
    ".table-striped > tbody > tr:nth-of-type(even) {\n",
    "    background-color: white;\n",
    "}\n",
    ".table-striped td, .table-striped th, .table-striped tr {\n",
    "    border: 1px solid black;\n",
    "    border-collapse: collapse;\n",
    "    margin: 1em 2em;\n",
    "}\n",
    ".rendered_html td, .rendered_html th {\n",
    "    text-align: left;\n",
    "    vertical-align: middle;\n",
    "    padding: 4px;\n",
    "}\n",
    "</style>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# I/O Kung-Fu: get your data in and out of [Vaex](https://github.com/vaexio/vaex)\n",
    "\n",
    "If you want to try out this notebook with a live Python kernel, use mybinder:\n",
    "\n",
    "<a class=\"reference external image-reference\" href=\"https://mybinder.org/v2/gh/vaexio/vaex/latest?filepath=docs%2Fsource%2Fexample_io.ipynb\"><img alt=\"https://mybinder.org/badge_logo.svg\" src=\"https://mybinder.org/badge_logo.svg\" width=\"150px\"></a>\n",
    "\n",
    "\n",
    "## Data input\n",
    "\n",
    "Every project starts with reading in some data. Vaex supports several data sources:\n",
    "\n",
    "- Binary file formats:\n",
    " \n",
    "    - [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format#HDF5)\n",
    "    - [Apache Arrow](https://arrow.apache.org/)\n",
    "    - [Apache Parquet](https://parquet.apache.org/)\n",
    "    - [FITS](https://en.wikipedia.org/wiki/FITS)\n",
    "     \n",
    "- Text based file formats:\n",
    " \n",
    "    - [CSV](https://en.wikipedia.org/wiki/Comma-separated_values)\n",
    "    - [ASCII](https://en.wikipedia.org/wiki/Text_file)\n",
    "    - [JSON](https://www.json.org/json-en.html)\n",
    "     \n",
    "- In-memory data representations:\n",
    " \n",
    "    - [pandas](https://pandas.pydata.org/) DataFrames and everything that pandas can read\n",
    "    - [Apache Arrow](https://arrow.apache.org/) Tables\n",
    "    - [numpy](https://numpy.org/) arrays\n",
    "    - Python dictionaries\n",
    "    - Single row DataFrames\n",
    "\n",
    "- Cloud support:\n",
    "    - Amazon Web Services S3 \n",
    "    - Google Cloud Storage\n",
    "\n",
    "- Extras:\n",
    "    - Aliases\n",
    "     \n",
    "The following examples show the best practices of getting your data in Vaex.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Binary file formats\n",
    "\n",
    "If your data is already in one of the supported binary file formats (HDF5, Apache Arrow, Apache Parquet, FITS), opening it with Vaex rather simple:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:16.281410Z",
     "start_time": "2020-11-10T17:48:14.042436Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>name   </th><th>age  </th><th>city       </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;John&#x27; </td><td>&#x27;17&#x27; </td><td>&#x27;Edinburgh&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Sally&#x27;</td><td>&#x27;33&#x27; </td><td>&#x27;Groningen&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  name     age    city\n",
       "  0  'John'   '17'   'Edinburgh'\n",
       "  1  'Sally'  '33'   'Groningen'"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import vaex\n",
    "\n",
    "# Reading a HDF5 file\n",
    "df_names = vaex.open('./data/io/sample_names_1.hdf5')\n",
    "df_names"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:16.827210Z",
     "start_time": "2020-11-10T17:48:16.286615Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>fruit   </th><th style=\"text-align: right;\">  amount</th><th>origin   </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;mango&#x27; </td><td style=\"text-align: right;\">       5</td><td>&#x27;Malaya&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;banana&#x27;</td><td style=\"text-align: right;\">      10</td><td>&#x27;Ecuador&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;orange&#x27;</td><td style=\"text-align: right;\">       7</td><td>&#x27;Spain&#x27;  </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  fruit       amount  origin\n",
       "  0  'mango'          5  'Malaya'\n",
       "  1  'banana'        10  'Ecuador'\n",
       "  2  'orange'         7  'Spain'"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Reading an arrow file\n",
    "df_fruits = vaex.open('./data/io/sample_fruits.arrow')\n",
    "df_fruits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Opening such data is instantenous regardless of the file size on disk: Vaex will just memory-map the data instead of reading it in memory. This is the optimal way of working with large datasets that are larger than available RAM.\n",
    "\n",
    "If your data is contained within multiple files, one can open them all simultaneously like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:16.888221Z",
     "start_time": "2020-11-10T17:48:16.830299Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>name    </th><th>age  </th><th>city       </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;John&#x27;  </td><td>&#x27;17&#x27; </td><td>&#x27;Edinburgh&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Sally&#x27; </td><td>&#x27;33&#x27; </td><td>&#x27;Groningen&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Maria&#x27; </td><td>&#x27;23&#x27; </td><td>&#x27;Caracas&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Monica&#x27;</td><td>&#x27;55&#x27; </td><td>&#x27;New York&#x27; </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  name      age    city\n",
       "  0  'John'    '17'   'Edinburgh'\n",
       "  1  'Sally'   '33'   'Groningen'\n",
       "  2  'Maria'   '23'   'Caracas'\n",
       "  3  'Monica'  '55'   'New York'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_names_all = vaex.open('./data/io/sample_names_*.hdf5')\n",
    "df_names_all"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alternatively, one can use the `open_many` method to pass a list of files to open:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:16.939501Z",
     "start_time": "2020-11-10T17:48:16.890565Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>name    </th><th>age  </th><th>city       </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;John&#x27;  </td><td>&#x27;17&#x27; </td><td>&#x27;Edinburgh&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Sally&#x27; </td><td>&#x27;33&#x27; </td><td>&#x27;Groningen&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Maria&#x27; </td><td>&#x27;23&#x27; </td><td>&#x27;Caracas&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Monica&#x27;</td><td>&#x27;55&#x27; </td><td>&#x27;New York&#x27; </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  name      age    city\n",
       "  0  'John'    '17'   'Edinburgh'\n",
       "  1  'Sally'   '33'   'Groningen'\n",
       "  2  'Maria'   '23'   'Caracas'\n",
       "  3  'Monica'  '55'   'New York'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_names_all = vaex.open_many(['./data/io/sample_names_1.hdf5', \n",
    "                               './data/io/sample_names_2.hdf5'])\n",
    "df_names_all"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The result will be a single DataFrame object containing all of the data coming from all files."
   ]
  },
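  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the individual files are already open as separate DataFrames, one can also combine them in memory with `vaex.concat`. A minimal sketch, using small illustrative DataFrames in place of files opened from disk:\n",
    "\n",
    "```\n",
    "import vaex\n",
    "\n",
    "# Two small DataFrames standing in for separately opened files\n",
    "df1 = vaex.from_dict({'name': ['John', 'Sally'], 'age': [17, 33]})\n",
    "df2 = vaex.from_dict({'name': ['Maria', 'Monica'], 'age': [23, 55]})\n",
    "\n",
    "# Concatenate them into a single DataFrame, like open_many does for files\n",
    "df_all = vaex.concat([df1, df2])\n",
    "# df_all now contains all 4 rows\n",
    "```"
   ]
  },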
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:16.977034Z",
     "start_time": "2020-11-10T17:48:16.943547Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>car      </th><th>color  </th><th style=\"text-align: right;\">  year</th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;renault&#x27;</td><td>&#x27;red&#x27;  </td><td style=\"text-align: right;\">  1996</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;audi&#x27;   </td><td>&#x27;black&#x27;</td><td style=\"text-align: right;\">  2005</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;toyota&#x27; </td><td>&#x27;blue&#x27; </td><td style=\"text-align: right;\">  2000</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  car        color      year\n",
       "  0  'renault'  'red'      1996\n",
       "  1  'audi'     'black'    2005\n",
       "  2  'toyota'   'blue'     2000"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Reading a parquet file\n",
    "df_cars = vaex.open('./data/io/sample_cars.parquet')\n",
    "df_cars"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Text based file formats\n",
    "\n",
    "Datasets are still commonly stored in text-based file formats such as CSV. Since text-based file formats are not memory-mappable, they have to be read in memory. If the contents of a CSV file fits into the available RAM, one can simply do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.011162Z",
     "start_time": "2020-11-10T17:48:16.982646Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>city          </th><th>team          </th><th>player           </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Indianopolis&#x27;</td><td>&#x27;Pacers&#x27;      </td><td>&#x27;Reggie Miller&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Chicago&#x27;     </td><td>&#x27;Bulls&#x27;       </td><td>&#x27;Michael Jordan&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Boston&#x27;      </td><td>&#x27;Celtics&#x27;     </td><td>&#x27;Larry Bird&#x27;     </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Minnesota&#x27;   </td><td>&#x27;Timberwolves&#x27;</td><td>&#x27;Kevin Garnett&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td>&#x27;Miami&#x27;       </td><td>&#x27;Heat&#x27;        </td><td>&#x27;Alonzo Mourning&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  city            team            player\n",
       "  0  'Indianopolis'  'Pacers'        'Reggie Miller'\n",
       "  1  'Chicago'       'Bulls'         'Michael Jordan'\n",
       "  2  'Boston'        'Celtics'       'Larry Bird'\n",
       "  3  'Minnesota'     'Timberwolves'  'Kevin Garnett'\n",
       "  4  'Miami'         'Heat'          'Alonzo Mourning'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_nba = vaex.from_csv('./data/io/sample_nba_1.csv', copy_index=False)\n",
    "df_nba"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "or alternatively:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.040728Z",
     "start_time": "2020-11-10T17:48:17.017162Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>city          </th><th>team          </th><th>player           </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Indianopolis&#x27;</td><td>&#x27;Pacers&#x27;      </td><td>&#x27;Reggie Miller&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Chicago&#x27;     </td><td>&#x27;Bulls&#x27;       </td><td>&#x27;Michael Jordan&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Boston&#x27;      </td><td>&#x27;Celtics&#x27;     </td><td>&#x27;Larry Bird&#x27;     </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Minnesota&#x27;   </td><td>&#x27;Timberwolves&#x27;</td><td>&#x27;Kevin Garnett&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td>&#x27;Miami&#x27;       </td><td>&#x27;Heat&#x27;        </td><td>&#x27;Alonzo Mourning&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  city            team            player\n",
       "  0  'Indianopolis'  'Pacers'        'Reggie Miller'\n",
       "  1  'Chicago'       'Bulls'         'Michael Jordan'\n",
       "  2  'Boston'        'Celtics'       'Larry Bird'\n",
       "  3  'Minnesota'     'Timberwolves'  'Kevin Garnett'\n",
       "  4  'Miami'         'Heat'          'Alonzo Mourning'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_nba = vaex.read_csv('./data/io/sample_nba_1.csv', copy_index=False)\n",
    "df_nba"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Vaex is using pandas for reading CSV files in the background, so one can pass any arguments to the `vaex.from_csv` or `vaex.read_csv` as one would pass to `pandas.read_csv` and specify for example separators, column names and column types. The `copy_index` parameter specifies if the index column of the pandas DataFrame should be read as a regular column, or left out to save memory. In addition to this, if you specify the `convert=True` argument, the data will be automatically converted to an HDF5 file behind the scenes, thus freeing RAM and allowing you to work with your data in a memory-efficient, out-of-core manner.\n",
    "\n",
    "If the CSV file is so large that it can not fit into RAM all at one time, one can convert the data to HDF5 simply by:\n",
    "\n",
    "```\n",
    "df = vaex.from_csv('./my_data/my_big_file.csv', convert=True, chunk_size=5_000_000)\n",
    "```\n",
    "\n",
    "When the above line is executed, Vaex will read the CSV in chunks, and convert each chunk to a temporary HDF5 file on disk. All temporary files are then concatenated into a single HDF5 file, and the temporary files deleted. The size of the individual chunks to be read can be specified via the `chunk_size` argument. Note that this automatic conversion requires free disk space of twice the final HDF5 file size.\n",
    "\n",
    "It often happens that the data we need to analyse is spread over multiple CSV files. One can convert them to the HDF5 file format like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.112361Z",
     "start_time": "2020-11-10T17:48:17.045861Z"
    }
   },
   "outputs": [],
   "source": [
    "list_of_files = ['./data/io/sample_nba_1.csv',\n",
    "                 './data/io/sample_nba_2.csv',\n",
    "                 './data/io/sample_nba_3.csv',]\n",
    "\n",
    "# Convert each CSV file to HDF5\n",
    "for file in list_of_files:\n",
    "    df_tmp = vaex.from_csv(file, convert=True, copy_index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The above code block converts in turn each CSV file to the HDF5 format. Note that the conversion will work regardless of the file size of each individual CSV file, provided there is sufficient storage space. \n",
    "\n",
    "Working with all of the data is now easy: just open all of the relevant HDF5 files as described above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.167419Z",
     "start_time": "2020-11-10T17:48:17.114451Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>city          </th><th>team     </th><th>player          </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Indianopolis&#x27;</td><td>&#x27;Pacers&#x27; </td><td>&#x27;Reggie Miller&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Chicago&#x27;     </td><td>&#x27;Bulls&#x27;  </td><td>&#x27;Michael Jordan&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Boston&#x27;      </td><td>&#x27;Celtics&#x27;</td><td>&#x27;Larry Bird&#x27;    </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Los Angeles&#x27; </td><td>&#x27;Lakers&#x27; </td><td>&#x27;Kobe Bryant&#x27;   </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td>&#x27;Toronto&#x27;     </td><td>&#x27;Raptors&#x27;</td><td>&#x27;Vince Carter&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>5</i></td><td>&#x27;Philadelphia&#x27;</td><td>&#x27;76ers&#x27;  </td><td>&#x27;Allen Iverson&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>6</i></td><td>&#x27;San Antonio&#x27; </td><td>&#x27;Spurs&#x27;  </td><td>&#x27;Tim Duncan&#x27;    </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  city            team       player\n",
       "  0  'Indianopolis'  'Pacers'   'Reggie Miller'\n",
       "  1  'Chicago'       'Bulls'    'Michael Jordan'\n",
       "  2  'Boston'        'Celtics'  'Larry Bird'\n",
       "  3  'Los Angeles'   'Lakers'   'Kobe Bryant'\n",
       "  4  'Toronto'       'Raptors'  'Vince Carter'\n",
       "  5  'Philadelphia'  '76ers'    'Allen Iverson'\n",
       "  6  'San Antonio'   'Spurs'    'Tim Duncan'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.open('./data/io/sample_nba_*.csv.hdf5')\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One can than additionally export this combined DataFrame to a single HDF5 file. This should lead to minor performance improvements. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.253541Z",
     "start_time": "2020-11-10T17:48:17.170771Z"
    }
   },
   "outputs": [],
   "source": [
    "df.export('./data/io/sample_nba_combined.hdf5')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is also common the data to be stored in JSON files. To read such data in Vaex one can do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.270488Z",
     "start_time": "2020-11-10T17:48:17.255493Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>isle           </th><th style=\"text-align: right;\">  size_sqkm</th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Easter Island&#x27;</td><td style=\"text-align: right;\">    163.6  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Fiji&#x27;         </td><td style=\"text-align: right;\">     18.333</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Tortuga&#x27;      </td><td style=\"text-align: right;\">    178.7  </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  isle               size_sqkm\n",
       "  0  'Easter Island'      163.6\n",
       "  1  'Fiji'                18.333\n",
       "  2  'Tortuga'            178.7"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_isles = vaex.from_json('./data/io/sample_isles.json', orient='table', copy_index=False)\n",
    "df_isles"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is a convenience method which simply wraps `pandas.read_json`, so the same arguments and file reading strategy applies. If the data is distributed amongs multiple JSON files, one can apply a similar strategy as in the case of multiple CSV files: read each JSON file with the `vaex.from_json` method, convert it to a HDF5 or Arrow file format. Than use `vaex.open` or `vaex.open_many` methods to open all the converted files as a single DataFrame. \n",
    "\n",
    "To learn more about different options of exporting data with Vaex, please read the next section below."
   ]
  },
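  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The multi-file JSON workflow described above can be sketched as follows. The file names and sample records here are hypothetical, created on the fly so that the example is self-contained:\n",
    "\n",
    "```\n",
    "import json\n",
    "import os\n",
    "import tempfile\n",
    "\n",
    "import vaex\n",
    "\n",
    "# Create two hypothetical JSON files standing in for a multi-file dataset\n",
    "tmpdir = tempfile.mkdtemp()\n",
    "chunks = [[{'name': 'John', 'age': 17}], [{'name': 'Sally', 'age': 33}]]\n",
    "for i, chunk in enumerate(chunks, start=1):\n",
    "    with open(os.path.join(tmpdir, f'sample_{i}.json'), 'w') as f:\n",
    "        json.dump(chunk, f)\n",
    "\n",
    "# Read each JSON file and convert it to the HDF5 format\n",
    "hdf5_files = []\n",
    "for i in (1, 2):\n",
    "    path = os.path.join(tmpdir, f'sample_{i}.json')\n",
    "    df_tmp = vaex.from_json(path, orient='records', copy_index=False)\n",
    "    hdf5_path = path.replace('.json', '.hdf5')\n",
    "    df_tmp.export_hdf5(hdf5_path)\n",
    "    hdf5_files.append(hdf5_path)\n",
    "\n",
    "# Open all converted files as a single, memory-mapped DataFrame\n",
    "df = vaex.open_many(hdf5_files)\n",
    "```"
   ]
  },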
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Cloud Support\n",
    "\n",
    "Vaex supports streaming of HDF5, Apache Arrow and Apache Parquet files from Amazon's S3 and Google Cloud Storage.\n",
    "Here is an example of streaming an HDF5 file directly from S3:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.428862Z",
     "start_time": "2020-11-10T17:48:17.273166Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                                  </th><th>vendor_id  </th><th>pickup_datetime              </th><th>dropoff_datetime             </th><th>passenger_count  </th><th>payment_type  </th><th>trip_distance     </th><th>pickup_longitude  </th><th>pickup_latitude   </th><th>rate_code  </th><th>store_and_fwd_flag  </th><th>dropoff_longitude  </th><th>dropoff_latitude  </th><th>fare_amount  </th><th>surcharge  </th><th>mta_tax  </th><th>tip_amount        </th><th>tolls_amount     </th><th>total_amount      </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i>      </td><td>VTS        </td><td>2015-02-27 22:11:38.000000000</td><td>2015-02-27 22:22:51.000000000</td><td>5                </td><td>1             </td><td>2.259999990463257 </td><td>-74.00664520263672</td><td>40.707496643066406</td><td>1.0        </td><td>0.0                 </td><td>-74.00959777832031 </td><td>40.734619140625   </td><td>10.0         </td><td>0.5        </td><td>0.5      </td><td>2.0               </td><td>0.0              </td><td>13.300000190734863</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i>      </td><td>VTS        </td><td>2015-08-04 00:36:01.000000000</td><td>2015-08-04 00:47:11.000000000</td><td>1                </td><td>1             </td><td>5.130000114440918 </td><td>-74.0074691772461 </td><td>40.70523452758789 </td><td>1.0        </td><td>0.0                 </td><td>-73.96726989746094 </td><td>40.75519561767578 </td><td>16.0         </td><td>0.5        </td><td>0.5      </td><td>3.4600000381469727</td><td>0.0              </td><td>20.760000228881836</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i>      </td><td>VTS        </td><td>2015-01-28 19:56:52.000000000</td><td>2015-01-28 20:03:27.000000000</td><td>1                </td><td>2             </td><td>1.8899999856948853</td><td>-73.97189331054688</td><td>40.76285934448242 </td><td>1.0        </td><td>0.0                 </td><td>-73.95513153076172 </td><td>40.78596115112305 </td><td>7.5          </td><td>1.0        </td><td>0.5      </td><td>0.0               </td><td>0.0              </td><td>9.300000190734863 </td></tr>\n",
       "<tr><td>...                                </td><td>...        </td><td>...                          </td><td>...                          </td><td>...              </td><td>...           </td><td>...               </td><td>...               </td><td>...               </td><td>...        </td><td>...                 </td><td>...                </td><td>...               </td><td>...          </td><td>...        </td><td>...      </td><td>...               </td><td>...              </td><td>...               </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>299,997</i></td><td>CMT        </td><td>2015-06-18 09:05:52.000000000</td><td>2015-06-18 09:28:19.000000000</td><td>1                </td><td>1             </td><td>2.700000047683716 </td><td>-73.95230865478516</td><td>40.78091049194336 </td><td>1.0        </td><td>0.0                 </td><td>-73.97917175292969 </td><td>40.75542068481445 </td><td>15.0         </td><td>0.0        </td><td>0.5      </td><td>1.25              </td><td>0.0              </td><td>17.049999237060547</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>299,998</i></td><td>VTS        </td><td>2015-04-17 11:13:46.000000000</td><td>2015-04-17 11:33:19.000000000</td><td>1                </td><td>2             </td><td>1.75              </td><td>-73.95193481445312</td><td>40.77804183959961 </td><td>1.0        </td><td>0.0                 </td><td>-73.96920013427734 </td><td>40.76392364501953 </td><td>13.0         </td><td>0.0        </td><td>0.5      </td><td>0.0               </td><td>0.0              </td><td>13.800000190734863</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>299,999</i></td><td>VTS        </td><td>2015-05-29 07:00:45.000000000</td><td>2015-05-29 07:17:47.000000000</td><td>5                </td><td>2             </td><td>8.9399995803833   </td><td>-73.95345306396484</td><td>40.779319763183594</td><td>1.0        </td><td>0.0                 </td><td>-73.86701965332031 </td><td>40.770938873291016</td><td>26.0         </td><td>0.0        </td><td>0.5      </td><td>0.0               </td><td>5.539999961853027</td><td>32.34000015258789 </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "df = vaex.open('s3://vaex/taxi/nyc_taxi_2015_mini.hdf5?anon=true')\n",
    "df.head_and_tail_print(3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One can also use the `fs_options` to specify any arguments that need to be passed to an external file system if needed:\n",
    "\n",
    " - When using Amazon's S3:\n",
    "     - [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/filesystems.html#s3) - If supported by Arrow.\n",
    "     - [s3fs.core.S3FileSystem]('https://s3fs.readthedocs.io/en/latest/) - Used for globbing and fallbacks.\n",
    " - When using Google Cloud Storage:\n",
    "     - [gcsfs.core.GCSFileSystem](https://gcsfs.readthedocs.io/en/latest/)\n",
    "     \n",
    "For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.515329Z",
     "start_time": "2020-11-10T17:48:17.435950Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>vendor_id  </th><th>pickup_datetime              </th><th>dropoff_datetime             </th><th style=\"text-align: right;\">  passenger_count</th><th style=\"text-align: right;\">  payment_type</th><th style=\"text-align: right;\">  trip_distance</th><th style=\"text-align: right;\">  pickup_longitude</th><th style=\"text-align: right;\">  pickup_latitude</th><th style=\"text-align: right;\">  rate_code</th><th style=\"text-align: right;\">  store_and_fwd_flag</th><th style=\"text-align: right;\">  dropoff_longitude</th><th style=\"text-align: right;\">  dropoff_latitude</th><th style=\"text-align: right;\">  fare_amount</th><th style=\"text-align: right;\">  surcharge</th><th style=\"text-align: right;\">  mta_tax</th><th style=\"text-align: right;\">  tip_amount</th><th style=\"text-align: right;\">  tolls_amount</th><th style=\"text-align: right;\">  total_amount</th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>VTS        </td><td>2015-02-27 22:11:38.000000000</td><td>2015-02-27 22:22:51.000000000</td><td style=\"text-align: right;\">                5</td><td style=\"text-align: right;\">             1</td><td style=\"text-align: right;\">           2.26</td><td style=\"text-align: right;\">          -74.0066</td><td style=\"text-align: right;\">          40.7075</td><td style=\"text-align: right;\">          1</td><td style=\"text-align: right;\">                   0</td><td style=\"text-align: right;\">           -74.0096</td><td style=\"text-align: right;\">           40.7346</td><td style=\"text-align: right;\">         10  </td><td style=\"text-align: right;\">        0.5</td><td style=\"text-align: right;\">      0.5</td><td style=\"text-align: right;\">        2   </td><td style=\"text-align: right;\">             0</td><td style=\"text-align: right;\">         13.3 </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>VTS        </td><td>2015-08-04 00:36:01.000000000</td><td>2015-08-04 00:47:11.000000000</td><td style=\"text-align: right;\">                1</td><td style=\"text-align: right;\">             1</td><td style=\"text-align: right;\">           5.13</td><td style=\"text-align: right;\">          -74.0075</td><td style=\"text-align: right;\">          40.7052</td><td style=\"text-align: right;\">          1</td><td style=\"text-align: right;\">                   0</td><td style=\"text-align: right;\">           -73.9673</td><td style=\"text-align: right;\">           40.7552</td><td style=\"text-align: right;\">         16  </td><td style=\"text-align: right;\">        0.5</td><td style=\"text-align: right;\">      0.5</td><td style=\"text-align: right;\">        3.46</td><td style=\"text-align: right;\">             0</td><td style=\"text-align: right;\">         20.76</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>VTS        </td><td>2015-01-28 19:56:52.000000000</td><td>2015-01-28 20:03:27.000000000</td><td style=\"text-align: right;\">                1</td><td style=\"text-align: right;\">             2</td><td style=\"text-align: right;\">           1.89</td><td style=\"text-align: right;\">          -73.9719</td><td style=\"text-align: right;\">          40.7629</td><td style=\"text-align: right;\">          1</td><td style=\"text-align: right;\">                   0</td><td style=\"text-align: right;\">           -73.9551</td><td style=\"text-align: right;\">           40.786 </td><td style=\"text-align: right;\">          7.5</td><td style=\"text-align: right;\">        1  </td><td style=\"text-align: right;\">      0.5</td><td style=\"text-align: right;\">        0   </td><td style=\"text-align: right;\">             0</td><td style=\"text-align: right;\">          9.3 </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  vendor_id    pickup_datetime                dropoff_datetime                 passenger_count    payment_type    trip_distance    pickup_longitude    pickup_latitude    rate_code    store_and_fwd_flag    dropoff_longitude    dropoff_latitude    fare_amount    surcharge    mta_tax    tip_amount    tolls_amount    total_amount\n",
       "  0  VTS          2015-02-27 22:11:38.000000000  2015-02-27 22:22:51.000000000                  5               1             2.26            -74.0066            40.7075            1                     0             -74.0096             40.7346           10            0.5        0.5          2                  0           13.3\n",
       "  1  VTS          2015-08-04 00:36:01.000000000  2015-08-04 00:47:11.000000000                  1               1             5.13            -74.0075            40.7052            1                     0             -73.9673             40.7552           16            0.5        0.5          3.46               0           20.76\n",
       "  2  VTS          2015-01-28 19:56:52.000000000  2015-01-28 20:03:27.000000000                  1               2             1.89            -73.9719            40.7629            1                     0             -73.9551             40.786             7.5          1          0.5          0                  0            9.3"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.open('s3://vaex/taxi/nyc_taxi_2015_mini.hdf5', fs_options={'anon': True})\n",
    "df.head(3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-09T21:43:55.026986Z",
     "start_time": "2020-11-09T21:43:55.022781Z"
    }
   },
   "source": [
     "When streaming HDF5 files, `fs_options` also accepts a \"cache\" option. When `True`, which is the default, Vaex will lazily download and cache the data to the local machine. \"Lazily download\" means that Vaex will only download the portions of the data you actually need.\n",
     "\n",
     "For example: imagine that we have a file hosted on S3 that has 100 columns and 1 billion rows. Getting a preview of the DataFrame via `print(df)`, for instance, will download only the first and last 5 rows. If we then proceed to make calculations or plots with only 5 columns, only the data from those columns will be downloaded and cached on the local machine.\n",
     "\n",
     "By default, data streamed from S3 and GCS is cached at `$HOME/.vaex/file-cache/s3` and `$HOME/.vaex/file-cache/gs` respectively, so subsequent access is as fast as native disk access."
   ]
  },
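  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you prefer to always stream the data instead of keeping a local copy, the cache can be turned off. The snippet below is a sketch based on the \"cache\" option described above:\n",
    "\n",
    "```\n",
    "# Stream the data on every access, without caching it locally\n",
    "df = vaex.open('s3://vaex/taxi/nyc_taxi_2015_mini.hdf5',\n",
    "               fs_options={'anon': True, 'cache': False})\n",
    "```"
   ]
  },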
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Streaming Apache Arrow and Apache Parquet files is just as simple. Caching is not yet available for these file formats, and opening a file in the Apache Arrow format will currently read all of the data, which makes streaming it less useful. For maximum performance, we advise using a compute instance in the same region as the bucket.\n",
    "\n",
    "Here is an example of reading an Apache Arrow file straight from Google Cloud Storage:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-09T18:57:38.092787Z",
     "start_time": "2020-11-09T18:57:38.090513Z"
    }
   },
   "source": [
    "```\n",
    "df = vaex.open('gs://vaex-data/airlines/us_airline_2019_mini.arrow', fs_options={'anon': True})\n",
    "df\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Apache Parquet files are typically compressed, and are therefore often a better choice for cloud environments, since they tend to keep storage and transfer costs lower. Here is an example of opening a Parquet file from Google Cloud Storage:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-09T21:22:32.728905Z",
     "start_time": "2020-11-09T21:22:32.725232Z"
    }
   },
   "source": [
    "```\n",
    "df = vaex.open('gs://vaex-data/airlines/us_airline_2019_mini.parquet', fs_options={'anon': True})\n",
    "df\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-09T18:22:28.468854Z",
     "start_time": "2020-11-09T18:22:28.461974Z"
    }
   },
   "source": [
     "The following table summarizes the current capabilities of Vaex to read, cache and write different file formats to Amazon S3 and Google Cloud Storage.\n",
    "\n",
    "| Format  | Read | Cache | Write |\n",
    "|-------- |------|-------|-------|\n",
    "| HDF5    |  Yes | Yes   | No    |\n",
    "| Arrow   |  Yes | No*   | Yes   |\n",
    "| Parquet |  Yes | No*   | Yes   |\n",
    "| FITS    |  Yes | No*   | Yes   |\n",
    "| CSV     |  ??? | ???   | ???   |\n",
    "\n",
     "No* - this is not currently available, but should be possible in the future. Please contact [vaex.io](https://vaex.io/) for more information."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### In-memory data representations\n",
    "\n",
     "One can construct a Vaex DataFrame from a variety of in-memory data representations. A common operation is converting a pandas DataFrame into a Vaex DataFrame. Let us read in a CSV file with pandas and then convert it to a Vaex DataFrame:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.556749Z",
     "start_time": "2020-11-10T17:48:17.521594Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>city</th>\n",
       "      <th>team</th>\n",
       "      <th>player</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Indianopolis</td>\n",
       "      <td>Pacers</td>\n",
       "      <td>Reggie Miller</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Chicago</td>\n",
       "      <td>Bulls</td>\n",
       "      <td>Michael Jordan</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Boston</td>\n",
       "      <td>Celtics</td>\n",
       "      <td>Larry Bird</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Minnesota</td>\n",
       "      <td>Timberwolves</td>\n",
       "      <td>Kevin Garnett</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Miami</td>\n",
       "      <td>Heat</td>\n",
       "      <td>Alonzo Mourning</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "           city          team           player\n",
       "0  Indianopolis        Pacers    Reggie Miller\n",
       "1       Chicago         Bulls   Michael Jordan\n",
       "2        Boston       Celtics       Larry Bird\n",
       "3     Minnesota  Timberwolves    Kevin Garnett\n",
       "4         Miami          Heat  Alonzo Mourning"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "pandas_df = pd.read_csv('./data/io/sample_nba_1.csv')\n",
    "pandas_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.586664Z",
     "start_time": "2020-11-10T17:48:17.562212Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>city          </th><th>team          </th><th>player           </th><th style=\"text-align: right;\">  index</th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Indianopolis&#x27;</td><td>&#x27;Pacers&#x27;      </td><td>&#x27;Reggie Miller&#x27;  </td><td style=\"text-align: right;\">      0</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Chicago&#x27;     </td><td>&#x27;Bulls&#x27;       </td><td>&#x27;Michael Jordan&#x27; </td><td style=\"text-align: right;\">      1</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Boston&#x27;      </td><td>&#x27;Celtics&#x27;     </td><td>&#x27;Larry Bird&#x27;     </td><td style=\"text-align: right;\">      2</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Minnesota&#x27;   </td><td>&#x27;Timberwolves&#x27;</td><td>&#x27;Kevin Garnett&#x27;  </td><td style=\"text-align: right;\">      3</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td>&#x27;Miami&#x27;       </td><td>&#x27;Heat&#x27;        </td><td>&#x27;Alonzo Mourning&#x27;</td><td style=\"text-align: right;\">      4</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  city            team            player               index\n",
       "  0  'Indianopolis'  'Pacers'        'Reggie Miller'          0\n",
       "  1  'Chicago'       'Bulls'         'Michael Jordan'         1\n",
       "  2  'Boston'        'Celtics'       'Larry Bird'             2\n",
       "  3  'Minnesota'     'Timberwolves'  'Kevin Garnett'          3\n",
       "  4  'Miami'         'Heat'          'Alonzo Mourning'        4"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.from_pandas(df=pandas_df, copy_index=True)\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The `copy_index` argument specifies whether the index column of a pandas DataFrame should be imported into the Vaex DataFrame. Converting a pandas DataFrame into a Vaex DataFrame is particularly useful since pandas can read data from a large variety of file formats. For instance, we can use pandas to read data from a database, and then pass it to Vaex like so:\n",
    "\n",
    "```\n",
    "import vaex\n",
    "import pandas as pd\n",
    "import sqlalchemy\n",
    "\n",
    "connection_string = 'postgresql://readonly:' + 'my_password' + '@server.company.com:1234/database_name'\n",
    "engine = sqlalchemy.create_engine(connection_string)\n",
    "\n",
    "pandas_df = pd.read_sql_query('SELECT * FROM MYTABLE', con=engine)\n",
    "df = vaex.from_pandas(pandas_df, copy_index=False)\n",
    "```\n",
    "\n",
    "Another example is using pandas to read in [SAS](https://www.sas.com/en_us/home.html) files:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.644418Z",
     "start_time": "2020-11-10T17:48:17.590253Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                             </th><th>YEAR  </th><th>Y                 </th><th>W                  </th><th>R                  </th><th>L                 </th><th>K                 </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i> </td><td>1948.0</td><td>1.2139999866485596</td><td>0.24300000071525574</td><td>0.1454000025987625 </td><td>1.4149999618530273</td><td>0.6119999885559082</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i> </td><td>1949.0</td><td>1.3539999723434448</td><td>0.25999999046325684</td><td>0.21809999644756317</td><td>1.3839999437332153</td><td>0.5590000152587891</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i> </td><td>1950.0</td><td>1.569000005722046 </td><td>0.27799999713897705</td><td>0.3156999945640564 </td><td>1.3880000114440918</td><td>0.5730000138282776</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i> </td><td>1951.0</td><td>1.9479999542236328</td><td>0.296999990940094  </td><td>0.39399999380111694</td><td>1.5499999523162842</td><td>0.5640000104904175</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i> </td><td>1952.0</td><td>2.265000104904175 </td><td>0.3100000023841858 </td><td>0.35589998960494995</td><td>1.8020000457763672</td><td>0.5740000009536743</td></tr>\n",
       "<tr><td>...                           </td><td>...   </td><td>...               </td><td>...                </td><td>...                </td><td>...               </td><td>...               </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>27</i></td><td>1975.0</td><td>18.72100067138672 </td><td>1.246999979019165  </td><td>0.23010000586509705</td><td>5.7220001220703125</td><td>9.062000274658203 </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>28</i></td><td>1976.0</td><td>19.25             </td><td>1.375              </td><td>0.3452000021934509 </td><td>5.76200008392334  </td><td>8.26200008392334  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>29</i></td><td>1977.0</td><td>20.64699935913086 </td><td>1.5440000295639038 </td><td>0.45080000162124634</td><td>5.876999855041504 </td><td>7.473999977111816 </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>30</i></td><td>1978.0</td><td>22.72599983215332 </td><td>1.7029999494552612 </td><td>0.5877000093460083 </td><td>6.107999801635742 </td><td>7.104000091552734 </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>31</i></td><td>1979.0</td><td>23.618999481201172</td><td>1.7790000438690186 </td><td>0.534600019454956  </td><td>6.8520002365112305</td><td>6.874000072479248 </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "#    YEAR    Y                   W                    R                    L                   K\n",
       "0    1948.0  1.2139999866485596  0.24300000071525574  0.1454000025987625   1.4149999618530273  0.6119999885559082\n",
       "1    1949.0  1.3539999723434448  0.25999999046325684  0.21809999644756317  1.3839999437332153  0.5590000152587891\n",
       "2    1950.0  1.569000005722046   0.27799999713897705  0.3156999945640564   1.3880000114440918  0.5730000138282776\n",
       "3    1951.0  1.9479999542236328  0.296999990940094    0.39399999380111694  1.5499999523162842  0.5640000104904175\n",
       "4    1952.0  2.265000104904175   0.3100000023841858   0.35589998960494995  1.8020000457763672  0.5740000009536743\n",
       "...  ...     ...                 ...                  ...                  ...                 ...\n",
       "27   1975.0  18.72100067138672   1.246999979019165    0.23010000586509705  5.7220001220703125  9.062000274658203\n",
       "28   1976.0  19.25               1.375                0.3452000021934509   5.76200008392334    8.26200008392334\n",
       "29   1977.0  20.64699935913086   1.5440000295639038   0.45080000162124634  5.876999855041504   7.473999977111816\n",
       "30   1978.0  22.72599983215332   1.7029999494552612   0.5877000093460083   6.107999801635742   7.104000091552734\n",
       "31   1979.0  23.618999481201172  1.7790000438690186   0.534600019454956    6.8520002365112305  6.874000072479248"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pandas_df = pd.read_sas('./data/io/sample_airline.sas7bdat')\n",
    "df = vaex.from_pandas(pandas_df, copy_index=False)\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "One can read in an Arrow table as a Vaex DataFrame in a similar manner. Let us first use pyarrow to read in a CSV file as an Arrow table."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.668860Z",
     "start_time": "2020-11-10T17:48:17.651785Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "pyarrow.Table\n",
       "city: string\n",
       "team: string\n",
       "player: string"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pyarrow.csv\n",
    "\n",
    "arrow_table = pyarrow.csv.read_csv('./data/io/sample_nba_1.csv')\n",
    "arrow_table"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Once we have the Arrow table, converting it to a DataFrame is simple:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.693084Z",
     "start_time": "2020-11-10T17:48:17.674499Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>city          </th><th>team          </th><th>player           </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Indianopolis&#x27;</td><td>&#x27;Pacers&#x27;      </td><td>&#x27;Reggie Miller&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Chicago&#x27;     </td><td>&#x27;Bulls&#x27;       </td><td>&#x27;Michael Jordan&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Boston&#x27;      </td><td>&#x27;Celtics&#x27;     </td><td>&#x27;Larry Bird&#x27;     </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Minnesota&#x27;   </td><td>&#x27;Timberwolves&#x27;</td><td>&#x27;Kevin Garnett&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td>&#x27;Miami&#x27;       </td><td>&#x27;Heat&#x27;        </td><td>&#x27;Alonzo Mourning&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  city            team            player\n",
       "  0  'Indianopolis'  'Pacers'        'Reggie Miller'\n",
       "  1  'Chicago'       'Bulls'         'Michael Jordan'\n",
       "  2  'Boston'        'Celtics'       'Larry Bird'\n",
       "  3  'Minnesota'     'Timberwolves'  'Kevin Garnett'\n",
       "  4  'Miami'         'Heat'          'Alonzo Mourning'"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.from_arrow_table(arrow_table)\n",
    "df"
   ]
  },
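  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The conversion also works in the other direction: assuming the `to_arrow_table` method of the DataFrame, one can turn a Vaex DataFrame back into an Arrow table:\n",
    "\n",
    "```\n",
    "# Convert the Vaex DataFrame back into a pyarrow.Table\n",
    "arrow_table = df.to_arrow_table()\n",
    "```"
   ]
  },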
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "It is also common to construct a Vaex DataFrame from numpy arrays. That can be done like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.708273Z",
     "start_time": "2020-11-10T17:48:17.696048Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th style=\"text-align: right;\">  x</th><th style=\"text-align: right;\">  y</th><th>z    </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td style=\"text-align: right;\">  0</td><td style=\"text-align: right;\"> 10</td><td>&#x27;dog&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td style=\"text-align: right;\">  1</td><td style=\"text-align: right;\"> 20</td><td>&#x27;cat&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #    x    y  z\n",
       "  0    0   10  'dog'\n",
       "  1    1   20  'cat'"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "x = np.arange(2)\n",
    "y = np.array([10, 20])\n",
    "z = np.array(['dog', 'cat'])\n",
    "\n",
    "\n",
    "df_numpy = vaex.from_arrays(x=x, y=y, z=z)\n",
    "df_numpy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Constructing a DataFrame from a Python dict is also straightforward:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.735383Z",
     "start_time": "2020-11-10T17:48:17.720439Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th style=\"text-align: right;\">  x</th><th style=\"text-align: right;\">  y</th><th>z      </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td style=\"text-align: right;\">  2</td><td style=\"text-align: right;\"> 30</td><td>&#x27;cow&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td style=\"text-align: right;\">  3</td><td style=\"text-align: right;\"> 40</td><td>&#x27;horse&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #    x    y  z\n",
       "  0    2   30  'cow'\n",
       "  1    3   40  'horse'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Construct a DataFrame from Python dictionary\n",
    "data_dict = dict(x=[2, 3], y=[30, 40], z=['cow', 'horse'])\n",
    "\n",
    "df_dict = vaex.from_dict(data_dict)\n",
    "df_dict"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At times, one may need to create a single row DataFrame. Vaex has a convenience method which takes individual elements (scalars) and creates the DataFrame:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.761637Z",
     "start_time": "2020-11-10T17:48:17.737562Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th style=\"text-align: right;\">  x</th><th style=\"text-align: right;\">  y</th><th>z      </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td style=\"text-align: right;\">  4</td><td style=\"text-align: right;\"> 50</td><td>&#x27;mouse&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #    x    y  z\n",
       "  0    4   50  'mouse'"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_single_row = vaex.from_scalars(x=4, y=50, z='mouse')\n",
    "df_single_row"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Finally, we can concatenate different DataFrames without any memory penalty, like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.782004Z",
     "start_time": "2020-11-10T17:48:17.766225Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th style=\"text-align: right;\">  x</th><th style=\"text-align: right;\">  y</th><th>z      </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td style=\"text-align: right;\">  0</td><td style=\"text-align: right;\"> 10</td><td>&#x27;dog&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td style=\"text-align: right;\">  1</td><td style=\"text-align: right;\"> 20</td><td>&#x27;cat&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td style=\"text-align: right;\">  2</td><td style=\"text-align: right;\"> 30</td><td>&#x27;cow&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td style=\"text-align: right;\">  3</td><td style=\"text-align: right;\"> 40</td><td>&#x27;horse&#x27;</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td style=\"text-align: right;\">  4</td><td style=\"text-align: right;\"> 50</td><td>&#x27;mouse&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #    x    y  z\n",
       "  0    0   10  'dog'\n",
       "  1    1   20  'cat'\n",
       "  2    2   30  'cow'\n",
       "  3    3   40  'horse'\n",
       "  4    4   50  'mouse'"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.concat([df_numpy, df_dict, df_single_row])\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Extras\n",
    "\n",
     "Vaex allows you to create aliases for the locations of your most frequently used datasets. They can be local or in the cloud:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.804762Z",
     "start_time": "2020-11-10T17:48:17.788385Z"
    }
   },
   "outputs": [],
   "source": [
    "vaex.aliases['nba'] = './data/io/sample_nba_1.csv'\n",
    "vaex.aliases['nyc_taxi_aws'] = 's3://vaex/taxi/nyc_taxi_2015_mini.hdf5?anon=true'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.828285Z",
     "start_time": "2020-11-10T17:48:17.810661Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                            </th><th>city          </th><th>team          </th><th>player           </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i></td><td>&#x27;Indianopolis&#x27;</td><td>&#x27;Pacers&#x27;      </td><td>&#x27;Reggie Miller&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i></td><td>&#x27;Chicago&#x27;     </td><td>&#x27;Bulls&#x27;       </td><td>&#x27;Michael Jordan&#x27; </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i></td><td>&#x27;Boston&#x27;      </td><td>&#x27;Celtics&#x27;     </td><td>&#x27;Larry Bird&#x27;     </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>3</i></td><td>&#x27;Minnesota&#x27;   </td><td>&#x27;Timberwolves&#x27;</td><td>&#x27;Kevin Garnett&#x27;  </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>4</i></td><td>&#x27;Miami&#x27;       </td><td>&#x27;Heat&#x27;        </td><td>&#x27;Alonzo Mourning&#x27;</td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "  #  city            team            player\n",
       "  0  'Indianopolis'  'Pacers'        'Reggie Miller'\n",
       "  1  'Chicago'       'Bulls'         'Michael Jordan'\n",
       "  2  'Boston'        'Celtics'       'Larry Bird'\n",
       "  3  'Minnesota'     'Timberwolves'  'Kevin Garnett'\n",
       "  4  'Miami'         'Heat'          'Alonzo Mourning'"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.open('nba')\n",
    "df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:17.897101Z",
     "start_time": "2020-11-10T17:48:17.835391Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<thead>\n",
       "<tr><th>#                                  </th><th>vendor_id  </th><th>pickup_datetime              </th><th>dropoff_datetime             </th><th>passenger_count  </th><th>payment_type  </th><th>trip_distance     </th><th>pickup_longitude  </th><th>pickup_latitude   </th><th>rate_code  </th><th>store_and_fwd_flag  </th><th>dropoff_longitude  </th><th>dropoff_latitude  </th><th>fare_amount  </th><th>surcharge  </th><th>mta_tax  </th><th>tip_amount        </th><th>tolls_amount     </th><th>total_amount      </th></tr>\n",
       "</thead>\n",
       "<tbody>\n",
       "<tr><td><i style='opacity: 0.6'>0</i>      </td><td>VTS        </td><td>2015-02-27 22:11:38.000000000</td><td>2015-02-27 22:22:51.000000000</td><td>5                </td><td>1             </td><td>2.259999990463257 </td><td>-74.00664520263672</td><td>40.707496643066406</td><td>1.0        </td><td>0.0                 </td><td>-74.00959777832031 </td><td>40.734619140625   </td><td>10.0         </td><td>0.5        </td><td>0.5      </td><td>2.0               </td><td>0.0              </td><td>13.300000190734863</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>1</i>      </td><td>VTS        </td><td>2015-08-04 00:36:01.000000000</td><td>2015-08-04 00:47:11.000000000</td><td>1                </td><td>1             </td><td>5.130000114440918 </td><td>-74.0074691772461 </td><td>40.70523452758789 </td><td>1.0        </td><td>0.0                 </td><td>-73.96726989746094 </td><td>40.75519561767578 </td><td>16.0         </td><td>0.5        </td><td>0.5      </td><td>3.4600000381469727</td><td>0.0              </td><td>20.760000228881836</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>2</i>      </td><td>VTS        </td><td>2015-01-28 19:56:52.000000000</td><td>2015-01-28 20:03:27.000000000</td><td>1                </td><td>2             </td><td>1.8899999856948853</td><td>-73.97189331054688</td><td>40.76285934448242 </td><td>1.0        </td><td>0.0                 </td><td>-73.95513153076172 </td><td>40.78596115112305 </td><td>7.5          </td><td>1.0        </td><td>0.5      </td><td>0.0               </td><td>0.0              </td><td>9.300000190734863 </td></tr>\n",
       "<tr><td>...                                </td><td>...        </td><td>...                          </td><td>...                          </td><td>...              </td><td>...           </td><td>...               </td><td>...               </td><td>...               </td><td>...        </td><td>...                 </td><td>...                </td><td>...               </td><td>...          </td><td>...        </td><td>...      </td><td>...               </td><td>...              </td><td>...               </td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>299,997</i></td><td>CMT        </td><td>2015-06-18 09:05:52.000000000</td><td>2015-06-18 09:28:19.000000000</td><td>1                </td><td>1             </td><td>2.700000047683716 </td><td>-73.95230865478516</td><td>40.78091049194336 </td><td>1.0        </td><td>0.0                 </td><td>-73.97917175292969 </td><td>40.75542068481445 </td><td>15.0         </td><td>0.0        </td><td>0.5      </td><td>1.25              </td><td>0.0              </td><td>17.049999237060547</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>299,998</i></td><td>VTS        </td><td>2015-04-17 11:13:46.000000000</td><td>2015-04-17 11:33:19.000000000</td><td>1                </td><td>2             </td><td>1.75              </td><td>-73.95193481445312</td><td>40.77804183959961 </td><td>1.0        </td><td>0.0                 </td><td>-73.96920013427734 </td><td>40.76392364501953 </td><td>13.0         </td><td>0.0        </td><td>0.5      </td><td>0.0               </td><td>0.0              </td><td>13.800000190734863</td></tr>\n",
       "<tr><td><i style='opacity: 0.6'>299,999</i></td><td>VTS        </td><td>2015-05-29 07:00:45.000000000</td><td>2015-05-29 07:17:47.000000000</td><td>5                </td><td>2             </td><td>8.9399995803833   </td><td>-73.95345306396484</td><td>40.779319763183594</td><td>1.0        </td><td>0.0                 </td><td>-73.86701965332031 </td><td>40.770938873291016</td><td>26.0         </td><td>0.0        </td><td>0.5      </td><td>0.0               </td><td>5.539999961853027</td><td>32.34000015258789 </td></tr>\n",
       "</tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "df = vaex.open('nyc_taxi_aws')\n",
    "df.head_and_tail_print(3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data export\n",
    "\n",
     "One can export a Vaex DataFrame to a number of file formats and in-memory data representations:\n",
    "\n",
    " - Binary file formats:\n",
    " \n",
    "     - [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format#HDF5)\n",
    "     - [Apache Arrow](https://arrow.apache.org/)\n",
    "     - [Apache Parquet](https://parquet.apache.org/)\n",
    "     - [FITS](https://en.wikipedia.org/wiki/FITS)\n",
    "     \n",
    " - Text based file formats:\n",
    " \n",
    "     - [CSV](https://en.wikipedia.org/wiki/Comma-separated_values)\n",
    "     - [ASCII](https://en.wikipedia.org/wiki/Text_file)\n",
    "     \n",
    " - In-memory data representations:\n",
    "\n",
    "    - DataFrames:\n",
    "    \n",
     "         - [pandas](https://pandas.pydata.org/) DataFrame\n",
    "         - [Apache Arrow](https://arrow.apache.org/) Table\n",
    "         - [numpy](https://numpy.org/) arrays\n",
    "         - [Dask](https://dask.org/) arrays\n",
    "         - Python dictionaries\n",
     "         - Python items list (a list of ('column_name', data) tuples)\n",
    "\n",
    "    - Expressions:\n",
    "    \n",
     "         - [pandas](https://pandas.pydata.org/) Series\n",
    "         - [numpy](https://numpy.org/) array\n",
    "         - [Dask](https://dask.org/) array\n",
    "         - Python list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Binary file formats\n",
    "\n",
    "The most efficient way to store data on disk when you work with Vaex is to use binary file formats. Vaex can export a DataFrame to HDF5, Apache Arrow, Apache Parquet and FITS:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:18.770260Z",
     "start_time": "2020-11-10T17:48:17.899526Z"
    }
   },
   "outputs": [],
   "source": [
    "df.export_hdf5('./data/io/output_data.hdf5')\n",
    "df.export_arrow('./data/io/output_data.arrow')\n",
    "df.export_parquet('./data/io/output_data.parquet')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alternatively, one can simply use:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:19.633015Z",
     "start_time": "2020-11-10T17:48:18.772149Z"
    }
   },
   "outputs": [],
   "source": [
    "df.export('./data/io/output_data.hdf5')\n",
    "df.export('./data/io/output_data.arrow')\n",
    "df.export('./data/io/output_data.parquet')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "where Vaex determines the file format based on the extension of the specified file name. If the extension is not recognized, an exception is raised.\n",
     "\n",
     "When exporting to the Apache Arrow and Apache Parquet file formats, the data is written in chunks, enabling the export of data that does not fit into RAM all at once. A custom chunk size can be specified via the `chunk_size` argument, which defaults to `1048576`. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:20.350048Z",
     "start_time": "2020-11-10T17:48:19.637434Z"
    }
   },
   "outputs": [],
   "source": [
    "df.export('./data/io/output_data.parquet', chunk_size=10_000)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Vaex supports direct writing to Amazon's S3 and Google Cloud Storage buckets when exporting the data to Apache Arrow and Apache Parquet file formats. Much like when opening a file, the `fs_options` dictionary can be specified to pass arguments to the underlying file system, for example authentication credentials. Here are two examples:\n",
    "\n",
    "```\n",
    "# Export to Google Cloud Storage\n",
    "df.export_arrow(to='gs://my-gs-bucket/my_data.arrow', fs_options={'token': my_token})\n",
    "\n",
    "# Export to Amazon's S3\n",
    "df.export_parquet(to='s3://my-s3-bucket/my_data.parquet', fs_options={'access_key': my_key, 'secret_key': my_secret_key})\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Text based file formats\n",
     "\n",
     "At times, it may be useful to export the data to disk in a text based file format such as CSV. In that case one can simply do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.083429Z",
     "start_time": "2020-11-10T17:48:20.352188Z"
    }
   },
   "outputs": [],
   "source": [
    "df.export_csv('./data/io/output_data.csv')  # `chunk_size` has a default value of 1_000_000"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The `df.export_csv` method uses `pandas_df.to_csv` behind the scenes, so one can pass any argument to `df.export_csv` that one would pass to `pandas_df.to_csv`. The data is exported in chunks, the size of which can be specified via the `chunk_size` argument of `df.export_csv`. In this way, data that is too large to fit in RAM can be saved to disk."
   ]
  },
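  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, a hypothetical export with a custom separator and float format, both of which are standard `pandas_df.to_csv` keyword arguments:\n",
    "\n",
    "```\n",
    "# Use ';' as the separator and round floats to two decimals\n",
    "df.export_csv('./data/io/output_data.csv', sep=';', float_format='%.2f', chunk_size=100_000)\n",
    "```"
   ]
  },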
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Export to multiple files in parallel\n",
    "\n",
     "With the `export_many` method one can export a DataFrame to multiple files of the same type in parallel. This is likely to be more performant when exporting very large DataFrames to the cloud than writing a single large Arrow or Parquet file, where each chunk is written in succession. The method also accepts the `fs_options` dictionary, and can be particularly convenient when exporting to cloud storage."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.298443Z",
     "start_time": "2020-11-10T17:48:34.085408Z"
    }
   },
   "outputs": [],
   "source": [
    "df.export_many('./data/io/output_chunk-{i:02}.parquet', chunk_size=100_000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.428657Z",
     "start_time": "2020-11-10T17:48:34.300470Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./data/io/output_chunk-00.parquet ./data/io/output_chunk-02.parquet\r\n",
      "./data/io/output_chunk-01.parquet\r\n"
     ]
    }
   ],
   "source": [
    "!ls ./data/io/output_chunk*.parquet"
   ]
  },
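  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `export_many` also accepts the `fs_options` dictionary, the chunks can be written straight to cloud storage. A sketch, assuming `my_key` and `my_secret_key` hold valid credentials and the bucket name is hypothetical:\n",
    "\n",
    "```\n",
    "# Write the chunks directly to an Amazon S3 bucket, in parallel\n",
    "df.export_many('s3://my-s3-bucket/output_chunk-{i:02}.parquet',\n",
    "               chunk_size=100_000,\n",
    "               fs_options={'access_key': my_key, 'secret_key': my_secret_key})\n",
    "```"
   ]
  },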
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### In-memory data representations\n",
     "\n",
     "Python has a rich ecosystem of libraries for data manipulation, each offering different functionality. Thus, it is often useful to be able to pass data from one library to another. Vaex can pass its data to other libraries via a number of in-memory representations.\n",
    "\n",
    "#### DataFrame representations\n",
    "\n",
    "A Vaex DataFrame can be converted to a pandas DataFrame like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.468855Z",
     "start_time": "2020-11-10T17:48:34.431865Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>x</th>\n",
       "      <th>y</th>\n",
       "      <th>z</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>10</td>\n",
       "      <td>dog</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>cat</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2</td>\n",
       "      <td>30</td>\n",
       "      <td>cow</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>3</td>\n",
       "      <td>40</td>\n",
       "      <td>horse</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>4</td>\n",
       "      <td>50</td>\n",
       "      <td>mouse</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   x   y      z\n",
       "0  0  10    dog\n",
       "1  1  20    cat\n",
       "2  2  30    cow\n",
       "3  3  40  horse\n",
       "4  4  50  mouse"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df = vaex.open('./data/io/sample_simple.hdf5')\n",
    "pandas_df = df.to_pandas_df()\n",
    "pandas_df  # looks the same doesn't it?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "For DataFrames that are too large to fit in memory, one can specify the `chunk_size` argument, in which case the `to_pandas_df` method returns a generator yielding a pandas DataFrame with as many rows as specified by the `chunk_size` argument:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.499709Z",
     "start_time": "2020-11-10T17:48:34.470683Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 3\n",
      "   x   y    z\n",
      "0  0  10  dog\n",
      "1  1  20  cat\n",
      "2  2  30  cow\n",
      "\n",
      "3 5\n",
      "   x   y      z\n",
      "0  3  40  horse\n",
      "1  4  50  mouse\n",
      "\n"
     ]
    }
   ],
   "source": [
    "gen = df.to_pandas_df(chunk_size=3)\n",
    "\n",
    "for i1, i2, chunk in gen:\n",
    "    print(i1, i2)\n",
    "    print(chunk)\n",
    "    print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-04-29T10:27:22.706846Z",
     "start_time": "2020-04-29T10:27:22.702412Z"
    }
   },
   "source": [
     "The generator also yields the row numbers of the first and the last element of each chunk, so we know exactly where in the parent DataFrame we are. The conversion methods shown below also support the `chunk_size` argument with the same behaviour.\n",
     "\n",
     "Converting a Vaex DataFrame into an Apache Arrow Table is similar:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.526820Z",
     "start_time": "2020-11-10T17:48:34.502901Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "pyarrow.Table\n",
       "x: int64\n",
       "y: int64\n",
       "z: string"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "arrow_table = df.to_arrow_table()\n",
    "arrow_table"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "One can also convert the DataFrame to a list of arrays. By default, the data is exposed as numpy or Arrow arrays:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.550762Z",
     "start_time": "2020-11-10T17:48:34.534532Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[array([0, 1, 2, 3, 4]),\n",
       " array([10, 20, 30, 40, 50]),\n",
       " <pyarrow.lib.StringArray object at 0x144d847c0>\n",
       " [\n",
       "   \"dog\",\n",
       "   \"cat\",\n",
       "   \"cow\",\n",
       "   \"horse\",\n",
       "   \"mouse\"\n",
       " ]]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "arrays = df.to_arrays()\n",
    "arrays"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "By specifying the `array_type` argument, one can choose whether the data will be represented by numpy arrays, xarray DataArrays, or Python lists."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.678374Z",
     "start_time": "2020-11-10T17:48:34.555545Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<xarray.DataArray (dim_0: 5)>\n",
       " array([0, 1, 2, 3, 4])\n",
       " Dimensions without coordinates: dim_0,\n",
       " <xarray.DataArray (dim_0: 5)>\n",
       " array([10, 20, 30, 40, 50])\n",
       " Dimensions without coordinates: dim_0,\n",
       " <xarray.DataArray (dim_0: 5)>\n",
       " array(['dog', 'cat', 'cow', 'horse', 'mouse'], dtype=object)\n",
       " Dimensions without coordinates: dim_0]"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "arrays = df.to_arrays(array_type='xarray')\n",
    "arrays  # list of xarrays"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.689015Z",
     "start_time": "2020-11-10T17:48:34.681424Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[0, 1, 2, 3, 4],\n",
       " [10, 20, 30, 40, 50],\n",
       " ['dog', 'cat', 'cow', 'horse', 'mouse']]"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "arrays = df.to_arrays(array_type='list')\n",
    "arrays  # list of lists"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Keeping it close to pure Python, one can export a Vaex DataFrame as a dictionary. The same `array_type` keyword argument applies here as well:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.702347Z",
     "start_time": "2020-11-10T17:48:34.695436Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'x': array([0, 1, 2, 3, 4]),\n",
       " 'y': array([10, 20, 30, 40, 50]),\n",
       " 'z': array(['dog', 'cat', 'cow', 'horse', 'mouse'], dtype=object)}"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "d_dict = df.to_dict(array_type='numpy')\n",
    "d_dict"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Alternatively, one can convert a DataFrame to a list of tuples, where the first element of each tuple is the column name and the second is the array representation of the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.728656Z",
     "start_time": "2020-11-10T17:48:34.710991Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('x', [0, 1, 2, 3, 4]),\n",
       " ('y', [10, 20, 30, 40, 50]),\n",
       " ('z', ['dog', 'cat', 'cow', 'horse', 'mouse'])]"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Get the DataFrame as an items list\n",
    "items = df.to_items(array_type='list')\n",
    "items"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "As mentioned earlier, all of the above examples support the `chunk_size` argument, which creates a generator yielding a portion of the DataFrame in the specified format. In the case of the `.to_dict` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.750188Z",
     "start_time": "2020-11-10T17:48:34.733209Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0 2 {'x': [0, 1], 'y': [10, 20], 'z': ['dog', 'cat']}\n",
      "2 4 {'x': [2, 3], 'y': [30, 40], 'z': ['cow', 'horse']}\n",
      "4 5 {'x': [4], 'y': [50], 'z': ['mouse']}\n"
     ]
    }
   ],
   "source": [
    "gen = df.to_dict(array_type='list', chunk_size=2)\n",
    "\n",
    "for i1, i2, chunk in gen:\n",
    "    print(i1, i2, chunk)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Last but not least, a Vaex DataFrame can be lazily exposed as a Dask array:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.786799Z",
     "start_time": "2020-11-10T17:48:34.757322Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<tr>\n",
       "<td>\n",
       "<table>\n",
       "  <thead>\n",
       "    <tr><td> </td><th> Array </th><th> Chunk </th></tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr><th> Bytes </th><td> 80 B </td> <td> 80 B </td></tr>\n",
       "    <tr><th> Shape </th><td> (5, 2) </td> <td> (5, 2) </td></tr>\n",
       "    <tr><th> Count </th><td> 2 Tasks </td><td> 1 Chunks </td></tr>\n",
       "    <tr><th> Type </th><td> int64 </td><td> numpy.ndarray </td></tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</td>\n",
       "<td>\n",
       "<svg width=\"98\" height=\"170\" style=\"stroke:rgb(0,0,0);stroke-width:1\" >\n",
       "\n",
       "  <!-- Horizontal lines -->\n",
       "  <line x1=\"0\" y1=\"0\" x2=\"48\" y2=\"0\" style=\"stroke-width:2\" />\n",
       "  <line x1=\"0\" y1=\"120\" x2=\"48\" y2=\"120\" style=\"stroke-width:2\" />\n",
       "\n",
       "  <!-- Vertical lines -->\n",
       "  <line x1=\"0\" y1=\"0\" x2=\"0\" y2=\"120\" style=\"stroke-width:2\" />\n",
       "  <line x1=\"48\" y1=\"0\" x2=\"48\" y2=\"120\" style=\"stroke-width:2\" />\n",
       "\n",
       "  <!-- Colored Rectangle -->\n",
       "  <polygon points=\"0.0,0.0 48.0,0.0 48.0,120.0 0.0,120.0\" style=\"fill:#ECB172A0;stroke-width:0\"/>\n",
       "\n",
       "  <!-- Text -->\n",
       "  <text x=\"24.000000\" y=\"140.000000\" font-size=\"1.0rem\" font-weight=\"100\" text-anchor=\"middle\" >2</text>\n",
       "  <text x=\"68.000000\" y=\"60.000000\" font-size=\"1.0rem\" font-weight=\"100\" text-anchor=\"middle\" transform=\"rotate(0,68.000000,60.000000)\">5</text>\n",
       "</svg>\n",
       "</td>\n",
       "</tr>\n",
       "</table>"
      ],
      "text/plain": [
       "dask.array<vaex-df-f45dae22-237c-11eb-b5e2, shape=(5, 2), dtype=int64, chunksize=(5, 2), chunktype=numpy.ndarray>"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dask_arrays = df[['x', 'y']].to_dask_array()   # String support coming soon\n",
    "dask_arrays"
   ]
  },
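  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The resulting Dask array is lazy: no data is materialized until a computation is explicitly requested via Dask's own API. A minimal sketch:\n",
    "\n",
    "```\n",
    "# Trigger an actual computation on the lazy Dask array\n",
    "column_sums = dask_arrays.sum(axis=0).compute()\n",
    "```"
   ]
  },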
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Expression representations\n",
    "\n",
     "A single Vaex Expression can also be converted to a variety of in-memory representations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.795858Z",
     "start_time": "2020-11-10T17:48:34.789839Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0    0\n",
       "1    1\n",
       "2    2\n",
       "3    3\n",
       "4    4\n",
       "dtype: int64"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# pandas Series\n",
    "x_series = df.x.to_pandas_series()\n",
    "x_series"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.823953Z",
     "start_time": "2020-11-10T17:48:34.802616Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 1, 2, 3, 4])"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# numpy array\n",
    "x_numpy = df.x.to_numpy()\n",
    "x_numpy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.842045Z",
     "start_time": "2020-11-10T17:48:34.827482Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[0, 1, 2, 3, 4]"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Python list\n",
    "x_list = df.x.tolist()\n",
    "x_list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2020-11-10T17:48:34.864532Z",
     "start_time": "2020-11-10T17:48:34.844773Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<tr>\n",
       "<td>\n",
       "<table>\n",
       "  <thead>\n",
       "    <tr><td> </td><th> Array </th><th> Chunk </th></tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr><th> Bytes </th><td> 40 B </td> <td> 40 B </td></tr>\n",
       "    <tr><th> Shape </th><td> (5,) </td> <td> (5,) </td></tr>\n",
       "    <tr><th> Count </th><td> 2 Tasks </td><td> 1 Chunks </td></tr>\n",
       "    <tr><th> Type </th><td> int64 </td><td> numpy.ndarray </td></tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</td>\n",
       "<td>\n",
       "<svg width=\"170\" height=\"92\" style=\"stroke:rgb(0,0,0);stroke-width:1\" >\n",
       "\n",
       "  <!-- Horizontal lines -->\n",
       "  <line x1=\"0\" y1=\"0\" x2=\"120\" y2=\"0\" style=\"stroke-width:2\" />\n",
       "  <line x1=\"0\" y1=\"42\" x2=\"120\" y2=\"42\" style=\"stroke-width:2\" />\n",
       "\n",
       "  <!-- Vertical lines -->\n",
       "  <line x1=\"0\" y1=\"0\" x2=\"0\" y2=\"42\" style=\"stroke-width:2\" />\n",
       "  <line x1=\"120\" y1=\"0\" x2=\"120\" y2=\"42\" style=\"stroke-width:2\" />\n",
       "\n",
       "  <!-- Colored Rectangle -->\n",
       "  <polygon points=\"0.0,0.0 120.0,0.0 120.0,42.00989029700999 0.0,42.00989029700999\" style=\"fill:#ECB172A0;stroke-width:0\"/>\n",
       "\n",
       "  <!-- Text -->\n",
       "  <text x=\"60.000000\" y=\"62.009890\" font-size=\"1.0rem\" font-weight=\"100\" text-anchor=\"middle\" >5</text>\n",
       "  <text x=\"140.000000\" y=\"21.004945\" font-size=\"1.0rem\" font-weight=\"100\" text-anchor=\"middle\" transform=\"rotate(0,140.000000,21.004945)\">1</text>\n",
       "</svg>\n",
       "</td>\n",
       "</tr>\n",
       "</table>"
      ],
      "text/plain": [
       "dask.array<vaex-expression-f46bc052-237c-11eb-b5e2, shape=(5,), dtype=int64, chunksize=(5,), chunktype=numpy.ndarray>"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Dask array\n",
    "x_dask_array = df.x.to_dask_array()\n",
    "x_dask_array"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.7"
  },
  "widgets": {
   "application/vnd.jupyter.widget-state+json": {
    "state": {},
    "version_major": 2,
    "version_minor": 0
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
