{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# HudiOnHops\n",
    "\n",
    "In this notebook we will introduce the Apache Hudi storage abstraction/library (http://hudi.apache.org/) for doing **incremental** data ingestion to data lakes stored on Hops (e.g a Hopsworks Feature Store).\n",
    "\n",
    "TLDR; Hudi is a storage abstraction/library build on top of Spark. A Hudi dataset stores data in Parquet files and maintains additional metadata to make upserts efficient. A Hudi ingest job is intended to be run as a streaming ingest job, on an interval such as every 15 minutes, reading deltas from a message-bus like Kafka and ingesting the deltas **incrementally** into a data lake.\n",
    "\n",
    "![Incremental ETL](./../images/incr_load.png \"Incremetal ETL\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Background"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Motivation\n",
    "\n",
    "Hudi is an open-source library for doing incremental ingestion of data for large analytical datasets stored on distributed file systems. The library was originally developed at Uber to improve their data latency, but  it is now an Apache project.\n",
    "\n",
    "The main motivation for Hudi is that it reduces the **data latency** for ingesting large datasets into data lakes. Traditional ETL typically involves taking a snapshot of a production database and doing a full load into a data lake (typically stored on a distributed file system). Using the snapshot approach for ETL is simple since the snapshot is immutable and can be loaded as an atomic unit into the data lake. However, the con of taking this approach to doing data ingestion is that it is *slow*. Even if just a single record have been updated since the last data ingestion, the entire table has to be re-written. If you are working with Big Data (TB or PB size datasets) then this will introduce significant *data latency* (up to 24 hours in Uber's case) and *wasted resources* (majority of the writes when ingesting the snapshot is redundant as most of the records have not been updated since the last ETL step). \n",
    "\n",
    "This motivates the use-case for **incremental** data ingestion. Incremental data ingestion means that only deltas/changelogs since the last ingestion are inserted. \n",
    "\n",
    "Incremental ingestion lies in-between traditional batch ingestion and the streaming use-case. It can provide data latency as low as *minutes* for petabyte-scale datasets. The incremental mode for processing introduces new trade-offs compared to streaming and batch. It has lower data latency than traditional batch processing, but a slightly higher latency than stream processing. Why not go full-streaming instead of the incremental processing? It boils down to your requirements and trade-offs. If you need data latency in the order of seconds, then you have to use stream processing (e.g fraud detection). However if your business can do with data latency in the order of say 5 minutes (applications which are fine with this latency could be feature engineering pipelines, building dashboards, or doing near-real-time analytics), then incremental processing really shines. \n",
    "\n",
    "With incremental processing, you process data in *mini-batches* and run the spark job frequently, every 15 minutes or so. By using mini-batches rather than record-by-record streaming, the incremental model makes better use of resources and makes it easier to do complex processing and joins which are more suited for the batch-style of processing rather than stream-processing.\n",
    "\n",
    "![Near Real Time](./../images/near_real_time.jpg \"Near Real Time\")\n",
    "\n",
    "If the data is immutable by design, incremental processing can be done without any additional ingestion library, just use the *append* primitive supported in HDFS through some HDFS client, such as Spark, e.g:\n",
    "\n",
    "```scala\n",
    "newRecordsDf = (...)\n",
    "newRecordsDf.write.format(\"hive\").mode(\"append\").insertInto(tableName)\n",
    "```\n",
    "\n",
    "Unfortunately, data is rarely immutable in practice. A bank transaction might be reverted, a customer might change his or her home adress, and a customer review might be updated, to give a few examples. This is where Hudi comes into the picture. Hudi stands for `Hadoop Upserts anD Incrementals` and brings two new primitives for data engineering on distributed file systems (in addition to append/read):\n",
    "\n",
    "- `Upsert`: the ability to do insertions (appends) and updates efficiently. \n",
    "- `Incremental reads`: the ability to read datasets incrementally using the notion of \"commits\".\n",
    "\n",
    "![Upserts](./../images/upsert_illustration.png \"Upserts\")\n",
    "\n",
    "Lets consider the process of updating a single record in a data lake of Parquet files stored on a distributed file system. Without using Hudi, this would entail scanning the entire dataset to find the record in order to do the update and then rewrite the entire dataset: \n",
    "\n",
    "```scala\n",
    "updatedRecordsDf = (...)\n",
    "updatedRecordsDf.write.format(\"hive\").mode(\"overwrite\").insertInto(tableName) \n",
    "```\n",
    "\n",
    "This does not scale and HDFS/Parquet is not designed for this use-case. With Hudi, the upsert operation is a first-class primitive in the ingestion framework and it is optimized to be fast using index-lookups and atomic updates. We will see how we can use Hudi for this purpose later on in the notebook, but essentially it is as simple as :\n",
    "\n",
    "```scala\n",
    "updatedRecordsDf = (...)\n",
    "upsertDf.write.format(\"org.apache.hudi\")\n",
    "              .option(\"hoodie.datasource.write.operation\", \"upsert\")\n",
    "              ...\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What is Hudi\n",
    "\n",
    "Hudi is a Spark library that is intended to be run as a streaming ingest job, and ingests data as mini-batches (typically on the order of one to two minutes). A Hudi job generally reads delta-updates from a message-bus like Kafka, and upserts them into a data lake stored on a distributed file system. By maintaining bloom indexes and commit logs, Hudi provide ACID transactions, time-travel and scalable upserts.\n",
    "\n",
    "![Hudi Dataset](./../images/hudi_dataset.png \"Hudi Dataset\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### How Hudi can be used for ML and Feature Pipelines\n",
    "\n",
    "Hudi is integrated in the Hopsworks Feature Store for doing incremental feature computation and for point-in-time correctness and backfilling of feature data.\n",
    "\n",
    "![Incremental Feature Engineering](./../images/featurestore_incremental_pull.png \"Incremetal Feature Engineering\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Examples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Starting Spark application\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<table>\n",
       "<tr><th>ID</th><th>YARN Application ID</th><th>Kind</th><th>State</th><th>Spark UI</th><th>Driver log</th><th>Current session?</th></tr><tr><td>9</td><td>application_1571823648811_0102</td><td>spark</td><td>idle</td><td><a target=\"_blank\" href=\"http://ip-172-31-16-142.eu-north-1.compute.internal:8088/proxy/application_1571823648811_0102/\">Link</a></td><td><a target=\"_blank\" href=\"http://ip-172-31-16-142.eu-north-1.compute.internal:8042/node/containerlogs/container_e01_1571823648811_0102_01_000001/demo_featurestore_admin000__meb10000\">Link</a></td><td>✔</td></tr></table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "SparkSession available as 'spark'.\n",
      "import org.apache.hadoop.fs.FileSystem\n",
      "import org.apache.hudi.DataSourceReadOptions\n",
      "import org.apache.hudi.DataSourceWriteOptions\n",
      "import org.apache.hudi.HoodieDataSourceHelpers\n",
      "import org.apache.hudi.NonpartitionedKeyGenerator\n",
      "import org.apache.hudi.SimpleKeyGenerator\n",
      "import org.apache.hudi.common.model.HoodieTableType\n",
      "import org.apache.hudi.config.HoodieWriteConfig\n",
      "import org.apache.hudi.hive.MultiPartKeysValueExtractor\n",
      "import org.apache.hudi.hive.NonPartitionedExtractor\n",
      "import org.apache.log4j.LogManager\n",
      "import org.apache.log4j.Logger\n",
      "import org.apache.spark.api.java.JavaSparkContext\n",
      "import org.apache.spark.sql.DataFrameWriter\n",
      "import org.apache.spark.sql.Dataset\n",
      "import org.apache.spark.sql.Row\n",
      "import org.apache.spark.sql.SaveMode\n",
      "import org.apache.spark.sql.SparkSession\n",
      "import io.hops.util.Hops\n",
      "import org.apache.spark.sql._\n",
      "import spark.implicits._\n",
      "import org.apache.spark.sql.types._\n",
      "import java.sql.Date\n",
      "import java.sql.Timestamp\n",
      "import org.apache.hadoop.fs.{FileSystem, Path}\n"
     ]
    }
   ],
   "source": [
    "import org.apache.hadoop.fs.FileSystem;\n",
    "import org.apache.hudi.DataSourceReadOptions;\n",
    "import org.apache.hudi.DataSourceWriteOptions;\n",
    "import org.apache.hudi.HoodieDataSourceHelpers;\n",
    "import org.apache.hudi.NonpartitionedKeyGenerator;\n",
    "import org.apache.hudi.SimpleKeyGenerator;\n",
    "import org.apache.hudi.common.model.HoodieTableType;\n",
    "import org.apache.hudi.config.HoodieWriteConfig;\n",
    "import org.apache.hudi.hive.MultiPartKeysValueExtractor;\n",
    "import org.apache.hudi.hive.NonPartitionedExtractor;\n",
    "import org.apache.log4j.LogManager;\n",
    "import org.apache.log4j.Logger;\n",
    "import org.apache.spark.api.java.JavaSparkContext;\n",
    "import org.apache.spark.sql.DataFrameWriter;\n",
    "import org.apache.spark.sql.Dataset;\n",
    "import org.apache.spark.sql.Row;\n",
    "import org.apache.spark.sql.SaveMode;\n",
    "import org.apache.spark.sql.SparkSession;\n",
    "import io.hops.util.Hops\n",
    "import org.apache.spark.sql._\n",
    "import spark.implicits._\n",
    "import org.apache.spark.sql.types._\n",
    "import java.sql.Date\n",
    "import java.sql.Timestamp\n",
    "import org.apache.hadoop.fs.{FileSystem, Path}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Bulk Insert of Sample Dataset into a Hudi Dataset\n",
    "\n",
    "Lets first ingest some sample data into a new Hudi dataset. As this is the first ingestion, we don't have to think about whether our ingestion contains any updates, this type of ingestion is referred to as **bulk insert** in Hudi to distinguish it from **upserts** (updates and inserts) and **insert** (only append inserts).\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Generate the sample data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "bulkInsertData: Seq[org.apache.spark.sql.Row] = List([1,2019-03-02,0.4151,Sweden], [2,2019-05-01,1.2151,Ireland], [3,2019-08-06,0.2151,Belgium], [4,2019-08-06,0.8151,Russia])\n",
      "schema: List[org.apache.spark.sql.types.StructField] = List(StructField(id,IntegerType,true), StructField(date,DateType,true), StructField(value,FloatType,true), StructField(country,StringType,true))\n",
      "bulkInsertDf: org.apache.spark.sql.DataFrame = [id: int, date: date ... 2 more fields]\n",
      "+---+----------+------+-------+\n",
      "| id|      date| value|country|\n",
      "+---+----------+------+-------+\n",
      "|  1|2019-03-02|0.4151| Sweden|\n",
      "|  2|2019-05-01|1.2151|Ireland|\n",
      "|  3|2019-08-06|0.2151|Belgium|\n",
      "|  4|2019-08-06|0.8151| Russia|\n",
      "+---+----------+------+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "val bulkInsertData = Seq(\n",
    "    Row(1, Date.valueOf(\"2019-02-30\"), 0.4151f, \"Sweden\"),\n",
    "    Row(2, Date.valueOf(\"2019-05-01\"), 1.2151f, \"Ireland\"),\n",
    "    Row(3, Date.valueOf(\"2019-08-06\"), 0.2151f, \"Belgium\"),\n",
    "    Row(4, Date.valueOf(\"2019-08-06\"), 0.8151f, \"Russia\")\n",
    ")\n",
    "val schema = \n",
    " scala.collection.immutable.List(\n",
    "  StructField(\"id\", IntegerType, true),\n",
    "  StructField(\"date\", DateType, true),\n",
    "  StructField(\"value\", FloatType, true),\n",
    "  StructField(\"country\", StringType, true) \n",
    ")\n",
    "val bulkInsertDf = spark.createDataFrame(\n",
    "  spark.sparkContext.parallelize(bulkInsertData),\n",
    "  StructType(schema)\n",
    ")\n",
    "bulkInsertDf.show(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Bulk load the sample data into a new Hudi dataset using the Hudi DataSource API (http://hudi.apache.org/writing_data.html)\n",
    "\n",
    "We will create a new Hudi dataset/table called `hello_hudi_1` (naming convention for Hopsworks feature store is to have table_name_version) with the schema:\n",
    "\n",
    "```\n",
    "+---+----------+------+-------+\n",
    "| id|      date| value|country|\n",
    "+---+----------+------+-------+\n",
    "|  1|2019-03-02|0.4151| Sweden|\n",
    "|  2|2019-05-01|1.2151|Ireland|\n",
    "|  3|2019-08-06|0.2151|Belgium|\n",
    "|  4|2019-08-06|0.8151| Russia|\n",
    "+---+----------+------+-------+\n",
    "```\n",
    "and the dataset will be partitioned on the `date` column. Moreover we will register the hudi dataset with the project's Hive database as an external table. \n",
    "\n",
    "When creating a Hudi dataset there are lots of options that you can tune by overriding the default values by simply chaining option(parameter,value) to the Spark writer. You can find a list of all options available here: http://hudi.apache.org/configurations.html\n",
    "\n",
    "The most important options we will provide are the following:\n",
    "\n",
    "- `format`: this is the format that the Hudi dataset will take, this should be set to `org.apache.hudi`. A hudi dataset consists of Parquet files, bloom index, and timeline metadata (more about the metadata later in this notebook)\n",
    "- `hoodie.table.name`: the name of the Hudi dataset, it will also be used to register the table with query engines like Hive, Presto, and SparkSQL\n",
    "- `hoodie.datasource.write.storage.type`: the storage type. Whether to use CopyOnWrite or MergeOnRead (this is related to Hudi internals that  we will discuss later on in this notebook)\n",
    "- `hoodie.datasource.write.operation`: the operation to perform. Since this is the first time we insert into the table we can use `bulkinsert` and don't have to apply the extra processing for doing upserts.\n",
    "- `hoodie.datasource.write.recordkey.field`: the key to uniquely identify a record in the dataset. This is used by Hudi when deciding whether an upsert is an update or an insert.\n",
    "- `hoodie.datasource.write.partitionpath.field`: the field to partition the dataset on. When Hudi looks up a record in a Hudi dataset, it will first look up the partition (if the dataset is partitioned) and then use an index to look up which file inside the partition that contains the record.\n",
    "- `hoodie.datasource.write.precombine.field`: Field used in preCombining before actual write. When two records have the same key value, we will pick the one with the largest value for the precombine field\n",
    "- `hoodie.datasource.hive_sync.enable`: whether to sync the hudi dataset with the Hive metastore as an external table.\n",
    "- `hoodie.datasource.hive_sync.table`: the hive table name to sync the hudi dataset with (external table)\n",
    "- `hoodie.datasource.hive_sync.database`: the hive database to synchronize the hudi dataset with\n",
    "- `hoodie.datasource.hive_sync.jdbcurl`: the JDBC url for the hive metastore\n",
    "- `hoodie.datasource.hive_sync.partition_fields`: field in the dataset to use for determining hive partition columns.\n",
    "- `mode`: spark write mode. \n",
    "- `save` the path for saving the Hudi dataset on HopsFS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "trustStore: String = t_certificate\n",
      "pw: String = EJBVJ7UBVK9O0ZFHQAGPMACAYF01PPWQU470BDIMCQAFYLW6G98ACVYKK0B9NRU3\n",
      "keyStore: String = k_certificate\n",
      "hiveDb: String = demo_featurestore_admin000_featurestore\n",
      "jdbcUrl: String = jdbc:hive2://10.0.2.15:9085/demo_featurestore_admin000_featurestore;auth=noSasl;ssl=true;twoWay=true;sslTrustStore=t_certificate;trustStorePassword=EJBVJ7UBVK9O0ZFHQAGPMACAYF01PPWQU470BDIMCQAFYLW6G98ACVYKK0B9NRU3;sslKeyStore=k_certificate;keyStorePassword=EJBVJ7UBVK9O0ZFHQAGPMACAYF01PPWQU470BDIMCQAFYLW6G98ACVYKK0B9NRU3\n",
      "writer: org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] = org.apache.spark.sql.DataFrameWriter@6dacfc3f\n"
     ]
    }
   ],
   "source": [
    "val trustStore = Hops.getTrustStore\n",
    "val pw = Hops.getKeystorePwd\n",
    "val keyStore = Hops.getKeyStore\n",
    "val hiveDb = Hops.getProjectFeaturestore.read\n",
    "val jdbcUrl = (s\"jdbc:hive2://10.0.2.15:9085/$hiveDb;\" \n",
    "                + s\"auth=noSasl;ssl=true;twoWay=true;sslTrustStore=$trustStore;\"\n",
    "                + s\"trustStorePassword=$pw;sslKeyStore=$keyStore;keyStorePassword=$pw\"\n",
    "                )\n",
    "val writer = (bulkInsertDf.write.format(\"org.apache.hudi\")\n",
    "              .option(\"hoodie.table.name\", \"hello_hudi_1\")\n",
    "              .option(\"hoodie.datasource.write.storage.type\", \"COPY_ON_WRITE\")\n",
    "              .option(\"hoodie.datasource.write.operation\", \"bulk_insert\")\n",
    "              .option(\"hoodie.datasource.write.recordkey.field\",\"id\")\n",
    "              .option(\"hoodie.datasource.write.partitionpath.field\", \"date\")\n",
    "              .option(\"hoodie.datasource.write.precombine.field\", \"date\")\n",
    "              .option(\"hoodie.datasource.hive_sync.enable\", \"true\")              \n",
    "              .option(\"hoodie.datasource.hive_sync.table\", \"hello_hudi_1\")\n",
    "              .option(\"hoodie.datasource.hive_sync.database\", hiveDb)\n",
    "              .option(\"hoodie.datasource.hive_sync.jdbcurl\", jdbcUrl)\n",
    "              .option(\"hoodie.datasource.hive_sync.partition_fields\", \"date\")\n",
    "              .option(\"hoodie.datasource.hive_sync.partition_extractor_class\", \"org.apache.hudi.hive.MultiPartKeysValueExtractor\")\n",
    "              .mode(\"overwrite\"))\n",
    "writer.save(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Inspect the results\n",
    "\n",
    "If the Hudi bulk insert was successful we should now see a dataset created at the path `hdfs:///Projects/<projectName>/Resources/hello_hudi_1`. If we list that directory we can see that there are three partitions (recall that we specified the partition field to be `date` and we inserted the dataframe with the contents:\n",
    "\n",
    "```\n",
    "+---+----------+------+-------+\n",
    "| id|      date| value|country|\n",
    "+---+----------+------+-------+\n",
    "|  1|2019-03-02|0.4151| Sweden|\n",
    "|  2|2019-05-01|1.2151|Ireland|\n",
    "|  3|2019-08-06|0.2151|Belgium|\n",
    "|  4|2019-08-06|0.8151| Russia|\n",
    "+---+----------+------+-------+\n",
    "```\n",
    "\n",
    "We can also note that there is a directory called .hoodie. This directory contains Hudi-specific metadata. For example, Hudi maintains timeline-metadata of all the commits made to a Hudi dataset. This enables you to do incremental reads as well as *time travel* (we will look more into this later). I.e in .hoodie there is now a file called `20190830094146.commit` which contains information about the commit that we just made. Inside this file there are various types of metadata about the commit, such as the path to all Parquet files involved in this commit in the various partitions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi_1/.hoodie\n",
      "hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi_1/1551484800000\n",
      "hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi_1/1556668800000\n",
      "hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi_1/1565049600000\n"
     ]
    }
   ],
   "source": [
    "(FileSystem.get(sc.hadoopConfiguration)\n",
    " .listStatus(new Path(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\"))\n",
    " .map(_.getPath).foreach(println)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Inside each partition, the data is stored in regular parquet files: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi_1/1551484800000/.hoodie_partition_metadata\n",
      "hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi_1/1551484800000/e4224951-7ca6-4760-8585-8443f5da18a3-0_0-5-7_20190904114951.parquet\n"
     ]
    }
   ],
   "source": [
    "(FileSystem.get(sc.hadoopConfiguration)\n",
    " .listStatus(new Path(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1/1551484800000/\"))\n",
    " .map(_.getPath).foreach(println)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we inspect the metadata stored together with the data in the parquet files using a tool such as https://github.com/apache/parquet-mr/tree/master/parquet-tools \n",
    "\n",
    "```\n",
    "/srv/hops/hadoop/bin/hadoop jar /tmp/parquet-tools-1.9.0.jar meta hdfs:///Projects/demo_featurestore_admin000/Resources/hello_hudi_1/1551484800000/1592f902-da1f-44c3-976b-035aebc93278-0_0-37-75_20190830101505.parquet\n",
    "```\n",
    "we can see that inside the parquet files, Hudi stores a BloomIndex so that it quickly can lookup whether a certain record is included inside a parquet file or not.\n",
    "\n",
    "Sample metadata in the parquet file might be:\n",
    "\n",
    "```\n",
    "file:                   hdfs://10.0.2.15:8020/Projects/demo_featurestore_admin000/Resources/hello_hudi/1551484800000/1592f902-da1f-44c3-976b-035aebc93278-0_0-37-75_20190830101505.parquet \n",
    "creator:                parquet-mr version 1.10.0 (build 031a6654009e3b82020012a18434c582bd74c73a) \n",
    "extra:                  org.apache.hudi.bloomfilter = /////wAAAB4BACd9PgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA...\n",
    "extra:                  hoodie_min_record_key = 1 \n",
    "extra:                  parquet.avro.schema = {\"type\":\"record\",\"name\":\"hello_hudi_record\",\"namespace\":\"hoodie.hello_hudi\",\"fields\":[{\"name\":\"_hoodie_commit_time\",\"type\":[\"null\",\"string\"],\"doc\":\"\",\"default\":null},{\"name\":\"_hoodie_commit_seqno\",\"type\":[\"null\",\"string\"],\"doc\":\"\",\"default\":null},{\"name\":\"_hoodie_record_key\",\"type\":[\"null\",\"string\"],\"doc\":\"\",\"default\":null},{\"name\":\"_hoodie_partition_path\",\"type\":[\"null\",\"string\"],\"doc\":\"\",\"default\":null},{\"name\":\"_hoodie_file_name\",\"type\":[\"null\",\"string\"],\"doc\":\"\",\"default\":null},{\"name\":\"id\",\"type\":[\"int\",\"null\"]},{\"name\":\"date\",\"type\":[\"long\",\"null\"]},{\"name\":\"value\",\"type\":[\"float\",\"null\"]},{\"name\":\"country\",\"type\":[\"string\",\"null\"]}]} \n",
    "extra:                  writer.model.name = avro \n",
    "extra:                  hoodie_max_record_key = 1 \n",
    "\n",
    "file schema:            hoodie.hello_hudi.hello_hudi_record \n",
    "--------------------------------------------------------------------------------\n",
    "_hoodie_commit_time:    OPTIONAL BINARY O:UTF8 R:0 D:1\n",
    "_hoodie_commit_seqno:   OPTIONAL BINARY O:UTF8 R:0 D:1\n",
    "_hoodie_record_key:     OPTIONAL BINARY O:UTF8 R:0 D:1\n",
    "_hoodie_partition_path: OPTIONAL BINARY O:UTF8 R:0 D:1\n",
    "_hoodie_file_name:      OPTIONAL BINARY O:UTF8 R:0 D:1\n",
    "id:                     OPTIONAL INT32 R:0 D:1\n",
    "date:                   OPTIONAL INT64 R:0 D:1\n",
    "value:                  OPTIONAL FLOAT R:0 D:1\n",
    "country:                OPTIONAL BINARY O:UTF8 R:0 D:1\n",
    "\n",
    "row group 1:            RC:1 TS:1031 OFFSET:4 \n",
    "--------------------------------------------------------------------------------\n",
    "_hoodie_commit_time:     BINARY GZIP DO:0 FPO:4 SZ:127/109/0.86 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "_hoodie_commit_seqno:    BINARY GZIP DO:0 FPO:131 SZ:152/134/0.88 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "_hoodie_record_key:      BINARY GZIP DO:0 FPO:283 SZ:62/44/0.71 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "_hoodie_partition_path:  BINARY GZIP DO:0 FPO:345 SZ:120/104/0.87 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "_hoodie_file_name:       BINARY GZIP DO:0 FPO:465 SZ:397/386/0.97 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "id:                      INT32 GZIP DO:0 FPO:862 SZ:73/55/0.75 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "date:                    INT64 GZIP DO:0 FPO:935 SZ:95/75/0.79 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "value:                   FLOAT GZIP DO:0 FPO:1030 SZ:75/55/0.73 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "country:                 BINARY GZIP DO:0 FPO:1105 SZ:87/69/0.79 VC:1 ENC:PLAIN,RLE,BIT_PACKED\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Hudi Commits\n",
    "\n",
    "Hudi introduces the notion of `commits` which means that it supports certain properties of traditional databases such as single-table transactions, snapshot isolation, atomic upserts and savepoints for data recovery. If an ingestion fails for some reason, no partial results will be written rather the ingestion will be roll-backed. The commit is implemented using atomic `mv` operation in HDFS. \n",
    "\n",
    "Currently, the hudi dataset contains only a single commit as we've just done a single bulk-insert:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "res5: String = 20190904114951\n"
     ]
    }
   ],
   "source": [
    "HoodieDataSourceHelpers.latestCommit(FileSystem.get(sc.hadoopConfiguration), \n",
    "                                     s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "res6: String = org.apache.hudi.common.table.timeline.HoodieDefaultTimeline: [20190904114951__commit__COMPLETED]\n"
     ]
    }
   ],
   "source": [
    "HoodieDataSourceHelpers.allCompletedCommitsCompactions(FileSystem.get(sc.hadoopConfiguration), \n",
    "                                     s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\").toString"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Query the Hudi Dataset\n",
    "\n",
    "Since we registered the hudi dataset with Hive (table name: `hello_hudi_1`) we can query it from Hive using SparkSQL or some other Hive client. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "res7: org.apache.spark.sql.DataFrame = []\n"
     ]
    }
   ],
   "source": [
    "spark.sql(s\"use ${Hops.getProjectFeaturestore.read}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+--------------------+--------------------+-----------+\n",
      "|            database|           tableName|isTemporary|\n",
      "+--------------------+--------------------+-----------+\n",
      "|demo_featurestore...|attendances_featu...|      false|\n",
      "|demo_featurestore...|    games_features_1|      false|\n",
      "|demo_featurestore...|        hello_hudi_1|      false|\n",
      "|demo_featurestore...|  players_features_1|      false|\n",
      "|demo_featurestore...|season_scores_fea...|      false|\n",
      "+--------------------+--------------------+-----------+\n",
      "only showing top 5 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(\"show tables\").show(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we inspect the Hive table we can see that Hudi created a bunch of extra columns for us to track lineage of the data, e.g SQL projections on the field `_hoodie_commit_time` can be used to make temporal queries and inspect the value of the table at different time steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------------------+---------+-------+\n",
      "|col_name               |data_type|comment|\n",
      "+-----------------------+---------+-------+\n",
      "|_hoodie_commit_time    |string   |null   |\n",
      "|_hoodie_commit_seqno   |string   |null   |\n",
      "|_hoodie_record_key     |string   |null   |\n",
      "|_hoodie_partition_path |string   |null   |\n",
      "|_hoodie_file_name      |string   |null   |\n",
      "|id                     |int      |null   |\n",
      "|value                  |float    |null   |\n",
      "|country                |string   |null   |\n",
      "|date                   |bigint   |null   |\n",
      "|# Partition Information|         |       |\n",
      "|# col_name             |data_type|comment|\n",
      "|date                   |bigint   |null   |\n",
      "+-----------------------+---------+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(\"describe hello_hudi_1\").show(20, false)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To query the table we have to specify the format `org.apache.hudi` to tell Spark to use the Hudi input format, which will automatically filter the parquet files and only return the data of the latest commit. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hello_hudi_df: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 7 more fields]\n",
      "warning: there was one deprecation warning; re-run with -deprecation for details\n",
      "+--------------------+---------+-------+\n",
      "|            col_name|data_type|comment|\n",
      "+--------------------+---------+-------+\n",
      "| _hoodie_commit_time|   string|   null|\n",
      "|_hoodie_commit_seqno|   string|   null|\n",
      "|  _hoodie_record_key|   string|   null|\n",
      "|_hoodie_partition...|   string|   null|\n",
      "|   _hoodie_file_name|   string|   null|\n",
      "|                  id|      int|   null|\n",
      "|                date|   bigint|   null|\n",
      "|               value|    float|   null|\n",
      "|             country|   string|   null|\n",
      "+--------------------+---------+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "val hello_hudi_df = (spark.read.format(\"org.apache.hudi\")\n",
    "                     .load(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1/*/*\"))\n",
    "hello_hudi_df.registerTempTable(\"hello_hudi_df\")\n",
    "spark.sql(\"describe hello_hudi_df\").show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+------+-------------+-------+\n",
      "| id| value|         date|country|\n",
      "+---+------+-------------+-------+\n",
      "|  2|1.2151|1556668800000|Ireland|\n",
      "|  4|0.8151|1565049600000| Russia|\n",
      "|  3|0.2151|1565049600000|Belgium|\n",
      "|  1|0.4151|1551484800000| Sweden|\n",
      "+---+------+-------------+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(\"select id, value, date, country from hello_hudi_df\").show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Upsert into a Hudi Dataset\n",
    "\n",
    "So far we have not done anything Hudi-specific; we simply did a regular bulk-insert of some data into a Hudi dataset. We could have done the same thing using plain Spark without Hudi. Now, however, we will look at how to do upserts, and how Hudi enables us to do them efficiently."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Generate Sample Upserts Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "upsertData: Seq[org.apache.spark.sql.Row] = List([5,2019-03-02,0.7921,Northern Ireland], [1,2019-05-01,1.151,Norway], [3,2019-08-06,0.999,Belgium], [6,2019-08-06,0.0151,France])\n",
      "upsertDf: org.apache.spark.sql.DataFrame = [id: int, date: date ... 2 more fields]\n",
      "+---+----------+------+----------------+\n",
      "| id|      date| value|         country|\n",
      "+---+----------+------+----------------+\n",
      "|  5|2019-03-02|0.7921|Northern Ireland|\n",
      "|  1|2019-05-01| 1.151|          Norway|\n",
      "|  3|2019-08-06| 0.999|         Belgium|\n",
      "|  6|2019-08-06|0.0151|          France|\n",
      "+---+----------+------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "val upsertData = Seq(\n",
    "    Row(5, Date.valueOf(\"2019-03-02\"), 0.7921f, \"Northern Ireland\"), //Insert\n",
    "    Row(1, Date.valueOf(\"2019-05-01\"), 1.151f, \"Norway\"), //Update\n",
    "    Row(3, Date.valueOf(\"2019-08-06\"), 0.999f, \"Belgium\"), //Update\n",
    "    Row(6, Date.valueOf(\"2019-08-06\"), 0.0151f, \"France\") //Insert\n",
    ")\n",
    "val upsertDf = spark.createDataFrame(\n",
    "  spark.sparkContext.parallelize(upsertData),\n",
    "  StructType(schema)\n",
    ")\n",
    "upsertDf.show(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Make the Upsert using Hudi\n",
    "\n",
    "1. Change `hoodie.datasource.write.operation` from `bulk_insert` to `upsert`.\n",
    "2. Change the Spark write mode from \"overwrite\" to \"append\".\n",
    "3. Change `bulkInsertDf` to `upsertDf`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "trustStore: String = t_certificate\n",
      "pw: String = EJBVJ7UBVK9O0ZFHQAGPMACAYF01PPWQU470BDIMCQAFYLW6G98ACVYKK0B9NRU3\n",
      "keyStore: String = k_certificate\n",
      "hiveDb: String = demo_featurestore_admin000_featurestore\n",
      "jdbcUrl: String = jdbc:hive2://10.0.2.15:9085/demo_featurestore_admin000_featurestore;auth=noSasl;ssl=true;twoWay=true;sslTrustStore=t_certificate;trustStorePassword=EJBVJ7UBVK9O0ZFHQAGPMACAYF01PPWQU470BDIMCQAFYLW6G98ACVYKK0B9NRU3;sslKeyStore=k_certificate;keyStorePassword=EJBVJ7UBVK9O0ZFHQAGPMACAYF01PPWQU470BDIMCQAFYLW6G98ACVYKK0B9NRU3\n",
      "writer: org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] = org.apache.spark.sql.DataFrameWriter@6e378681\n"
     ]
    }
   ],
   "source": [
    "val trustStore = Hops.getTrustStore\n",
    "val pw = Hops.getKeystorePwd\n",
    "val keyStore = Hops.getKeyStore\n",
    "val hiveDb = Hops.getProjectFeaturestore.read\n",
    "val jdbcUrl = (s\"jdbc:hive2://10.0.2.15:9085/$hiveDb;\" \n",
    "                + s\"auth=noSasl;ssl=true;twoWay=true;sslTrustStore=$trustStore;\"\n",
    "                + s\"trustStorePassword=$pw;sslKeyStore=$keyStore;keyStorePassword=$pw\"\n",
    "                )\n",
    "val writer = (upsertDf.write.format(\"org.apache.hudi\")\n",
    "              .option(\"hoodie.table.name\", \"hello_hudi_1\")\n",
    "              .option(\"hoodie.datasource.write.storage.type\", \"COPY_ON_WRITE\")\n",
    "              .option(\"hoodie.datasource.write.operation\", \"upsert\")\n",
    "              .option(\"hoodie.datasource.write.recordkey.field\",\"id\")\n",
    "              .option(\"hoodie.datasource.write.partitionpath.field\", \"date\")\n",
    "              .option(\"hoodie.datasource.write.precombine.field\", \"date\")\n",
    "              .option(\"hoodie.datasource.hive_sync.enable\", \"true\")              \n",
    "              .option(\"hoodie.datasource.hive_sync.table\", \"hello_hudi_1\")\n",
    "              .option(\"hoodie.datasource.hive_sync.database\", hiveDb)\n",
    "              .option(\"hoodie.datasource.hive_sync.jdbcurl\", jdbcUrl)\n",
    "              .option(\"hoodie.datasource.hive_sync.partition_fields\", \"date\")\n",
    "              .option(\"hoodie.datasource.hive_sync.partition_extractor_class\", \"org.apache.hudi.hive.MultiPartKeysValueExtractor\")\n",
    "              .mode(\"append\"))\n",
    "writer.save(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Inspect the results\n",
    "\n",
    "Notice that although Hudi retains the old values of the records from the previous commit, when you query the Hive table using the `org.apache.hudi` input format it will only return the values of the latest commit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  1|0.4151|1551484800000|          Sweden|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "|  2|1.2151|1556668800000|         Ireland|\n",
      "|  1| 1.151|1556668800000|          Norway|\n",
      "|  4|0.8151|1565049600000|          Russia|\n",
      "|  6|0.0151|1565049600000|          France|\n",
      "+---+------+-------------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(\"select id, value, date, country from hello_hudi_1\").show(20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Inspect the updated commit timeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "res16: String = 20190904115157\n"
     ]
    }
   ],
   "source": [
    "HoodieDataSourceHelpers.latestCommit(FileSystem.get(sc.hadoopConfiguration), \n",
    "                                     s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "res17: String = org.apache.hudi.common.table.timeline.HoodieDefaultTimeline: [20190904114951__commit__COMPLETED],[20190904115157__commit__COMPLETED]\n"
     ]
    }
   ],
   "source": [
    "HoodieDataSourceHelpers.allCompletedCommitsCompactions(FileSystem.get(sc.hadoopConfiguration), \n",
    "                                     s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\").toString"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "timeline: org.apache.hudi.common.table.HoodieTimeline = org.apache.hudi.common.table.timeline.HoodieDefaultTimeline: [20190904114951__commit__COMPLETED],[20190904115157__commit__COMPLETED]\n",
      "firstTimestamp: String = 20190904114951\n",
      "secondTimestamp: String = 20190904115157\n"
     ]
    }
   ],
   "source": [
    "val timeline = HoodieDataSourceHelpers.allCompletedCommitsCompactions(FileSystem.get(sc.hadoopConfiguration), \n",
    "                                     s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\")\n",
    "val firstTimestamp = timeline.firstInstant.get.getTimestamp\n",
    "val secondTimestamp = timeline.nthInstant(1).get.getTimestamp"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Time Travel\n",
    "\n",
    "Using the timeline metadata we can inspect the state of the table at a specific point in time, and we can pull changes from Hudi incrementally."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+------+-------------+-------+\n",
      "| id| value|         date|country|\n",
      "+---+------+-------------+-------+\n",
      "|  1|0.4151|1551484800000| Sweden|\n",
      "|  2|1.2151|1556668800000|Ireland|\n",
      "|  4|0.8151|1565049600000| Russia|\n",
      "+---+------+-------------+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(s\"select id, value, date, country from hello_hudi_1 where _hoodie_commit_time=$firstTimestamp\").show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "|  1| 1.151|1556668800000|          Norway|\n",
      "|  6|0.0151|1565049600000|          France|\n",
      "+---+------+-------------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(s\"select id, value, date, country from hello_hudi_1 where _hoodie_commit_time=$secondTimestamp\").show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Hudi also supports incremental reads. To use this feature we have to change the view type from the default \"read optimized\" to \"incremental\" via the configuration parameter `hoodie.datasource.view.type`. We also have to specify the commit range from which we want to pull changes, using the properties `hoodie.datasource.read.begin.instanttime` and `hoodie.datasource.read.end.instanttime`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "incrementalDf: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 7 more fields]\n",
      "warning: there was one deprecation warning; re-run with -deprecation for details\n",
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "|  1| 1.151|1556668800000|          Norway|\n",
      "|  6|0.0151|1565049600000|          France|\n",
      "+---+------+-------------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "// Pull changes that happened *after* the first commit\n",
    "val incrementalDf = (spark.read.format(\"org.apache.hudi\")\n",
    "             .option(\"hoodie.datasource.view.type\", \"incremental\")\n",
    "             .option(\"hoodie.datasource.read.begin.instanttime\", firstTimestamp) \n",
    "             .load(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\"))\n",
    "incrementalDf.registerTempTable(\"incremental_df\")\n",
    "spark.sql(\"select id, value, date, country from incremental_df\").show(20)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "incrementalDf: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 7 more fields]\n",
      "warning: there was one deprecation warning; re-run with -deprecation for details\n",
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  1|0.4151|1551484800000|          Sweden|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "|  2|1.2151|1556668800000|         Ireland|\n",
      "|  1| 1.151|1556668800000|          Norway|\n",
      "|  4|0.8151|1565049600000|          Russia|\n",
      "|  6|0.0151|1565049600000|          France|\n",
      "+---+------+-------------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "// Pull changes that include both commits (from 2017):\n",
    "val incrementalDf = (spark.read.format(\"org.apache.hudi\")\n",
    "             .option(\"hoodie.datasource.view.type\", \"incremental\")\n",
    "             .option(\"hoodie.datasource.read.begin.instanttime\", \"20170830115554\") \n",
    "             .load(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\"))\n",
    "incrementalDf.registerTempTable(\"incremental_df\")\n",
    "spark.sql(\"select id, value, date, country from incremental_df\").show(20)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "incrementalDf: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 7 more fields]\n",
      "warning: there was one deprecation warning; re-run with -deprecation for details\n",
      "+---+------+-------------+-------+\n",
      "| id| value|         date|country|\n",
      "+---+------+-------------+-------+\n",
      "|  2|1.2151|1556668800000|Ireland|\n",
      "|  4|0.8151|1565049600000| Russia|\n",
      "|  3|0.2151|1565049600000|Belgium|\n",
      "|  1|0.4151|1551484800000| Sweden|\n",
      "+---+------+-------------+-------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//Pull only the first commit\n",
    "val incrementalDf = (spark.read.format(\"org.apache.hudi\")\n",
    "             .option(\"hoodie.datasource.view.type\", \"incremental\")\n",
    "             .option(\"hoodie.datasource.read.begin.instanttime\", \"20170830115554\")\n",
    "             .option(\"hoodie.datasource.read.end.instanttime\", firstTimestamp)\n",
    "             .load(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\"))\n",
    "incrementalDf.registerTempTable(\"incremental_df\")\n",
    "spark.sql(\"select id, value, date, country from incremental_df\").show(20)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "incrementalDf: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 7 more fields]\n",
      "warning: there was one deprecation warning; re-run with -deprecation for details\n",
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "|  1| 1.151|1556668800000|          Norway|\n",
      "|  6|0.0151|1565049600000|          France|\n",
      "+---+------+-------------+----------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "//Pull only the second commit\n",
    "val incrementalDf = (spark.read.format(\"org.apache.hudi\")\n",
    "             .option(\"hoodie.datasource.view.type\", \"incremental\")\n",
    "             .option(\"hoodie.datasource.read.begin.instanttime\", firstTimestamp)\n",
    "             .option(\"hoodie.datasource.read.end.instanttime\", secondTimestamp)\n",
    "             .load(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1\"))\n",
    "incrementalDf.registerTempTable(\"incremental_df\")\n",
    "spark.sql(\"select id, value, date, country from incremental_df\").show(20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Integration with Hopsworks Feature Store\n",
    "\n",
    "So far we have created a Hudi dataset at the path `hdfs:///Projects/${Hops.getProjectName}/Resources/hello_hudi_1` and registered it with the Hive metastore as an external table named `hello_hudi_1` in the feature store Hive database (`Hops.getProjectFeaturestore.read`).\n",
    "\n",
    "However, the dataset has not yet been registered with the Feature Store. To register the table with the Feature Store, we can use the Feature Store Scala SDK and the `syncHiveTableWithFeaturestore` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "import scala.collection.JavaConverters._\n",
      "games_features_1\n",
      "games_features_on_demand_tour_1\n",
      "season_scores_features_1\n",
      "attendances_features_1\n",
      "players_features_1\n",
      "teams_features_1\n",
      "res32: scala.collection.mutable.Buffer[Unit] = ArrayBuffer((), (), (), (), (), ())\n"
     ]
    }
   ],
   "source": [
    "import scala.collection.JavaConverters._\n",
    "Hops.getFeaturegroups.read.asScala.map(println)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---------------------------------------+------------------------+-----------+\n",
      "|database                               |tableName               |isTemporary|\n",
      "+---------------------------------------+------------------------+-----------+\n",
      "|demo_featurestore_admin000_featurestore|attendances_features_1  |false      |\n",
      "|demo_featurestore_admin000_featurestore|games_features_1        |false      |\n",
      "|demo_featurestore_admin000_featurestore|hello_hudi_1            |false      |\n",
      "|demo_featurestore_admin000_featurestore|players_features_1      |false      |\n",
      "|demo_featurestore_admin000_featurestore|season_scores_features_1|false      |\n",
      "+---------------------------------------+------------------------+-----------+\n",
      "only showing top 5 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "spark.sql(\"show tables\").show(5, false)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "Hops.syncHiveTableWithFeaturestore(\"hello_hudi\").setVersion(1).setDescription(\"test\").write()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can verify that the Hudi dataset is now registered with the Feature Store by going to the Feature Store UI.\n",
    "\n",
    "We can also list the names of all available feature groups using the method `getFeaturegroups`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "import scala.collection.JavaConverters._\n",
      "games_features_1\n",
      "games_features_on_demand_tour_1\n",
      "season_scores_features_1\n",
      "attendances_features_1\n",
      "players_features_1\n",
      "teams_features_1\n",
      "hello_hudi_1\n",
      "res35: scala.collection.mutable.Buffer[Unit] = ArrayBuffer((), (), (), (), (), (), ())\n"
     ]
    }
   ],
   "source": [
    "import scala.collection.JavaConverters._\n",
    "Hops.getFeaturegroups.read.asScala.map(println)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the Hudi dataset has been registered with the Feature Store, it can be read using `getFeaturegroup`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+----------------+-------------+\n",
      "|_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path|   _hoodie_file_name| id| value|         country|         date|\n",
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+----------------+-------------+\n",
      "|     20190904115157|  20190904115157_0_5|                 3|         1565049600000|9d24f72d-e7da-497...|  3| 0.999|         Belgium|1565049600000|\n",
      "|     20190904114951|  20190904114951_0_1|                 1|         1551484800000|e4224951-7ca6-476...|  1|0.4151|          Sweden|1551484800000|\n",
      "|     20190904115157|  20190904115157_1_6|                 5|         1551484800000|e4224951-7ca6-476...|  5|0.7921|Northern Ireland|1551484800000|\n",
      "|     20190904114951|  20190904114951_1_2|                 2|         1556668800000|7b71c5fc-73e3-481...|  2|1.2151|         Ireland|1556668800000|\n",
      "|     20190904115157|  20190904115157_2_7|                 1|         1556668800000|7b71c5fc-73e3-481...|  1| 1.151|          Norway|1556668800000|\n",
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+----------------+-------------+\n",
      "only showing top 5 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "Hops.getFeaturegroup(\"hello_hudi\").setVersion(1).read().show(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also query the Hudi dataset directly with SQL from the Feature Store SDK:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  1|0.4151|1551484800000|          Sweden|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "|  2|1.2151|1556668800000|         Ireland|\n",
      "|  1| 1.151|1556668800000|          Norway|\n",
      "+---+------+-------------+----------------+\n",
      "only showing top 5 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "Hops.queryFeaturestore(\"select id, value, date, country from hello_hudi_1\").read.show(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is also possible to use the Feature Store API directly to create feature groups with `Hops.createFeaturegroup().setHudi(true)`. This will create the Hudi dataset and register it with both Hive and the Feature Store. It sets sensible Hudi defaults, but you can override them by providing your own `Map[String, String]` of Hudi arguments: `Hops.createFeaturegroup().setHudi(true).setHudiArgs(map)`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "import scala.collection.JavaConversions._\n",
      "import collection.JavaConverters._\n",
      "sampleData: Seq[org.apache.spark.sql.Row] = List([1,2019-03-02,0.4151,Sweden], [2,2019-05-01,1.2151,Ireland], [3,2019-08-06,0.2151,Belgium], [4,2019-08-06,0.8151,Russia])\n",
      "schema: List[org.apache.spark.sql.types.StructField] = List(StructField(id,IntegerType,true), StructField(date,DateType,true), StructField(value,FloatType,true), StructField(country,StringType,true))\n",
      "sampleDf: org.apache.spark.sql.DataFrame = [id: int, date: date ... 2 more fields]\n",
      "partitionCols: List[String] = List(date)\n"
     ]
    }
   ],
   "source": [
    "import scala.collection.JavaConversions._\n",
    "import collection.JavaConverters._\n",
    "val sampleData = Seq(\n",
    "    Row(1, Date.valueOf(\"2019-03-02\"), 0.4151f, \"Sweden\"),\n",
    "    Row(2, Date.valueOf(\"2019-05-01\"), 1.2151f, \"Ireland\"),\n",
    "    Row(3, Date.valueOf(\"2019-08-06\"), 0.2151f, \"Belgium\"),\n",
    "    Row(4, Date.valueOf(\"2019-08-06\"), 0.8151f, \"Russia\")\n",
    ")\n",
    "val schema = \n",
    " scala.collection.immutable.List(\n",
    "  StructField(\"id\", IntegerType, true),\n",
    "  StructField(\"date\", DateType, true),\n",
    "  StructField(\"value\", FloatType, true),\n",
    "  StructField(\"country\", StringType, true) \n",
    ")\n",
    "val sampleDf = spark.createDataFrame(\n",
    "  spark.sparkContext.parallelize(sampleData),\n",
    "  StructType(schema)\n",
    ")\n",
    "val partitionCols = List(\"date\")\n",
    "(Hops.createFeaturegroup(\"hudi_featuregroup_test\")\n",
    "                         .setHudi(true)\n",
    "                         .setPartitionBy(partitionCols)\n",
    "                         .setDataframe(sampleDf)\n",
    "                         .setPrimaryKey(List(\"id\")).write())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+-------+-------------+\n",
      "|_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path|   _hoodie_file_name| id| value|country|         date|\n",
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+-------+-------------+\n",
      "|     20190904130344|  20190904130344_1_6|                 2|         1556668800000|882d1680-3553-4be...|  2|1.2151|Ireland|1556668800000|\n",
      "|     20190904130344|  20190904130344_2_7|                 3|         1565049600000|98dc24de-d156-4eb...|  3|0.2151|Belgium|1565049600000|\n",
      "|     20190904130344|  20190904130344_0_5|                 1|         1551484800000|38d72e06-07e3-444...|  1|0.4151| Sweden|1551484800000|\n",
      "|     20190904130344|  20190904130344_3_8|                 4|         1565049600000|935c8005-58d6-463...|  4|0.8151| Russia|1565049600000|\n",
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+-------+-------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "Hops.getFeaturegroup(\"hudi_featuregroup_test\").read.show(20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also override Hudi-specific arguments using the `setHudiArgs` and `setHudiBasePath` methods:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hudiArgs: scala.collection.immutable.Map[String,String] = Map(hoodie.datasource.write.payload.class -> org.apache.hudi.OverwriteWithLatestAvroPayload)\n"
     ]
    }
   ],
   "source": [
    "val hudiArgs = Map[String, String](\n",
    "    \"hoodie.datasource.write.payload.class\"-> \"org.apache.hudi.OverwriteWithLatestAvroPayload\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "(Hops.createFeaturegroup(\"hudi_featuregroup_test_second\")\n",
    "                         .setHudi(true)\n",
    "                         .setPartitionBy(partitionCols)\n",
    "                         .setDataframe(sampleDf)\n",
    "                         .setHudiBasePath(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hudi_featuregroup_test_second\")\n",
    "                         .setHudiArgs(hudiArgs)\n",
    "                         .setPrimaryKey(List(\"id\")).write())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+-------+-------------+\n",
      "|_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path|   _hoodie_file_name| id| value|country|         date|\n",
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+-------+-------------+\n",
      "|     20191029123951| 20191029123951_2_11|                 3|         1565049600000|54cbcf61-8a4b-413...|  3|0.2151|Belgium|1565049600000|\n",
      "|     20191029123951| 20191029123951_1_10|                 2|         1556668800000|12da84f8-4b8e-4cb...|  2|1.2151|Ireland|1556668800000|\n",
      "|     20191029123951| 20191029123951_3_12|                 4|         1565049600000|9b9ed741-eafb-4ca...|  4|0.8151| Russia|1565049600000|\n",
      "|     20191029123951|  20191029123951_0_9|                 1|         1551484800000|5101dd1d-6735-47e...|  1|0.4151| Sweden|1551484800000|\n",
      "+-------------------+--------------------+------------------+----------------------+--------------------+---+------+-------+-------------+\n",
      "\n"
     ]
    }
   ],
   "source": [
    "Hops.getFeaturegroup(\"hudi_featuregroup_test_second\").read.show(20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also use the `insertIntoFeaturegroup` wrapper to make upserts into Hudi datasets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "import scala.collection.JavaConversions._\n",
      "import collection.JavaConverters._\n",
      "upsertData: Seq[org.apache.spark.sql.Row] = List([5,2019-03-02,0.7921,Northern Ireland], [1,2019-05-01,1.151,Norway], [3,2019-08-06,0.999,Belgium], [6,2019-08-06,0.0151,France])\n",
      "schema: List[org.apache.spark.sql.types.StructField] = List(StructField(id,IntegerType,true), StructField(date,DateType,true), StructField(value,FloatType,true), StructField(country,StringType,true))\n",
      "upsertDf: org.apache.spark.sql.DataFrame = [id: int, date: date ... 2 more fields]\n",
      "partitionCols: List[String] = List(date)\n"
     ]
    }
   ],
   "source": [
    "import scala.collection.JavaConversions._\n",
    "import collection.JavaConverters._\n",
    "val upsertData = Seq(\n",
    "    Row(5, Date.valueOf(\"2019-03-02\"), 0.7921f, \"Northern Ireland\"), //Insert\n",
    "    Row(1, Date.valueOf(\"2019-05-01\"), 1.151f, \"Norway\"), //Update\n",
    "    Row(3, Date.valueOf(\"2019-08-06\"), 0.999f, \"Belgium\"), //Update\n",
    "    Row(6, Date.valueOf(\"2019-08-06\"), 0.0151f, \"France\") //Insert\n",
    ")\n",
    "val schema = List(\n",
    "  StructField(\"id\", IntegerType, true),\n",
    "  StructField(\"date\", DateType, true),\n",
    "  StructField(\"value\", FloatType, true),\n",
    "  StructField(\"country\", StringType, true)\n",
    ")\n",
    "val upsertDf = spark.createDataFrame(\n",
    "  spark.sparkContext.parallelize(upsertData),\n",
    "  StructType(schema)\n",
    ")\n",
    "val partitionCols = List(\"date\")\n",
    "(Hops.insertIntoFeaturegroup(\"hudi_featuregroup_test\")\n",
    "                         .setPartitionBy(partitionCols)\n",
    "                         .setDataframe(upsertDf)\n",
    "                         .setMode(\"append\")\n",
    "                         .setPrimaryKey(List(\"id\")).write())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+---+------+-------------+----------------+\n",
      "| id| value|         date|         country|\n",
      "+---+------+-------------+----------------+\n",
      "|  3| 0.999|1565049600000|         Belgium|\n",
      "|  6|0.0151|1565049600000|          France|\n",
      "|  4|0.8151|1565049600000|          Russia|\n",
      "|  1|0.4151|1551484800000|          Sweden|\n",
      "|  5|0.7921|1551484800000|Northern Ireland|\n",
      "+---+------+-------------+----------------+\n",
      "only showing top 5 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "Hops.queryFeaturestore(\"select id, value, date, country from hudi_featuregroup_test_1\").read.show(5)"
   ]
  },
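  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every record in a Hudi dataset carries commit metadata (the `_hoodie_*` columns shown earlier). Querying `_hoodie_commit_time` shows which commit last wrote each row: after the upsert, updated rows carry the second commit's timestamp while untouched rows keep the first commit's. A sketch, assuming the Hive-registered table exposes the Hudi metadata columns (the exact timestamps depend on when your commits ran):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Hops.queryFeaturestore(\"select _hoodie_commit_time, id, value, country from hudi_featuregroup_test_1\").read.show(10)"
   ]
  },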
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "timeline: org.apache.hudi.common.table.HoodieTimeline = org.apache.hudi.common.table.timeline.HoodieDefaultTimeline: [20190904140311__commit__COMPLETED],[20190904140651__commit__COMPLETED]\n"
     ]
    }
   ],
   "source": [
    "val timeline = HoodieDataSourceHelpers.allCompletedCommitsCompactions(FileSystem.get(sc.hadoopConfiguration), \n",
    "                                     s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hudi_featuregroup_test_1\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "firstTimestamp: String = 20190904140311\n"
     ]
    }
   ],
   "source": [
    "val firstTimestamp = timeline.firstInstant.get.getTimestamp"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "secondTimestamp: String = 20190904140651\n"
     ]
    }
   ],
   "source": [
    "val secondTimestamp = timeline.nthInstant(1).get.getTimestamp"
   ]
  },
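  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the commit timestamps from the timeline we can do an **incremental pull**: read only the records written after a given commit instead of scanning the full dataset. The sketch below uses the plain Spark datasource API with string option keys; the exact option names and the format identifier (`org.apache.hudi` in newer releases, `com.uber.hoodie` in older ones) depend on the Hudi version bundled with your Hopsworks installation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Incremental view: only records committed strictly after `firstTimestamp`,\n",
    "// i.e. the rows written by the second (upsert) commit\n",
    "val incrementalDf = (spark.read.format(\"org.apache.hudi\")\n",
    "  .option(\"hoodie.datasource.view.type\", \"incremental\")\n",
    "  .option(\"hoodie.datasource.read.begin.instanttime\", firstTimestamp)\n",
    "  .load(s\"hdfs:///Projects/${Hops.getProjectName}/Resources/hudi_featuregroup_test_1\"))\n",
    "incrementalDf.select(\"_hoodie_commit_time\", \"id\", \"value\", \"country\").show()"
   ]
  },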
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Spark",
   "language": "",
   "name": "sparkkernel"
  },
  "language_info": {
   "codemirror_mode": "text/x-scala",
   "mimetype": "text/x-scala",
   "name": "scala",
   "pygments_lexer": "scala"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}