{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Enable Amazon SageMaker Model Monitor\n",
    "\n",
    "Amazon SageMaker provides the ability to monitor machine learning models in production and detect deviations in data quality compared to a baseline dataset (e.g., the training dataset). This notebook walks you through enabling data capture and setting up continuous monitoring for an existing Endpoint.\n",
    "\n",
    "This notebook helps with the following:\n",
    "* Update your existing SageMaker Endpoint to enable model monitoring\n",
    "* Analyze the training dataset to generate baseline constraints\n",
    "* Set up a MonitoringSchedule for monitoring deviations from the specified baseline\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Step 1: Enable real-time inference data capture"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To enable data capture for monitoring the model data quality, you specify a capture configuration called `DataCaptureConfig`. With this configuration, you can capture the request payload, the response payload, or both. The capture config applies to all production variants. Please provide the Endpoint name in the following cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Please fill in the following for enabling data capture\n",
    "endpoint_name = 'antje-automl-demo-ep'\n",
    "s3_capture_upload_path = 's3://dsoaws-amazon-reviews/monitoring/' #example: s3://bucket-name/path/to/endpoint-data-capture/\n",
    "\n",
    "##### \n",
    "## IMPORTANT\n",
    "##\n",
    "## Please make sure to add the \"s3:PutObject\" permission to the \"role\" you provided in the SageMaker Model\n",
    "## behind this Endpoint. Otherwise, Endpoint data capture will not work.\n",
    "## \n",
    "##### "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-------------------!!"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'EndpointName': 'antje-automl-demo-ep',\n",
       " 'EndpointArn': 'arn:aws:sagemaker:us-west-2:806570384721:endpoint/antje-automl-demo-ep',\n",
       " 'EndpointConfigName': 'antje-automl-demo-ep-2020-05-04-09-17-34-699',\n",
       " 'ProductionVariants': [{'VariantName': 'default-variant-name',\n",
       "   'DeployedImages': [{'SpecifiedImage': '246618743249.dkr.ecr.us-west-2.amazonaws.com/sagemaker-sklearn-automl:0.1.0-cpu-py3',\n",
       "     'ResolvedImage': '246618743249.dkr.ecr.us-west-2.amazonaws.com/sagemaker-sklearn-automl@sha256:0ef37807a4d7cb0c7eb670044fa67345bc3e15d5734ea0a9624357a9bd3d26cb',\n",
       "     'ResolutionTime': datetime.datetime(2020, 5, 4, 9, 17, 38, 378000, tzinfo=tzlocal())},\n",
       "    {'SpecifiedImage': '246618743249.dkr.ecr.us-west-2.amazonaws.com/sagemaker-xgboost:0.90-1-cpu-py3',\n",
       "     'ResolvedImage': '246618743249.dkr.ecr.us-west-2.amazonaws.com/sagemaker-xgboost@sha256:97ec7833b3e2773d3924b1a863c5742e348dea61eab21b90693ac3c3bdd08522',\n",
       "     'ResolutionTime': datetime.datetime(2020, 5, 4, 9, 17, 38, 429000, tzinfo=tzlocal())}],\n",
       "   'CurrentWeight': 1.0,\n",
       "   'DesiredWeight': 1.0,\n",
       "   'CurrentInstanceCount': 1,\n",
       "   'DesiredInstanceCount': 1}],\n",
       " 'DataCaptureConfig': {'EnableCapture': True,\n",
       "  'CaptureStatus': 'Started',\n",
       "  'CurrentSamplingPercentage': 50,\n",
       "  'DestinationS3Uri': 's3://dsoaws-amazon-reviews/monitoring/'},\n",
       " 'EndpointStatus': 'InService',\n",
       " 'CreationTime': datetime.datetime(2020, 5, 4, 9, 6, 0, 739000, tzinfo=tzlocal()),\n",
       " 'LastModifiedTime': datetime.datetime(2020, 5, 4, 9, 26, 47, 704000, tzinfo=tzlocal()),\n",
       " 'ResponseMetadata': {'RequestId': '2e66d98d-839c-4a74-8954-0b857fd0d52f',\n",
       "  'HTTPStatusCode': 200,\n",
       "  'HTTPHeaders': {'x-amzn-requestid': '2e66d98d-839c-4a74-8954-0b857fd0d52f',\n",
       "   'content-type': 'application/x-amz-json-1.1',\n",
       "   'content-length': '1207',\n",
       "   'date': 'Mon, 04 May 2020 09:27:07 GMT'},\n",
       "  'RetryAttempts': 0}}"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sagemaker.model_monitor import DataCaptureConfig\n",
    "from sagemaker import RealTimePredictor\n",
    "from sagemaker import session\n",
    "import boto3\n",
    "sm_session = session.Session(boto3.Session())\n",
    "\n",
    "# Change parameters as needed - adjust the sampling percentage and\n",
    "#  choose to capture the request, the response, or both.\n",
    "#  Learn more from our documentation\n",
    "data_capture_config = DataCaptureConfig(\n",
    "                        enable_capture = True,\n",
    "                        sampling_percentage=50,\n",
    "                        destination_s3_uri=s3_capture_upload_path,\n",
    "                        kms_key_id=None,\n",
    "                        capture_options=[\"REQUEST\", \"RESPONSE\"],\n",
    "                        csv_content_types=[\"text/csv\"],\n",
    "                        json_content_types=[\"application/json\"])\n",
    "\n",
    "# Now it is time to apply the new configuration and wait for it to be applied\n",
    "predictor = RealTimePredictor(endpoint=endpoint_name)\n",
    "predictor.update_data_capture_config(data_capture_config=data_capture_config)\n",
    "sm_session.wait_for_endpoint(endpoint=endpoint_name)"
   ]
  },
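  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once capture is active, objects land under the configured S3 prefix as JSON Lines files, one record per invocation. The record layout below is an illustrative assumption (field names may differ by SageMaker version, so treat this as a sketch rather than a guaranteed schema); a captured line can be parsed roughly like this:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Illustrative sketch: a record shaped like SageMaker's data-capture\n",
    "# JSON Lines output (field names here are assumptions, not a guaranteed schema)\n",
    "line = json.dumps({\n",
    "    'captureData': {\n",
    "        'endpointInput':  {'mode': 'INPUT',  'data': '5.1,3.5,1.4'},\n",
    "        'endpointOutput': {'mode': 'OUTPUT', 'data': '0.97'},\n",
    "    },\n",
    "    'eventMetadata': {'eventId': 'abc-123'},\n",
    "})\n",
    "\n",
    "record = json.loads(line)\n",
    "request = record['captureData']['endpointInput']['data']\n",
    "response = record['captureData']['endpointOutput']['data']\n",
    "print(request, '->', response)\n",
    "```"
   ]
  },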
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Before you proceed:\n",
    "Currently, SageMaker supports monitoring Endpoints out of the box only for **tabular (csv, flat-json)** datasets. If your Endpoint uses any other data format, the following steps will NOT work for you.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Step 2: Model Monitor - Baselining"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition to collecting the data, SageMaker allows you to monitor and evaluate the data observed by the Endpoints. For this:\n",
    "1. We need to create a baseline against which to compare the real-time traffic.\n",
    "1. Once the baseline is ready, we can set up a schedule to continuously evaluate and compare against it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Constraint suggestion with baseline/training dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The training dataset that you trained the model with is usually a good baseline dataset. Note that the training dataset schema and the inference dataset schema must match exactly (i.e., the number and order of the features).\n",
    "\n",
    "Using our training dataset, we'll ask SageMaker to suggest a set of baseline constraints and generate descriptive statistics to explore the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Baseline data uri: s3://dsoaws-amazon-reviews/data/amazon_reviews_us_Digital_Software_v1_00_header.csv\n",
      "Baseline results uri: s3://dsoaws-amazon-reviews/data/baseline/\n"
     ]
    }
   ],
   "source": [
    "baseline_data_uri = 's3://dsoaws-amazon-reviews/data/amazon_reviews_us_Digital_Software_v1_00_header.csv' ## example: 's3://bucketname/path/to/baseline/data' - where your training data is\n",
    "baseline_results_uri = 's3://dsoaws-amazon-reviews/data/baseline/' ## example: 's3://bucketname/path/to/baseline/results' - where the results will be stored\n",
    "\n",
    "print('Baseline data uri: {}'.format(baseline_data_uri))\n",
    "print('Baseline results uri: {}'.format(baseline_results_uri))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create a baselining job with the training dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have the training data ready in S3, let's kick off a job to `suggest` constraints. `DefaultModelMonitor.suggest_baseline(..)` kicks off a `ProcessingJob` using a SageMaker-provided Model Monitor container to generate the constraints. Please edit the configuration to fit your needs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Job Name:  baseline-suggestion-job-2020-05-04-09-37-30-762\n",
      "Inputs:  [{'InputName': 'baseline_dataset_input', 'S3Input': {'S3Uri': 's3://dsoaws-amazon-reviews/data/amazon_reviews_us_Digital_Software_v1_00_header.csv', 'LocalPath': '/opt/ml/processing/input/baseline_dataset_input', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]\n",
      "Outputs:  [{'OutputName': 'monitoring_output', 'S3Output': {'S3Uri': 's3://dsoaws-amazon-reviews/data/baseline/', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]\n",
      "...................\u001b[34m2020-05-04 09:40:26,448 - __main__ - INFO - All params:{'ProcessingJobArn': 'arn:aws:sagemaker:us-west-2:806570384721:processing-job/baseline-suggestion-job-2020-05-04-09-37-30-762', 'ProcessingJobName': 'baseline-suggestion-job-2020-05-04-09-37-30-762', 'Environment': {'dataset_format': '{\"csv\": {\"header\": true, \"output_columns_position\": \"START\"}}', 'dataset_source': '/opt/ml/processing/input/baseline_dataset_input', 'output_path': '/opt/ml/processing/output', 'publish_cloudwatch_metrics': 'Disabled'}, 'AppSpecification': {'ImageUri': '159807026194.dkr.ecr.us-west-2.amazonaws.com/sagemaker-model-monitor-analyzer', 'ContainerEntrypoint': None, 'ContainerArguments': None}, 'ProcessingInputs': [{'InputName': 'baseline_dataset_input', 'S3Input': {'LocalPath': '/opt/ml/processing/input/baseline_dataset_input', 'S3Uri': 's3://dsoaws-amazon-reviews/data/amazon_reviews_us_Digital_Software_v1_00_header.csv', 'S3DataDistributionType': 'FullyReplicated', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3CompressionType': 'None', 'S3DownloadMode': 'StartOfJob'}}], 'ProcessingOutputConfig': {'Outputs': [{'OutputName': 'monitoring_output', 'S3Output': {'LocalPath': '/opt/ml/processing/output', 'S3Uri': 's3://dsoaws-amazon-reviews/data/baseline/', 'S3UploadMode': 'EndOfJob'}}], 'KmsKeyId': None}, 'ProcessingResources': {'ClusterConfig': {'InstanceCount': 1, 'InstanceType': 'ml.m5.xlarge', 'VolumeSizeInGB': 20, 'VolumeKmsKeyId': None}}, 'RoleArn': 'arn:aws:iam::806570384721:role/service-role/AmazonSageMaker-ExecutionRole-20191201T115647', 'StoppingCondition': {'MaxRuntimeInSeconds': 3600}}\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,448 - __main__ - INFO - Current Environment:{'dataset_format': '{\"csv\": {\"header\": true, \"output_columns_position\": \"START\"}}', 'dataset_source': '/opt/ml/processing/input/baseline_dataset_input', 'output_path': '/opt/ml/processing/output', 'publish_cloudwatch_metrics': 'Disabled'}\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,449 - DefaultDataAnalyzer - INFO - Performing analysis with input: {\"dataset_source\": \"/opt/ml/processing/input/baseline_dataset_input\", \"dataset_format\": {\"csv\": {\"header\": true, \"output_columns_position\": \"START\"}}, \"record_preprocessor_script\": null, \"post_analytics_processor_script\": null, \"baseline_constraints\": null, \"baseline_statistics\": null, \"output_path\": \"/opt/ml/processing/output\", \"start_time\": null, \"end_time\": null, \"cloudwatch_metrics_directory\": \"/opt/ml/output/metrics/cloudwatch\", \"publish_cloudwatch_metrics\": \"Disabled\", \"sagemaker_endpoint_name\": null, \"sagemaker_monitoring_schedule_name\": null, \"output_message_file\": \"/opt/ml/output/message\"}\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,449 - DefaultDataAnalyzer - INFO - Bootstrapping yarn\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,449 - bootstrap - INFO - Copy aws jars\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,503 - bootstrap - INFO - Copy cluster config\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,503 - bootstrap - INFO - Write runtime cluster config\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,504 - bootstrap - INFO - Resource Config is: {'current_host': 'algo-1', 'hosts': ['algo-1']}\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,512 - bootstrap - INFO - Finished Yarn configuration files setup.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,512 - bootstrap - INFO - Starting spark process for master node algo-1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,512 - bootstrap - INFO - Running command: /usr/hadoop-3.0.0/bin/hdfs namenode -format -force\u001b[0m\n",
      "\u001b[34mWARNING: /usr/hadoop-3.0.0/logs does not exist. Creating.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,977 INFO namenode.NameNode: STARTUP_MSG: \u001b[0m\n",
      "\u001b[34m/************************************************************\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG: Starting NameNode\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG:   host = algo-1/10.0.235.37\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG:   args = [-format, -force]\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG:   version = 3.0.0\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG:   classpath = /usr/hadoop-3.0.0/etc/hadoop:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-io-2.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/asm-5.0.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/json-smart-2.3.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr
/hadoop-3.0.0/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/xz-1.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/re2j-1.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/zookeeper-3.4.9.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/metrics-core-3.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/hadoop-annotations-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/curator-framework-2.12.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-lang3-3.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jers
ey-json-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/avro-1.7.7.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/junit-4.11.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/hadoop-auth-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/gson-2.2.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/curator-client-2.12.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/jackson-core-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/hadoop-aws-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/lib/aws-java-sdk-bundle-1.11.199.jar:/usr/hadoop-3.0.0/share/hadoop/common/hadoop-nfs-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/hadoop-common
-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/common/hadoop-common-3.0.0-tests.jar:/usr/hadoop-3.0.0/share/hadoop/common/hadoop-kms-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-net-3.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/
hdfs/lib/stax2-api-3.1.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/xz-1.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/hadoop-annotations-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-beanu
tils-1.9.3.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/hadoop-auth-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.0.0-tests.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-client-3.0.0-tests.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-3.0.0-tests.jar:/usr/hadoop-3.0
.0/share/hadoop/hdfs/hadoop-hdfs-client-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/hdfs/hadoop-hdfs-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-tests.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/guice-4.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-hadoop-compat-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jsp-api-2.1-6.1.14.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/findbugs-annotations-1.3.9-1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jsp-2.1-6.1.14.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-common-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/metri
cs-core-2.2.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/commons-el-1.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/metrics-core-3.0.1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/servlet-api-2.5-6.1.14.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-annotations-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jasper-compiler-5.5.23.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/joni-2.1.2.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-hadoop2-compat-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jcodings-1.0.8.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-server-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/disruptor-3.3.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/fst-2.50.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jasper-runtime-5.5.23.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-client-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/jamon-runtime-2.4.1.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/commons-csv-1.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-protocol-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-procedure-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/lib/hbase-prefix-tree-1.2.6.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-timelineservice-hbase-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.0.0.jar:/usr/hadoop-
3.0.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-client-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-timelineservice-3.0.0.jar:/usr/hadoop-3\u001b[0m\n",
      "\u001b[34m.0.0/share/hadoop/yarn/hadoop-yarn-api-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-common-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-common-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-timelineservice-hbase-tests-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-registry-3.0.0.jar:/usr/hadoop-3.0.0/share/hadoop/yarn/hadoop-yarn-server-router-3.0.0.jar\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r c25427ceca461ee979d30edd7a4b0f50718e6533; compiled by 'andrew' on 2017-12-08T19:16Z\u001b[0m\n",
      "\u001b[34mSTARTUP_MSG:   java = 1.8.0_242\u001b[0m\n",
      "\u001b[34m************************************************************/\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,986 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:26,989 INFO namenode.NameNode: createNameNode [-format, -force]\u001b[0m\n",
      "\u001b[34mFormatting using clusterid: CID-8dfe5148-cc9e-4ea1-ab96-e9841137835c\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,471 INFO namenode.FSEditLog: Edit logging is async:true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,482 INFO namenode.FSNamesystem: KeyProvider: null\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,484 INFO namenode.FSNamesystem: fsLock is fair: true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,486 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,493 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,493 INFO namenode.FSNamesystem: supergroup          = supergroup\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,493 INFO namenode.FSNamesystem: isPermissionEnabled = true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,493 INFO namenode.FSNamesystem: HA Enabled: false\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,525 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,536 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,536 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,544 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,544 INFO blockmanagement.BlockManager: The block deletion will start around 2020 May 04 09:40:27\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,546 INFO util.GSet: Computing capacity for map BlocksMap\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,546 INFO util.GSet: VM type       = 64-bit\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,547 INFO util.GSet: 2.0% max memory 3.1 GB = 63.5 MB\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,547 INFO util.GSet: capacity      = 2^23 = 8388608 entries\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,582 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: defaultReplication         = 3\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: maxReplication             = 512\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: minReplication             = 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: encryptDataTransfer        = false\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,586 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,615 INFO util.GSet: Computing capacity for map INodeMap\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,615 INFO util.GSet: VM type       = 64-bit\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,615 INFO util.GSet: 1.0% max memory 3.1 GB = 31.8 MB\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,615 INFO util.GSet: capacity      = 2^22 = 4194304 entries\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,617 INFO namenode.FSDirectory: ACLs enabled? false\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,617 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,617 INFO namenode.FSDirectory: XAttrs enabled? true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,617 INFO namenode.NameNode: Caching file names occurring more than 10 times\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,621 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,625 INFO util.GSet: Computing capacity for map cachedBlocks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,625 INFO util.GSet: VM type       = 64-bit\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,625 INFO util.GSet: 0.25% max memory 3.1 GB = 7.9 MB\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,625 INFO util.GSet: capacity      = 2^20 = 1048576 entries\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,662 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,662 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,662 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,665 INFO namenode.FSNamesystem: Retry cache on namenode is enabled\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,665 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,667 INFO util.GSet: Computing capacity for map NameNodeRetryCache\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,667 INFO util.GSet: VM type       = 64-bit\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,667 INFO util.GSet: 0.029999999329447746% max memory 3.1 GB = 976.0 KB\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,667 INFO util.GSet: capacity      = 2^17 = 131072 entries\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,687 INFO namenode.FSImage: Allocated new BlockPoolId: BP-2069675197-10.0.235.37-1588585227681\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,699 INFO common.Storage: Storage directory /opt/amazon/hadoop/hdfs/namenode has been successfully formatted.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,706 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/amazon/hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,786 INFO namenode.FSImageFormatProtobuf: Image file /opt/amazon/hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 389 bytes saved in 0 seconds.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,797 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,800 INFO namenode.NameNode: SHUTDOWN_MSG: \u001b[0m\n",
      "\u001b[34m/************************************************************\u001b[0m\n",
      "\u001b[34mSHUTDOWN_MSG: Shutting down NameNode at algo-1/10.0.235.37\u001b[0m\n",
      "\u001b[34m************************************************************/\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:27,812 - bootstrap - INFO - Running command: /usr/hadoop-3.0.0/bin/hdfs --daemon start namenode\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:29,860 - bootstrap - INFO - Failed to run /usr/hadoop-3.0.0/bin/hdfs --daemon start namenode, return code 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:29,860 - bootstrap - INFO - Running command: /usr/hadoop-3.0.0/bin/hdfs --daemon start datanode\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:31,915 - bootstrap - INFO - Failed to run /usr/hadoop-3.0.0/bin/hdfs --daemon start datanode, return code 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:31,916 - bootstrap - INFO - Running command: /usr/hadoop-3.0.0/bin/yarn --daemon start resourcemanager\u001b[0m\n",
      "\u001b[34mWARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.\u001b[0m\n",
      "\u001b[34mWARNING: /var/log/yarn/ does not exist. Creating.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:33,985 - bootstrap - INFO - Failed to run /usr/hadoop-3.0.0/bin/yarn --daemon start resourcemanager, return code 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:33,986 - bootstrap - INFO - Running command: /usr/hadoop-3.0.0/bin/yarn --daemon start nodemanager\u001b[0m\n",
      "\u001b[34mWARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:36,053 - bootstrap - INFO - Failed to run /usr/hadoop-3.0.0/bin/yarn --daemon start nodemanager, return code 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:36,053 - bootstrap - INFO - Running command: /usr/hadoop-3.0.0/bin/yarn --daemon start proxyserver\u001b[0m\n",
      "\u001b[34mWARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of YARN_LOG_DIR.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:38,148 - bootstrap - INFO - Failed to run /usr/hadoop-3.0.0/bin/yarn --daemon start proxyserver, return code 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:38,149 - DefaultDataAnalyzer - INFO - Total number of hosts in the cluster: 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:48,155 - DefaultDataAnalyzer - INFO - Running command: bin/spark-submit --master yarn --deploy-mode client --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider --conf spark.serializer=org.apache.spark.serializer.KryoSerializer /opt/amazon/sagemaker-data-analyzer-1.0-jar-with-dependencies.jar --analytics_input /tmp/spark_job_config.json\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 WARN  NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  Main:28 - Start analyzing with args: --analytics_input /tmp/spark_job_config.json\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  Main:31 - Analytics input path: DataAnalyzerParams(/tmp/spark_job_config.json)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  FileUtil:66 - Read file from path /tmp/spark_job_config.json.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  DataAnalyzer:30 - Analytics input: AnalyticsInput(/opt/ml/processing/input/baseline_dataset_input,DataSetFormat(Some(CSV(Some(true),None,None,None,None)),None,None),None,None,None,None,/opt/ml/processing/output,None,None,/opt/ml/output/metrics/cloudwatch,Some(Disabled),None,None,/opt/ml/output/message)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SparkContext:54 - Running Spark version 2.3.1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SparkContext:54 - Submitted application: SageMakerDataAnalyzer\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SecurityManager:54 - Changing view acls to: root\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SecurityManager:54 - Changing modify acls to: root\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SecurityManager:54 - Changing view acls groups to: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SecurityManager:54 - Changing modify acls groups to: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  Utils:54 - Successfully started service 'sparkDriver' on port 40861.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SparkEnv:54 - Registering MapOutputTracker\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:49 INFO  SparkEnv:54 - Registering BlockManagerMaster\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:50 INFO  BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:50 INFO  BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:50 INFO  DiskBlockManager:54 - Created local directory at /tmp/blockmgr-b603a56c-84da-48d2-89a3-2ca14362924a\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:50 INFO  MemoryStore:54 - MemoryStore started with capacity 1458.6 MB\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:50 INFO  SparkEnv:54 - Registering OutputCommitCoordinator\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:50 INFO  SparkContext:54 - Added JAR file:/opt/amazon/sagemaker-data-analyzer-1.0-jar-with-dependencies.jar at spark://10.0.235.37:40861/jars/sagemaker-data-analyzer-1.0-jar-with-dependencies.jar with timestamp 1588585250137\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  RMProxy:133 - Connecting to ResourceManager at /10.0.235.37:8032\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Client:54 - Requesting a new application from cluster with 1 NodeManagers\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Configuration:2636 - resource-types.xml not found\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  ResourceUtils:427 - Unable to find 'resource-types.xml'.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (15578 MB per container)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Client:54 - Will allocate AM container, with 896 MB memory including 384 MB overhead\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Client:54 - Setting up container launch context for our AM\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Client:54 - Setting up the launch environment for our AM container\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:51 INFO  Client:54 - Preparing resources for our AM container\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:52 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:53 INFO  Client:54 - Uploading resource file:/tmp/spark-749c04b7-69fb-41bb-a627-beeed06224bc/__spark_libs__1160634204003305447.zip -> hdfs://10.0.235.37/user/root/.sparkStaging/application_1588585233353_0001/__spark_libs__1160634204003305447.zip\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  Client:54 - Uploading resource file:/tmp/spark-749c04b7-69fb-41bb-a627-beeed06224bc/__spark_conf__8859372440951570081.zip -> hdfs://10.0.235.37/user/root/.sparkStaging/application_1588585233353_0001/__spark_conf__.zip\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  SecurityManager:54 - Changing view acls to: root\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  SecurityManager:54 - Changing modify acls to: root\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  SecurityManager:54 - Changing view acls groups to: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  SecurityManager:54 - Changing modify acls groups to: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:54 INFO  Client:54 - Submitting application application_1588585233353_0001 to ResourceManager\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:55 INFO  YarnClientImpl:310 - Submitted application application_1588585233353_0001\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:55 INFO  SchedulerExtensionServices:54 - Starting Yarn extension services with app application_1588585233353_0001 and attemptId None\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:56 INFO  Client:54 - Application report for application_1588585233353_0001 (state: ACCEPTED)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:56 INFO  Client:54 - \u001b[0m\n",
      "\u001b[34m#011 client token: N/A\u001b[0m\n",
      "\u001b[34m#011 diagnostics: AM container is launched, waiting for AM container to Register with RM\u001b[0m\n",
      "\u001b[34m#011 ApplicationMaster host: N/A\u001b[0m\n",
      "\u001b[34m#011 ApplicationMaster RPC port: -1\u001b[0m\n",
      "\u001b[34m#011 queue: default\u001b[0m\n",
      "\u001b[34m#011 start time: 1588585254937\u001b[0m\n",
      "\u001b[34m#011 final status: UNDEFINED\u001b[0m\n",
      "\u001b[34m#011 tracking URL: http://algo-1:8088/proxy/application_1588585233353_0001/\u001b[0m\n",
      "\u001b[34m#011 user: root\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:57 INFO  Client:54 - Application report for application_1588585233353_0001 (state: ACCEPTED)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:58 INFO  Client:54 - Application report for application_1588585233353_0001 (state: ACCEPTED)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:59 INFO  Client:54 - Application report for application_1588585233353_0001 (state: ACCEPTED)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:59 INFO  YarnClientSchedulerBackend:54 - Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> algo-1, PROXY_URI_BASES -> http://algo-1:8088/proxy/application_1588585233353_0001), /proxy/application_1588585233353_0001\u001b[0m\n",
      "\u001b[34m2020-05-04 09:40:59 INFO  YarnSchedulerBackend$YarnSchedulerEndpoint:54 - ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  Client:54 - Application report for application_1588585233353_0001 (state: RUNNING)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  Client:54 - \u001b[0m\n",
      "\u001b[34m#011 client token: N/A\u001b[0m\n",
      "\u001b[34m#011 diagnostics: N/A\u001b[0m\n",
      "\u001b[34m#011 ApplicationMaster host: 10.0.235.37\u001b[0m\n",
      "\u001b[34m#011 ApplicationMaster RPC port: 0\u001b[0m\n",
      "\u001b[34m#011 queue: default\u001b[0m\n",
      "\u001b[34m#011 start time: 1588585254937\u001b[0m\n",
      "\u001b[34m#011 final status: UNDEFINED\u001b[0m\n",
      "\u001b[34m#011 tracking URL: http://algo-1:8088/proxy/application_1588585233353_0001/\u001b[0m\n",
      "\u001b[34m#011 user: root\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  YarnClientSchedulerBackend:54 - Application application_1588585233353_0001 has started running.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  Utils:54 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41627.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  NettyBlockTransferService:54 - Server created on 10.0.235.37:41627\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, 10.0.235.37, 41627, None)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  BlockManagerMasterEndpoint:54 - Registering block manager 10.0.235.37:41627 with 1458.6 MB RAM, BlockManagerId(driver, 10.0.235.37, 41627, None)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 10.0.235.37, 41627, None)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 10.0.235.37, 41627, None)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:00 INFO  log:192 - Logging initialized @11844ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:02 INFO  YarnSchedulerBackend$YarnDriverEndpoint:54 - Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.0.235.37:60696) with ID 1\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:02 INFO  BlockManagerMasterEndpoint:54 - Registering block manager algo-1:42955 with 5.8 GB RAM, BlockManagerId(1, algo-1, 42955, None)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:20 INFO  YarnClientSchedulerBackend:54 - SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:20 WARN  SparkContext:66 - Spark is not running in local mode, therefore the checkpoint directory must not be on the local filesystem. Directory '/tmp' appears to be on the local filesystem.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:20 INFO  SharedState:54 - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/usr/spark-2.3.1/spark-warehouse').\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:20 INFO  SharedState:54 - Warehouse path is 'file:/usr/spark-2.3.1/spark-warehouse'.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:20 INFO  StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:21 INFO  DatasetReader:90 - Files to process:List(file:///opt/ml/processing/input/baseline_dataset_input/amazon_reviews_us_Digital_Software_v1_00_header.csv)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:21 INFO  FileSourceStrategy:54 - Pruning directories with: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:21 INFO  FileSourceStrategy:54 - Post-Scan Filters: (length(trim(value#0, None)) > 0)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:21 INFO  FileSourceStrategy:54 - Output Data Schema: struct<value: string>\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:21 INFO  FileSourceScanExec:54 - Pushed Filters: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  CodeGenerator:54 - Code generated in 160.615295 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  CodeGenerator:54 - Code generated in 24.674146 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  MemoryStore:54 - Block broadcast_0 stored as values in memory (estimated size 429.5 KB, free 1458.2 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  MemoryStore:54 - Block broadcast_0_piece0 stored as bytes in memory (estimated size 38.2 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on 10.0.235.37:41627 (size: 38.2 KB, free: 1458.6 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  SparkContext:54 - Created broadcast 0 from csv at DatasetReader.scala:51\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  FileSourceScanExec:54 - Planning scan with bin packing, max size: 6452969 bytes, open cost is considered as scanning 4194304 bytes.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  SparkContext:54 - Starting job: csv at DatasetReader.scala:51\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  DAGScheduler:54 - Got job 0 (csv at DatasetReader.scala:51) with 1 output partitions\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  DAGScheduler:54 - Final stage: ResultStage 0 (csv at DatasetReader.scala:51)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  DAGScheduler:54 - Parents of final stage: List()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  DAGScheduler:54 - Missing parents: List()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  DAGScheduler:54 - Submitting ResultStage 0 (MapPartitionsRDD[5] at csv at DatasetReader.scala:51), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  MemoryStore:54 - Block broadcast_1 stored as values in memory (estimated size 9.4 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  MemoryStore:54 - Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.5 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  BlockManagerInfo:54 - Added broadcast_1_piece0 in memory on 10.0.235.37:41627 (size: 4.5 KB, free: 1458.6 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  SparkContext:54 - Created broadcast 1 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[5] at csv at DatasetReader.scala:51) (first 15 tasks are for partitions Vector(0))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  YarnScheduler:54 - Adding task set 0.0 with 1 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:22 INFO  TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8382 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:23 INFO  BlockManagerInfo:54 - Added broadcast_1_piece0 in memory on algo-1:42955 (size: 4.5 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:23 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on algo-1:42955 (size: 38.2 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 1672 ms on algo-1 (executor 1) (1/1)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  YarnScheduler:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - ResultStage 0 (csv at DatasetReader.scala:51) finished in 1.739 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Job 0 finished: csv at DatasetReader.scala:51, took 1.783494 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceStrategy:54 - Pruning directories with: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceStrategy:54 - Post-Scan Filters: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceStrategy:54 - Output Data Schema: struct<value: string>\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceScanExec:54 - Pushed Filters: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  CodeGenerator:54 - Code generated in 8.993834 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  MemoryStore:54 - Block broadcast_2 stored as values in memory (estimated size 429.5 KB, free 1457.7 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  MemoryStore:54 - Block broadcast_2_piece0 stored as bytes in memory (estimated size 38.2 KB, free 1457.7 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Added broadcast_2_piece0 in memory on 10.0.235.37:41627 (size: 38.2 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  SparkContext:54 - Created broadcast 2 from csv at DatasetReader.scala:51\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceScanExec:54 - Planning scan with bin packing, max size: 6452969 bytes, open cost is considered as scanning 4194304 bytes.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceStrategy:54 - Pruning directories with: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceStrategy:54 - Post-Scan Filters: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceStrategy:54 - Output Data Schema: struct<star_rating: string, review_body: string>\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceScanExec:54 - Pushed Filters: \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  MemoryStore:54 - Block broadcast_3 stored as values in memory (estimated size 429.5 KB, free 1457.3 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  MemoryStore:54 - Block broadcast_3_piece0 stored as bytes in memory (estimated size 38.2 KB, free 1457.2 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Added broadcast_3_piece0 in memory on 10.0.235.37:41627 (size: 38.2 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  SparkContext:54 - Created broadcast 3 from cache at DataAnalyzer.scala:90\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  FileSourceScanExec:54 - Planning scan with bin packing, max size: 6452969 bytes, open cost is considered as scanning 4194304 bytes.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Removed broadcast_0_piece0 on algo-1:42955 in memory (size: 38.2 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Removed broadcast_0_piece0 on 10.0.235.37:41627 in memory (size: 38.2 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Removed broadcast_1_piece0 on 10.0.235.37:41627 in memory (size: 4.5 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Removed broadcast_1_piece0 on algo-1:42955 in memory (size: 4.5 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Removed broadcast_2_piece0 on 10.0.235.37:41627 in memory (size: 38.2 KB, free: 1458.6 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  CodeGenerator:54 - Code generated in 12.25276 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  CodeGenerator:54 - Code generated in 13.037791 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  SparkContext:54 - Starting job: head at DataAnalyzer.scala:93\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Got job 1 (head at DataAnalyzer.scala:93) with 1 output partitions\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Final stage: ResultStage 1 (head at DataAnalyzer.scala:93)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Parents of final stage: List()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Missing parents: List()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Submitting ResultStage 1 (MapPartitionsRDD[19] at head at DataAnalyzer.scala:93), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  MemoryStore:54 - Block broadcast_4 stored as values in memory (estimated size 17.6 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  MemoryStore:54 - Block broadcast_4_piece0 stored as bytes in memory (estimated size 8.3 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Added broadcast_4_piece0 in memory on 10.0.235.37:41627 (size: 8.3 KB, free: 1458.6 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  SparkContext:54 - Created broadcast 4 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[19] at head at DataAnalyzer.scala:93) (first 15 tasks are for partitions Vector(0))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  YarnScheduler:54 - Adding task set 1.0 with 1 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  TaskSetManager:54 - Starting task 0.0 in stage 1.0 (TID 1, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8382 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Added broadcast_4_piece0 in memory on algo-1:42955 (size: 8.3 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:24 INFO  BlockManagerInfo:54 - Added broadcast_3_piece0 in memory on algo-1:42955 (size: 38.2 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  BlockManagerInfo:54 - Added rdd_13_0 in memory on algo-1:42955 (size: 5.7 MB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  TaskSetManager:54 - Finished task 0.0 in stage 1.0 (TID 1) in 851 ms on algo-1 (executor 1) (1/1)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  YarnScheduler:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - ResultStage 1 (head at DataAnalyzer.scala:93) finished in 0.884 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Job 1 finished: head at DataAnalyzer.scala:93, took 0.889222 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 WARN  Utils:66 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  CodeGenerator:54 - Code generated in 14.270828 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  SparkContext:54 - Starting job: collect at AnalysisRunner.scala:313\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Registering RDD 24 (collect at AnalysisRunner.scala:313)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Got job 2 (collect at AnalysisRunner.scala:313) with 1 output partitions\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Final stage: ResultStage 3 (collect at AnalysisRunner.scala:313)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Parents of final stage: List(ShuffleMapStage 2)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Missing parents: List(ShuffleMapStage 2)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Submitting ShuffleMapStage 2 (MapPartitionsRDD[24] at collect at AnalysisRunner.scala:313), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  MemoryStore:54 - Block broadcast_5 stored as values in memory (estimated size 38.2 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  MemoryStore:54 - Block broadcast_5_piece0 stored as bytes in memory (estimated size 15.7 KB, free 1458.1 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  BlockManagerInfo:54 - Added broadcast_5_piece0 in memory on 10.0.235.37:41627 (size: 15.7 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  SparkContext:54 - Created broadcast 5 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  DAGScheduler:54 - Submitting 3 missing tasks from ShuffleMapStage 2 (MapPartitionsRDD[24] at collect at AnalysisRunner.scala:313) (first 15 tasks are for partitions Vector(0, 1, 2))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  YarnScheduler:54 - Adding task set 2.0 with 3 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  TaskSetManager:54 - Starting task 0.0 in stage 2.0 (TID 2, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  TaskSetManager:54 - Starting task 1.0 in stage 2.0 (TID 3, algo-1, executor 1, partition 1, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  TaskSetManager:54 - Starting task 2.0 in stage 2.0 (TID 4, algo-1, executor 1, partition 2, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:25 INFO  BlockManagerInfo:54 - Added broadcast_5_piece0 in memory on algo-1:42955 (size: 15.7 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:26 INFO  BlockManagerInfo:54 - Added rdd_13_2 in memory on algo-1:42955 (size: 1982.2 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:26 INFO  BlockManagerInfo:54 - Added rdd_13_1 in memory on algo-1:42955 (size: 5.6 MB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Finished task 1.0 in stage 2.0 (TID 3) in 1284 ms on algo-1 (executor 1) (1/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Finished task 2.0 in stage 2.0 (TID 4) in 1286 ms on algo-1 (executor 1) (2/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Finished task 0.0 in stage 2.0 (TID 2) in 1301 ms on algo-1 (executor 1) (3/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  YarnScheduler:54 - Removed TaskSet 2.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - ShuffleMapStage 2 (collect at AnalysisRunner.scala:313) finished in 1.319 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - looking for newly runnable stages\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - running: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - waiting: Set(ResultStage 3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - failed: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Submitting ResultStage 3 (MapPartitionsRDD[27] at collect at AnalysisRunner.scala:313), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  MemoryStore:54 - Block broadcast_6 stored as values in memory (estimated size 44.4 KB, free 1458.0 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  MemoryStore:54 - Block broadcast_6_piece0 stored as bytes in memory (estimated size 17.5 KB, free 1458.0 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  BlockManagerInfo:54 - Added broadcast_6_piece0 in memory on 10.0.235.37:41627 (size: 17.5 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  SparkContext:54 - Created broadcast 6 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 3 (MapPartitionsRDD[27] at collect at AnalysisRunner.scala:313) (first 15 tasks are for partitions Vector(0))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  YarnScheduler:54 - Adding task set 3.0 with 1 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Starting task 0.0 in stage 3.0 (TID 5, algo-1, executor 1, partition 0, NODE_LOCAL, 7765 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  BlockManagerInfo:54 - Added broadcast_6_piece0 in memory on algo-1:42955 (size: 17.5 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  MapOutputTrackerMasterEndpoint:54 - Asked to send map output locations for shuffle 0 to 10.0.235.37:60696\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Finished task 0.0 in stage 3.0 (TID 5) in 207 ms on algo-1 (executor 1) (1/1)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  YarnScheduler:54 - Removed TaskSet 3.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - ResultStage 3 (collect at AnalysisRunner.scala:313) finished in 0.222 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Job 2 finished: collect at AnalysisRunner.scala:313, took 1.565759 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  CodeGenerator:54 - Code generated in 13.708472 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  SparkContext:54 - Starting job: treeReduce at KLLRunner.scala:107\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Got job 3 (treeReduce at KLLRunner.scala:107) with 3 output partitions\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Final stage: ResultStage 4 (treeReduce at KLLRunner.scala:107)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Parents of final stage: List()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Missing parents: List()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Submitting ResultStage 4 (MapPartitionsRDD[36] at treeReduce at KLLRunner.scala:107), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  MemoryStore:54 - Block broadcast_7 stored as values in memory (estimated size 21.3 KB, free 1458.0 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  MemoryStore:54 - Block broadcast_7_piece0 stored as bytes in memory (estimated size 9.9 KB, free 1458.0 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  BlockManagerInfo:54 - Added broadcast_7_piece0 in memory on 10.0.235.37:41627 (size: 9.9 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  SparkContext:54 - Created broadcast 7 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  DAGScheduler:54 - Submitting 3 missing tasks from ResultStage 4 (MapPartitionsRDD[36] at treeReduce at KLLRunner.scala:107) (first 15 tasks are for partitions Vector(0, 1, 2))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  YarnScheduler:54 - Adding task set 4.0 with 3 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Starting task 0.0 in stage 4.0 (TID 6, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8382 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Starting task 1.0 in stage 4.0 (TID 7, algo-1, executor 1, partition 1, PROCESS_LOCAL, 8382 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  TaskSetManager:54 - Starting task 2.0 in stage 4.0 (TID 8, algo-1, executor 1, partition 2, PROCESS_LOCAL, 8382 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:27 INFO  BlockManagerInfo:54 - Added broadcast_7_piece0 in memory on algo-1:42955 (size: 9.9 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 2.0 in stage 4.0 (TID 8) in 1429 ms on algo-1 (executor 1) (1/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 1.0 in stage 4.0 (TID 7) in 1459 ms on algo-1 (executor 1) (2/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 0.0 in stage 4.0 (TID 6) in 1541 ms on algo-1 (executor 1) (3/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  YarnScheduler:54 - Removed TaskSet 4.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - ResultStage 4 (treeReduce at KLLRunner.scala:107) finished in 1.554 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Job 3 finished: treeReduce at KLLRunner.scala:107, took 1.573417 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  CodeGenerator:54 - Code generated in 22.3933 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  CodeGenerator:54 - Code generated in 37.340081 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  CodeGenerator:54 - Code generated in 29.934194 ms\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  SparkContext:54 - Starting job: collect at AnalysisRunner.scala:313\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Registering RDD 41 (collect at AnalysisRunner.scala:313)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Got job 4 (collect at AnalysisRunner.scala:313) with 1 output partitions\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Final stage: ResultStage 6 (collect at AnalysisRunner.scala:313)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Parents of final stage: List(ShuffleMapStage 5)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Missing parents: List(ShuffleMapStage 5)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Submitting ShuffleMapStage 5 (MapPartitionsRDD[41] at collect at AnalysisRunner.scala:313), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MemoryStore:54 - Block broadcast_8 stored as values in memory (estimated size 27.9 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MemoryStore:54 - Block broadcast_8_piece0 stored as bytes in memory (estimated size 11.8 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  BlockManagerInfo:54 - Added broadcast_8_piece0 in memory on 10.0.235.37:41627 (size: 11.8 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  SparkContext:54 - Created broadcast 8 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Submitting 3 missing tasks from ShuffleMapStage 5 (MapPartitionsRDD[41] at collect at AnalysisRunner.scala:313) (first 15 tasks are for partitions Vector(0, 1, 2))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  YarnScheduler:54 - Adding task set 5.0 with 3 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 0.0 in stage 5.0 (TID 9, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 1.0 in stage 5.0 (TID 10, algo-1, executor 1, partition 1, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 2.0 in stage 5.0 (TID 11, algo-1, executor 1, partition 2, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  BlockManagerInfo:54 - Added broadcast_8_piece0 in memory on algo-1:42955 (size: 11.8 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 0.0 in stage 5.0 (TID 9) in 109 ms on algo-1 (executor 1) (1/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 1.0 in stage 5.0 (TID 10) in 113 ms on algo-1 (executor 1) (2/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 2.0 in stage 5.0 (TID 11) in 113 ms on algo-1 (executor 1) (3/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  YarnScheduler:54 - Removed TaskSet 5.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - ShuffleMapStage 5 (collect at AnalysisRunner.scala:313) finished in 0.133 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - looking for newly runnable stages\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - running: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - waiting: Set(ResultStage 6)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - failed: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Submitting ResultStage 6 (MapPartitionsRDD[44] at collect at AnalysisRunner.scala:313), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MemoryStore:54 - Block broadcast_9 stored as values in memory (estimated size 16.4 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MemoryStore:54 - Block broadcast_9_piece0 stored as bytes in memory (estimated size 6.8 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  BlockManagerInfo:54 - Added broadcast_9_piece0 in memory on 10.0.235.37:41627 (size: 6.8 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  SparkContext:54 - Created broadcast 9 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 6 (MapPartitionsRDD[44] at collect at AnalysisRunner.scala:313) (first 15 tasks are for partitions Vector(0))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  YarnScheduler:54 - Adding task set 6.0 with 1 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 0.0 in stage 6.0 (TID 12, algo-1, executor 1, partition 0, NODE_LOCAL, 7765 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  BlockManagerInfo:54 - Added broadcast_9_piece0 in memory on algo-1:42955 (size: 6.8 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MapOutputTrackerMasterEndpoint:54 - Asked to send map output locations for shuffle 1 to 10.0.235.37:60696\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 0.0 in stage 6.0 (TID 12) in 63 ms on algo-1 (executor 1) (1/1)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - ResultStage 6 (collect at AnalysisRunner.scala:313) finished in 0.074 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  YarnScheduler:54 - Removed TaskSet 6.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Job 4 finished: collect at AnalysisRunner.scala:313, took 0.213698 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  SparkContext:54 - Starting job: countByKey at ColumnProfiler.scala:566\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Registering RDD 51 (countByKey at ColumnProfiler.scala:566)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Got job 5 (countByKey at ColumnProfiler.scala:566) with 3 output partitions\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Final stage: ResultStage 8 (countByKey at ColumnProfiler.scala:566)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Parents of final stage: List(ShuffleMapStage 7)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Missing parents: List(ShuffleMapStage 7)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Submitting ShuffleMapStage 7 (MapPartitionsRDD[51] at countByKey at ColumnProfiler.scala:566), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MemoryStore:54 - Block broadcast_10 stored as values in memory (estimated size 17.7 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  MemoryStore:54 - Block broadcast_10_piece0 stored as bytes in memory (estimated size 9.0 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  BlockManagerInfo:54 - Added broadcast_10_piece0 in memory on 10.0.235.37:41627 (size: 9.0 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  SparkContext:54 - Created broadcast 10 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  DAGScheduler:54 - Submitting 3 missing tasks from ShuffleMapStage 7 (MapPartitionsRDD[51] at countByKey at ColumnProfiler.scala:566) (first 15 tasks are for partitions Vector(0, 1, 2))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  YarnScheduler:54 - Adding task set 7.0 with 3 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 0.0 in stage 7.0 (TID 13, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 1.0 in stage 7.0 (TID 14, algo-1, executor 1, partition 1, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Starting task 2.0 in stage 7.0 (TID 15, algo-1, executor 1, partition 2, PROCESS_LOCAL, 8371 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  BlockManagerInfo:54 - Added broadcast_10_piece0 in memory on algo-1:42955 (size: 9.0 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 2.0 in stage 7.0 (TID 15) in 212 ms on algo-1 (executor 1) (1/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:29 INFO  TaskSetManager:54 - Finished task 1.0 in stage 7.0 (TID 14) in 241 ms on algo-1 (executor 1) (2/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Finished task 0.0 in stage 7.0 (TID 13) in 263 ms on algo-1 (executor 1) (3/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnScheduler:54 - Removed TaskSet 7.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - ShuffleMapStage 7 (countByKey at ColumnProfiler.scala:566) finished in 0.281 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - looking for newly runnable stages\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - running: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - waiting: Set(ResultStage 8)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - failed: Set()\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - Submitting ResultStage 8 (ShuffledRDD[52] at countByKey at ColumnProfiler.scala:566), which has no missing parents\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  MemoryStore:54 - Block broadcast_11 stored as values in memory (estimated size 3.2 KB, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  MemoryStore:54 - Block broadcast_11_piece0 stored as bytes in memory (estimated size 1925.0 B, free 1457.9 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerInfo:54 - Added broadcast_11_piece0 in memory on 10.0.235.37:41627 (size: 1925.0 B, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  SparkContext:54 - Created broadcast 11 from broadcast at DAGScheduler.scala:1039\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - Submitting 3 missing tasks from ResultStage 8 (ShuffledRDD[52] at countByKey at ColumnProfiler.scala:566) (first 15 tasks are for partitions Vector(0, 1, 2))\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnScheduler:54 - Adding task set 8.0 with 3 tasks\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Starting task 0.0 in stage 8.0 (TID 16, algo-1, executor 1, partition 0, NODE_LOCAL, 7660 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Starting task 1.0 in stage 8.0 (TID 17, algo-1, executor 1, partition 1, NODE_LOCAL, 7660 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Starting task 2.0 in stage 8.0 (TID 18, algo-1, executor 1, partition 2, NODE_LOCAL, 7660 bytes)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerInfo:54 - Added broadcast_11_piece0 in memory on algo-1:42955 (size: 1925.0 B, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  MapOutputTrackerMasterEndpoint:54 - Asked to send map output locations for shuffle 2 to 10.0.235.37:60696\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Finished task 1.0 in stage 8.0 (TID 17) in 56 ms on algo-1 (executor 1) (1/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Finished task 2.0 in stage 8.0 (TID 18) in 62 ms on algo-1 (executor 1) (2/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  TaskSetManager:54 - Finished task 0.0 in stage 8.0 (TID 16) in 65 ms on algo-1 (executor 1) (3/3)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnScheduler:54 - Removed TaskSet 8.0, whose tasks have all completed, from pool \u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - ResultStage 8 (countByKey at ColumnProfiler.scala:566) finished in 0.076 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - Job 5 finished: countByKey at ColumnProfiler.scala:566, took 0.371359 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ConstraintGenerator:46 - Generating Constraints:\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 203\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 56\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 97\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 128\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 267\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 44\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 70\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 105\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 166\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 225\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 239\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 127\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 185\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 152\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 53\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 164\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 115\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 238\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 252\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 207\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 229\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 171\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 202\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 74\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 209\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 255\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 102\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 176\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 278\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 268\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 240\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 178\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 216\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 103\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 224\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 168\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 236\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 85\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 101\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 201\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 173\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 65\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 237\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 188\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 98\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 218\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 133\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 221\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerInfo:54 - Removed broadcast_6_piece0 on 10.0.235.37:41627 in memory (size: 17.5 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerInfo:54 - Removed broadcast_6_piece0 on algo-1:42955 in memory (size: 17.5 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 93\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 194\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 140\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 122\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 159\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 139\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 266\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 129\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 215\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 90\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 42\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 141\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 261\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 82\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 234\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 71\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 95\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 275\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 69\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 149\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 217\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 220\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 49\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 145\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 211\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 222\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 231\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 272\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 269\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 146\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 162\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 258\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 130\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerInfo:54 - Removed broadcast_8_piece0 on algo-1:42955 in memory (size: 11.8 KB, free: 5.8 GB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerInfo:54 - Removed broadcast_8_piece0 on 10.0.235.37:41627 in memory (size: 11.8 KB, free: 1458.5 MB)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 45\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 186\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 75\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 80\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 106\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 121\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 59\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 68\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 157\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 251\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 117\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 158\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ContextCleaner:54 - Cleaned accumulator 72\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ConstraintGenerator:51 - Constraints: {\n",
      "  \"version\" : 0.0,\n",
      "  \"features\" : [ {\n",
      "    \"name\" : \"star_rating\",\n",
      "    \"inferred_type\" : \"Integral\",\n",
      "    \"completeness\" : 1.0,\n",
      "    \"num_constraints\" : {\n",
      "      \"is_non_negative\" : true\n",
      "    }\n",
      "  }, {\n",
      "    \"name\" : \"review_body\",\n",
      "    \"inferred_type\" : \"String\",\n",
      "    \"completeness\" : 1.0\n",
      "  } ],\n",
      "  \"monitoring_config\" : {\n",
      "    \"evaluate_constraints\" : \"Enabled\",\n",
      "    \"emit_metrics\" : \"Enabled\",\n",
      "    \"datatype_check_threshold\" : 1.0,\n",
      "    \"domain_content_threshold\" : 1.0,\n",
      "    \"distribution_constraints\" : {\n",
      "      \"perform_comparison\" : \"Enabled\",\n",
      "      \"comparison_threshold\" : 0.1,\n",
      "      \"comparison_method\" : \"Robust\"\n",
      "    }\n",
      "  }\u001b[0m\n",
      "\u001b[34m}\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  FileUtil:29 - Write to file constraints.json at path /opt/ml/processing/output.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  StatsGenerator:67 - Generating Stats:\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  DAGScheduler:54 - Job 6 finished: count at StatsGenerator.scala:69, took 0.141113 s\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  StatsGenerator:72 - Stats: {\n",
      "  \"version\" : 0.0,\n",
      "  \"dataset\" : {\n",
      "    \"item_count\" : 34450\n",
      "  },\n",
      "  \"features\" : [ {\n",
      "    \"name\" : \"star_rating\",\n",
      "    \"inferred_type\" : \"Integral\",\n",
      "    \"numerical_statistics\" : {\n",
      "      \"common\" : {\n",
      "        \"num_present\" : 34450,\n",
      "        \"num_missing\" : 0\n",
      "      },\n",
      "      \"mean\" : 3.0,\n",
      "      \"sum\" : 103350.0,\n",
      "      \"std_dev\" : 1.4142135623731031,\n",
      "      \"min\" : 1.0,\n",
      "      \"max\" : 5.0,\n",
      "      \"distribution\" : {\n",
      "        \"kll\" : {\n",
      "          \"buckets\" : [ {\n",
      "            \"lower_bound\" : 1.0,\n",
      "            \"upper_bound\" : 1.4,\n",
      "            \"count\" : 6897.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 1.4,\n",
      "            \"upper_bound\" : 1.8,\n",
      "            \"count\" : 0.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 1.8,\n",
      "            \"upper_bound\" : 2.2,\n",
      "            \"count\" : 6880.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 2.2,\n",
      "            \"upper_bound\" : 2.6,\n",
      "            \"count\" : 0.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 2.6,\n",
      "            \"upper_bound\" : 3.0,\n",
      "            \"count\" : 0.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 3.0,\n",
      "            \"upper_bound\" : 3.4,\n",
      "            \"count\" : 6893.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 3.4,\n",
      "            \"upper_bound\" : 3.8,\n",
      "            \"count\" : 0.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 3.8,\n",
      "            \"upper_bound\" : 4.2,\n",
      "            \"count\" : 6892.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 4.2,\n",
      "            \"upper_bound\" : 4.6,\n",
      "            \"count\" : 0.0\n",
      "          }, {\n",
      "            \"lower_bound\" : 4.6,\n",
      "            \"upper_bound\" : 5.0,\n",
      "            \"count\" : 6888.0\n",
      "          } ],\n",
      "          \"sketch\" : {\n",
      "            \"parameters\" : {\n",
      "              \"c\" : 0.64,\n",
      "              \"k\" : 2048.0\n",
      "            },\n",
      "            \"data\" : [ [ 1.0, 3.0, ... ], [ ], [ 4.0 ], [ 2.0, ... ], [ 4.0, ... ] ]  (KLL sketch data truncated)\u001b[0m\n",
      "          }\n",
      "        }\n",
      "      }\n",
      "    }\n",
      "  }, {\n",
      "    \"name\" : \"review_body\",\n",
      "    \"inferred_type\" : \"String\",\n",
      "    \"string_statistics\" : {\n",
      "      \"common\" : {\n",
      "        \"num_present\" : 34450,\n",
      "        \"num_missing\" : 0\n",
      "      },\n",
      "      \"distinct_count\" : 32882.0\n",
      "    }\n",
      "  } ]\u001b[0m\n",
      "\u001b[34m}\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  FileUtil:29 - Write to file statistics.json at path /opt/ml/processing/output.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnClientSchedulerBackend:54 - Interrupting monitor thread\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnClientSchedulerBackend:54 - Shutting down all executors\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnSchedulerBackend$YarnDriverEndpoint:54 - Asking each executor to shut down\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  SchedulerExtensionServices:54 - Stopping SchedulerExtensionServices\u001b[0m\n",
      "\u001b[34m(serviceOption=None,\n",
      " services=List(),\n",
      " started=false)\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  YarnClientSchedulerBackend:54 - Stopped\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  MemoryStore:54 - MemoryStore cleared\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManager:54 - BlockManager stopped\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  BlockManagerMaster:54 - BlockManagerMaster stopped\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  SparkContext:54 - Successfully stopped SparkContext\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  Main:47 - Completed: Job completed successfully with no violations.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  Main:115 - Write to file /opt/ml/output/message.\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ShutdownHookManager:54 - Shutdown hook called\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-31228d85-84f1-4a23-8a1a-7d3fa453e063\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:30 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-749c04b7-69fb-41bb-a627-beeed06224bc\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:31,012 - DefaultDataAnalyzer - INFO - Completed spark-submit with return code : 0\u001b[0m\n",
      "\u001b[34m2020-05-04 09:41:31,012 - DefaultDataAnalyzer - INFO - Spark job completed.\u001b[0m\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<sagemaker.processing.ProcessingJob at 0x7fc02dc45650>"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sagemaker.model_monitor import DefaultModelMonitor\n",
    "from sagemaker.model_monitor.dataset_format import DatasetFormat\n",
    "from sagemaker import get_execution_role\n",
    "\n",
    "role = get_execution_role()\n",
    "\n",
    "my_default_monitor = DefaultModelMonitor(\n",
    "    role=role,\n",
    "    instance_count=1,\n",
    "    instance_type='ml.m5.xlarge',\n",
    "    volume_size_in_gb=20,\n",
    "    max_runtime_in_seconds=3600,\n",
    ")\n",
    "\n",
    "my_default_monitor.suggest_baseline(\n",
    "    baseline_dataset=baseline_data_uri,\n",
    "    dataset_format=DatasetFormat.csv(header=True),\n",
    "    output_s3_uri=baseline_results_uri,\n",
    "    wait=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Explore the generated constraints and statistics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>name</th>\n",
       "      <th>inferred_type</th>\n",
       "      <th>numerical_statistics.common.num_present</th>\n",
       "      <th>numerical_statistics.common.num_missing</th>\n",
       "      <th>numerical_statistics.mean</th>\n",
       "      <th>numerical_statistics.sum</th>\n",
       "      <th>numerical_statistics.std_dev</th>\n",
       "      <th>numerical_statistics.min</th>\n",
       "      <th>numerical_statistics.max</th>\n",
       "      <th>numerical_statistics.distribution.kll.buckets</th>\n",
       "      <th>numerical_statistics.distribution.kll.sketch.parameters.c</th>\n",
       "      <th>numerical_statistics.distribution.kll.sketch.parameters.k</th>\n",
       "      <th>numerical_statistics.distribution.kll.sketch.data</th>\n",
       "      <th>string_statistics.common.num_present</th>\n",
       "      <th>string_statistics.common.num_missing</th>\n",
       "      <th>string_statistics.distinct_count</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>star_rating</td>\n",
       "      <td>Integral</td>\n",
       "      <td>34450.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>3.0</td>\n",
       "      <td>103350.0</td>\n",
       "      <td>1.414214</td>\n",
       "      <td>1.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>[{'lower_bound': 1.0, 'upper_bound': 1.4, 'cou...</td>\n",
       "      <td>0.64</td>\n",
       "      <td>2048.0</td>\n",
       "      <td>[[1.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0,...</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>review_body</td>\n",
       "      <td>String</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>34450.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>32882.0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "          name inferred_type  numerical_statistics.common.num_present  \\\n",
       "0  star_rating      Integral                                  34450.0   \n",
       "1  review_body        String                                      NaN   \n",
       "\n",
       "   numerical_statistics.common.num_missing  numerical_statistics.mean  \\\n",
       "0                                      0.0                        3.0   \n",
       "1                                      NaN                        NaN   \n",
       "\n",
       "   numerical_statistics.sum  numerical_statistics.std_dev  \\\n",
       "0                  103350.0                      1.414214   \n",
       "1                       NaN                           NaN   \n",
       "\n",
       "   numerical_statistics.min  numerical_statistics.max  \\\n",
       "0                       1.0                       5.0   \n",
       "1                       NaN                       NaN   \n",
       "\n",
       "       numerical_statistics.distribution.kll.buckets  \\\n",
       "0  [{'lower_bound': 1.0, 'upper_bound': 1.4, 'cou...   \n",
       "1                                                NaN   \n",
       "\n",
       "   numerical_statistics.distribution.kll.sketch.parameters.c  \\\n",
       "0                                               0.64           \n",
       "1                                                NaN           \n",
       "\n",
       "   numerical_statistics.distribution.kll.sketch.parameters.k  \\\n",
       "0                                             2048.0           \n",
       "1                                                NaN           \n",
       "\n",
       "   numerical_statistics.distribution.kll.sketch.data  \\\n",
       "0  [[1.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0,...   \n",
       "1                                                NaN   \n",
       "\n",
       "   string_statistics.common.num_present  string_statistics.common.num_missing  \\\n",
       "0                                   NaN                                   NaN   \n",
       "1                               34450.0                                   0.0   \n",
       "\n",
       "   string_statistics.distinct_count  \n",
       "0                               NaN  \n",
       "1                           32882.0  "
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "baseline_job = my_default_monitor.latest_baselining_job\n",
     "schema_df = pd.json_normalize(baseline_job.baseline_statistics().body_dict[\"features\"])\n",
    "schema_df.head(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>name</th>\n",
       "      <th>inferred_type</th>\n",
       "      <th>completeness</th>\n",
       "      <th>num_constraints.is_non_negative</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>star_rating</td>\n",
       "      <td>Integral</td>\n",
       "      <td>1.0</td>\n",
       "      <td>True</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>review_body</td>\n",
       "      <td>String</td>\n",
       "      <td>1.0</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "          name inferred_type  completeness num_constraints.is_non_negative\n",
       "0  star_rating      Integral           1.0                            True\n",
       "1  review_body        String           1.0                             NaN"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "constraints_df = pd.json_normalize(baseline_job.suggested_constraints().body_dict[\"features\"])\n",
    "constraints_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Before proceeding to enable monitoring, you can choose to edit the constraints file to fine-tune the constraints as needed."
   ]
  },
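  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, the suggested constraints can be loaded, adjusted, and saved back before being attached to a schedule. The snippet below is a minimal sketch; the `datatype_check_threshold` value is illustrative, not a recommendation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Load the suggested constraints generated by the baselining job\n",
    "constraints_dict = baseline_job.suggested_constraints().body_dict\n",
    "\n",
    "# Example tweak (illustrative value): relax the datatype check threshold\n",
    "constraints_dict['monitoring_config']['datatype_check_threshold'] = 0.9\n",
    "\n",
    "# Save the edited constraints locally for upload back to S3\n",
    "with open('constraints.json', 'w') as f:\n",
    "    json.dump(constraints_dict, f, indent=2)"
   ]
  },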
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Step 3: Enable continuous monitoring\n",
     "\n",
     "We have collected the data above; here we proceed to analyze and monitor that data with a MonitoringSchedule."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create a schedule"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We are now ready to create a model monitoring schedule for the Endpoint, using the baseline resources (constraints and statistics) generated above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Creating Monitoring Schedule with name: reviews-monitor\n"
     ]
    }
   ],
   "source": [
     "from sagemaker.model_monitor import CronExpressionGenerator\n",
     "\n",
     "mon_schedule_name = 'reviews-monitor'\n",
     "s3_report_path = 's3://dsoaws-amazon-reviews/data/monitoring-reports/'\n",
     "\n",
     "my_default_monitor.create_monitoring_schedule(\n",
     "    monitor_schedule_name=mon_schedule_name,\n",
     "    endpoint_input=endpoint_name,\n",
     "    output_s3_uri=s3_report_path,\n",
     "    statistics=my_default_monitor.baseline_statistics(),\n",
     "    constraints=my_default_monitor.suggested_constraints(),\n",
     "    schedule_cron_expression=CronExpressionGenerator.daily(),\n",
     "    enable_cloudwatch_metrics=True\n",
     ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Schedule status: Scheduled\n"
     ]
    }
   ],
   "source": [
    "desc_schedule_result = my_default_monitor.describe_schedule()\n",
    "print('Schedule status: {}'.format(desc_schedule_result['MonitoringScheduleStatus']))"
   ]
  },
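  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also inspect executions of the schedule programmatically (a sketch; the list is empty until the first scheduled run has fired):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List past executions of this monitoring schedule\n",
    "executions = my_default_monitor.list_executions()\n",
    "for execution in executions:\n",
    "    print(execution.describe()['ProcessingJobStatus'])"
   ]
  },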
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### All set\n",
     "Your monitoring schedule has now been created. Return to Amazon SageMaker Studio to list the executions for this schedule and observe the results going forward."
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "instance_type": "ml.t3.medium",
  "kernelspec": {
   "display_name": "conda_python3",
   "language": "python",
   "name": "conda_python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  },
  "notice": "Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.  Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
