{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3f825d7c",
   "metadata": {},
   "source": [
     "## Common pitfalls\n",
     "1. After groupby('Class'), call count() directly, with no [] column selection"
   ]
  },
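  {
   "cell_type": "markdown",
   "id": "a0c1e2f3",
   "metadata": {},
   "source": [
    "A quick contrast (an illustrative aside with made-up data, not part of the exam): in pandas, groupby('Class').count() returns one count per remaining column, while selecting a column first gives a single Series of group sizes; in the PySpark DataFrame API used below, groupBy('Class').count() with no [] selection is the idiomatic form."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1d2e3f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative pandas toy frame (hypothetical data, not creditcard.csv)\n",
    "import pandas as pd\n",
    "toy = pd.DataFrame({'Class': [0, 0, 1], 'Amount': [1.0, 2.0, 3.0]})\n",
    "per_column = toy.groupby('Class').count()           # one count column per remaining column\n",
    "per_group = toy.groupby('Class')['Amount'].count()  # a single Series of group sizes\n",
    "print(per_column)\n",
    "print(per_group)"
   ]
  },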
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ce73b6b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
     "os.environ['JAVA_HOME'] = \"D:\\\\Develop\\\\Java\\\\jdk1.8.0_241\"  # change this to your own JDK path\n",
     "os.environ['PYSPARK_PYTHON'] = \"C:\\\\Users\\\\Spencer Cheung\\\\.conda\\\\envs\\\\py37\\\\python.exe\"  # path to the Python interpreter on this machine"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5e0939a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyspark.context import SparkContext\n",
    "from pyspark.sql import SparkSession\n",
    "sc = SparkContext('local','test')\n",
    "from pyspark.sql.functions import explode\n",
    "from pyspark.sql.functions import split\n",
    "spark = SparkSession(sc)\n",
    "sc\n",
    "data_O = spark.read.load('./data/creditcard.csv',format='csv',header='true',inferSchema='true')\n",
    "type(data_O)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0cff433",
   "metadata": {},
   "source": [
     "1. Group data_O by the Class column and count the rows in each class; assign the result to classFreq and display it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d5d99f76",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# to be filled in by the examinee\n",
    "classFreq = data_O.groupby('Class').count()\n",
     "# to be filled in by the examinee\n",
    "classFreq.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cba1e2c5",
   "metadata": {},
   "outputs": [],
   "source": [
     "# the fraud class has 492 rows; undersample the majority class to match\n",
    "import pandas as pd\n",
    "data = data_O.toPandas()\n",
    "data = data.sample(frac=1)\n",
    "fraud_df = data.loc[data['Class'] == 1]\n",
    "non_fraud_df = data.loc[data['Class'] == 0][:492]\n",
    "normal_distributed_df = pd.concat([fraud_df,non_fraud_df])\n",
    "new_df = normal_distributed_df.sample(frac=1,random_state=42)\n",
    "new_df.shape"
   ]
  },
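  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "The undersampling above can be sanity-checked on a toy frame (a sketch with made-up data, not creditcard.csv): after concatenating the minority class with an equal-sized slice of the majority class and shuffling, the two class counts should be equal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "# hypothetical imbalanced frame: 2 positives, 6 negatives\n",
    "toy = pd.DataFrame({'Class': [1, 1, 0, 0, 0, 0, 0, 0], 'V10': range(8)})\n",
    "pos = toy.loc[toy['Class'] == 1]\n",
    "neg = toy.loc[toy['Class'] == 0][:len(pos)]  # undersample the majority class\n",
    "balanced = pd.concat([pos, neg]).sample(frac=1, random_state=42)  # shuffle rows\n",
    "print(balanced['Class'].value_counts())"
   ]
  },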
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "dfdb4e75",
   "metadata": {},
   "outputs": [],
   "source": [
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "colors = ['#B3F9C5','#f9c5b3']"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c2dfb460",
   "metadata": {},
   "source": [
     "2. Draw a boxplot with Class on the x-axis and V10 on the y-axis to inspect how V10 is distributed across the two classes in new_df."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2184721",
   "metadata": {},
   "outputs": [],
   "source": [
     "# to be filled in by the examinee\n",
     "V10_sns = sns.boxplot(x=\"Class\", y=\"V10\", data=new_df)\n",
     "# to be filled in by the examinee\n",
    "V10_sns.set_title('V10 vs Class Negative Correlation')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "f39113c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "dfff = spark.createDataFrame(new_df)\n",
    "from pyspark.sql.functions import *\n",
    "from pyspark.sql.window import Window"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb2765f1",
   "metadata": {},
   "source": [
     "3. Use withColumn to add a column idx to dfff, holding the row_number over window win, which orders rows by the Time field."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "30750abb",
   "metadata": {},
   "outputs": [],
   "source": [
    "win = Window().orderBy('Time')\n",
     "# to be filled in by the examinee\n",
     "dfff = dfff.withColumn(\"idx\", row_number().over(win))\n",
     "# to be filled in by the examinee\n",
    "from pyspark.ml import Pipeline\n",
    "from pyspark.ml.classification import GBTClassifier\n",
    "from pyspark.ml.feature import VectorIndexer,VectorAssembler\n",
    "from pyspark.ml.evaluation import BinaryClassificationEvaluator\n",
    "from pyspark.ml.linalg import DenseVector"
   ]
  },
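  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "As a cross-check of the windowed row_number above (a pandas analogue on made-up data, not the PySpark answer): sorting by Time and numbering the rows from 1 yields the same idx values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a7b8c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "# hypothetical frame with an unsorted Time column\n",
    "toy = pd.DataFrame({'Time': [30.0, 10.0, 20.0], 'V10': [0.1, 0.2, 0.3]})\n",
    "toy = toy.sort_values('Time').reset_index(drop=True)\n",
    "toy['idx'] = toy.index + 1  # row_number() is 1-based\n",
    "print(toy)"
   ]
  },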
  {
   "cell_type": "markdown",
   "id": "883a6763",
   "metadata": {},
   "source": [
     "4. Map dfff's rdd to tuples whose first element is a DenseVector built from the first 30 columns of dfff, whose second element is column 31 (Class), and whose third element is column 32 (idx)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "eadf2367",
   "metadata": {},
   "outputs": [],
   "source": [
     "# to be filled in by the examinee\n",
     "# x[0:30] takes the first 30 columns (indices 0-29) as features; x[30] is Class, x[31] is idx\n",
     "training_df = dfff.rdd.map(lambda x: (DenseVector(x[0:30]), x[30], x[31]))\n",
     "# to be filled in by the examinee"
   ]
  },
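  {
   "cell_type": "markdown",
   "id": "a7b8c9d0",
   "metadata": {},
   "source": [
    "Python's half-open slicing here is easy to get off by one. With a stand-in row of 32 dummy values (mirroring 30 feature columns plus Class and idx), a correct split takes x[0:30] for the features, covering indices 0 through 29:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8c9d0e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# stand-in for one row of dfff: 30 features, then Class, then idx\n",
    "row = list(range(32))\n",
    "features, label, index = row[0:30], row[30], row[31]\n",
    "print(len(features), label, index)"
   ]
  },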
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ea72f62e",
   "metadata": {},
   "outputs": [],
   "source": [
    "training_df = spark.createDataFrame(training_df,[\"features\",\"label\",\"index\"])\n",
    "training_df = training_df.select(\"index\",\"features\",\"label\")\n",
    "# training_df.show(2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f77894e",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_data,test_data = training_df.randomSplit([.8,.2],seed=1234)\n",
    "gbt = GBTClassifier(featuresCol=\"features\",maxIter=100,maxDepth=8)\n",
    "model =gbt.fit(train_data)\n",
    "predictions = model.transform(test_data)\n",
    "evaluator = BinaryClassificationEvaluator()\n",
     "print(\"AUC: \", evaluator.evaluate(predictions))\n",
    "tp = predictions[(predictions.label ==1) & (predictions.prediction ==1)].count()\n",
    "tn = predictions[(predictions.label ==0) & (predictions.prediction ==0)].count()\n",
    "fp = predictions[(predictions.label ==0) & (predictions.prediction ==1)].count()\n",
    "fn = predictions[(predictions.label ==1) & (predictions.prediction ==0)].count()\n",
    "print(tp,tn,fp,fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ee3aac1",
   "metadata": {},
   "source": [
     "5. Print the recall and the precision."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5d2ac2a",
   "metadata": {},
   "outputs": [],
   "source": [
     "# to be filled in by the examinee\n",
    "print(\"Recall: \",tp/(tp+fn))\n",
    "print(\"Precision: \",tp/(tp+fp))\n",
     "# to be filled in by the examinee"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PySpark-2.4.5",
   "language": "python",
   "name": "pyspark-2.4.5"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
