Epoch 23/30
112/112 - 3s - loss: 2.8464e-07 - fn: 1.0000 - fp: 4131.0000 - tn: 223298.0000 - tp: 416.0000 - precision: 0.0915 - recall: 0.9976 - val_loss: 0.0097 - val_fn: 10.0000 - val_fp: 191.0000 - val_tn: 56695.0000 - val_tp: 65.0000 - val_precision: 0.2539 - val_recall: 0.8667
Epoch 24/30
112/112 - 3s - loss: 3.2445e-07 - fn: 3.0000 - fp: 4040.0000 - tn: 223389.0000 - tp: 414.0000 - precision: 0.0930 - recall: 0.9928 - val_loss: 0.0129 - val_fn: 9.0000 - val_fp: 278.0000 - val_tn: 56608.0000 - val_tp: 66.0000 - val_precision: 0.1919 - val_recall: 0.8800
Epoch 25/30
112/112 - 3s - loss: 5.4032e-07 - fn: 4.0000 - fp: 4834.0000 - tn: 222595.0000 - tp: 413.0000 - precision: 0.0787 - recall: 0.9904 - val_loss: 0.1334 - val_fn: 7.0000 - val_fp: 885.0000 - val_tn: 56001.0000 - val_tp: 68.0000 - val_precision: 0.0714 - val_recall: 0.9067
Epoch 26/30
112/112 - 3s - loss: 1.2099e-06 - fn: 9.0000 - fp: 5767.0000 - tn: 221662.0000 - tp: 408.0000 - precision: 0.0661 - recall: 0.9784 - val_loss: 0.0426 - val_fn: 11.0000 - val_fp: 211.0000 - val_tn: 56675.0000 - val_tp: 64.0000 - val_precision: 0.2327 - val_recall: 0.8533
Epoch 27/30
112/112 - 2s - loss: 5.0924e-07 - fn: 7.0000 - fp: 4185.0000 - tn: 223244.0000 - tp: 410.0000 - precision: 0.0892 - recall: 0.9832 - val_loss: 0.0345 - val_fn: 6.0000 - val_fp: 710.0000 - val_tn: 56176.0000 - val_tp: 69.0000 - val_precision: 0.0886 - val_recall: 0.9200
Epoch 28/30
112/112 - 3s - loss: 4.9177e-07 - fn: 7.0000 - fp: 3871.0000 - tn: 223558.0000 - tp: 410.0000 - precision: 0.0958 - recall: 0.9832 - val_loss: 0.0631 - val_fn: 7.0000 - val_fp: 912.0000 - val_tn: 55974.0000 - val_tp: 68.0000 - val_precision: 0.0694 - val_recall: 0.9067
Epoch 29/30
112/112 - 3s - loss: 1.8390e-06 - fn: 9.0000 - fp: 7199.0000 - tn: 220230.0000 - tp: 408.0000 - precision: 0.0536 - recall: 0.9784 - val_loss: 0.0661 - val_fn: 10.0000 - val_fp: 292.0000 - val_tn: 56594.0000 - val_tp: 65.0000 - val_precision: 0.1821 - val_recall: 0.8667
Epoch 30/30
112/112 - 3s - loss: 3.5976e-06 - fn: 14.0000 - fp: 5541.0000 - tn: 221888.0000 - tp: 403.0000 - precision: 0.0678 - recall: 0.9664 - val_loss: 0.1205 - val_fn: 10.0000 - val_fp: 206.0000 - val_tn: 56680.0000 - val_tp: 65.0000 - val_precision: 0.2399 - val_recall: 0.8667
<tensorflow.python.keras.callbacks.History at 0x16ab3d310>
Conclusions
At the end of training (epoch 30), out of 56,961 validation transactions, we are:
Correctly identifying 65 of them as fraudulent
Missing 10 fraudulent transactions
At the cost of incorrectly flagging 206 legitimate transactions (see the quick check below)
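These counts line up with the metrics reported for the final epoch, which makes for a quick sanity check:
# Confusion counts taken from the epoch 30 validation log above
tp, fn, fp, tn = 65, 10, 206, 56680

print(tp + fn + fp + tn)  # 56961 validation transactions in total
print(tp / (tp + fn))     # recall    ~0.8667, matching val_recall
print(tp / (tp + fp))     # precision ~0.2399, matching val_precision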
In the real world, one would put an even higher weight on class 1, so as to reflect that False Negatives are more costly than False Positives.
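Since the example trains with a class_weight dict, raising the weight on class 1 is a one-line change. A minimal sketch, reusing the model and array names from earlier in the example (not shown in this excerpt); the weight of 500.0 is an assumed, illustrative value, not a tuned one:
# Illustrative only: weight errors on the fraud class (1) far more heavily
# than on the legitimate class (0). The 500.0 is an assumption; the right
# ratio has to be tuned against the real cost of each error type.
class_weight = {0: 1.0, 1: 500.0}

model.fit(
    train_features,
    train_targets,
    batch_size=2048,
    epochs=30,
    verbose=2,
    validation_data=(val_features, val_targets),
    class_weight=class_weight,
)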
Next time your credit card gets declined in an online purchase -- this is why.
Structured data classification from scratch
Binary classification of structured data including numerical and categorical features.
Introduction
This example demonstrates how to do structured data classification, starting from a raw CSV file. Our data includes both numerical and categorical features. We will use Keras preprocessing layers to normalize the numerical features and vectorize the categorical ones.
Note that this example should be run with TensorFlow 2.5 or higher.
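As a preview of the core idea, here is a minimal, self-contained sketch of the two kinds of preprocessing layers involved (assuming TF 2.6+, where they live directly under tf.keras.layers; the feature values below are made up):
import numpy as np
from tensorflow.keras import layers

# Normalization learns a feature's mean and variance via adapt(), then
# outputs (x - mean) / sqrt(var) at call time.
normalizer = layers.Normalization()
normalizer.adapt(np.array([[120.0], [130.0], [145.0], [160.0]]))  # toy values
print(normalizer(np.array([[145.0]])))  # roughly zero-centered

# StringLookup turns string categories into one-hot vectors; the first
# slot is reserved for out-of-vocabulary values.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(np.array(["normal", "fixed", "reversible"]))
print(lookup(np.array(["fixed"])))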
The dataset
Our dataset is provided by the Cleveland Clinic Foundation for Heart Disease. It's a CSV file with 303 rows. Each row contains information about a patient (a sample), and each column describes an attribute of the patient (a feature). We use the features to predict whether a patient has heart disease (binary classification).
Here's the description of each feature:
Column Description Feature Type
Age Age in years Numerical
Sex (1 = male; 0 = female) Categorical
CP Chest pain type (0, 1, 2, 3, 4) Categorical
Trestbps Resting blood pressure (in mm Hg on admission) Numerical
Chol Serum cholesterol in mg/dl Numerical
FBS Fasting blood sugar > 120 mg/dl (1 = true; 0 = false) Categorical
RestECG Resting electrocardiogram results (0, 1, 2) Categorical
Thalach Maximum heart rate achieved Numerical
Exang Exercise induced angina (1 = yes; 0 = no) Categorical
Oldpeak ST depression induced by exercise relative to rest Numerical
Slope Slope of the peak exercise ST segment Numerical
CA Number of major vessels (0-3) colored by fluoroscopy Both numerical & categorical
Thal 3 = normal; 6 = fixed defect; 7 = reversible defect Categorical
Target Diagnosis of heart disease (1 = true; 0 = false) Target
Setup
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
Preparing the data
Let's download the data and load it into a Pandas dataframe:
file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
dataframe = pd.read_csv(file_url)
The dataset includes 303 samples with 14 columns per sample (13 features, plus the target label):
dataframe.shape
(303, 14)
Here's a preview of a few samples:
dataframe.head()
age sex cp trestbps chol fbs restecg thalach exang oldpeak slope ca thal target
0 63 1 1 145 233 1 2 150 0 2.3 3 0 fixed 0
1 67 1 4 160 286 0 2 108 1 1.5 2 3 normal 1
2 67 1 4 120 229 0 2 129 1 2.6 2 2 reversible 0
3 37 1 3 130 250 0 0 187 0 3.5 3 0 normal 0
4 41 0 2 130 204 0 2 172 0 1.4 1 0 normal 0
The last column, "target", indicates whether the patient has a heart disease (1) or not (0).
Let's split the data into a training and validation set:
val_dataframe = dataframe.sample(frac=0.2, random_state=1337)
train_dataframe = dataframe.drop(val_dataframe.index)
print(
    "Using %d samples for training and %d for validation"
    % (len(train_dataframe), len(val_dataframe))
)
Using 242 samples for training and 61 for validation
Let's generate tf.data.Dataset objects for each dataframe:
def dataframe_to_dataset(dataframe):
    dataframe = dataframe.copy()
    # Separate the label column from the features
    labels = dataframe.pop("target")
    # Each element becomes a (dict of features, label) pair
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    ds = ds.shuffle(buffer_size=len(dataframe))
    return ds
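Each element of the resulting dataset is a (features, label) pair, where features is a dict keyed by column name. A quick way to verify this, as a usage sketch:
train_ds = dataframe_to_dataset(train_dataframe)
val_ds = dataframe_to_dataset(val_dataframe)

# Inspect a single (features dict, label) pair
for x, y in train_ds.take(1):
    print("Input:", x)
    print("Target:", y)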