geninhu committed on
Commit e60f3af
1 Parent(s): cdd97ab

Update README.md

Files changed (1)
  1. README.md +18 -12
README.md CHANGED
@@ -5,24 +5,29 @@ tags:
 - imbalanced-classification
 ---
 
-## Model description
-
-More information needed
+## Model Description
+### Keras implementation of "Imbalanced classification: credit card fraud detection"
+This repository contains the trained model from the Keras example [Imbalanced classification: credit card fraud detection](https://keras.io/examples/structured_data/imbalanced_classification/).
+Full credit goes to [fchollet](https://twitter.com/fchollet).
 
 ## Intended uses & limitations
+- The trained model predicts whether a specific transaction is fraudulent or not.
 
-More information needed
-
-## Training and evaluation data
-
-More information needed
+## Training dataset
+- [Credit Card Fraud Detection](https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud)
+- Because the target class is highly imbalanced (417 frauds, roughly 0.18% of the training samples), class weights were applied during training to keep false negatives as low as possible.
 
 ## Training procedure
-
-### Training hyperparameters
-
+### Training hyperparameters
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'learning_rate': 0.01, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+- optimizer: 'Adam'
+- learning_rate: 0.01
+- loss: 'binary_crossentropy'
+- epochs: 30
+- batch_size: 2048
+- beta_1: 0.9
+- beta_2: 0.999
+- epsilon: 1e-07
 - training_precision: float32
 
 ## Training Metrics
@@ -59,6 +64,7 @@ The following hyperparameters were used during training:
 | 28| 0.0| 9.0| 5305.0| 222124.0| 408.0| 0.071| 0.978| 0.026| 9.0| 398.0| 56488.0| 66.0| 0.142| 0.88|
 | 29| 0.0| 5.0| 4846.0| 222583.0| 412.0| 0.078| 0.988| 0.242| 6.0| 7883.0| 49003.0| 69.0| 0.009| 0.92|
 | 30| 0.0| 5.0| 5193.0| 222236.0| 412.0| 0.074| 0.988| 0.026| 7.0| 449.0| 56437.0| 68.0| 0.132| 0.907|
+
 ## Model Plot
 
 <details>
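
For readers who want to see how the pieces of the updated card fit together, here is a minimal training sketch in Keras, not the author's actual script: it wires the hyperparameters listed in the diff to the inverse-frequency class weighting that the training-dataset note refers to. The `train_features`/`train_targets` arrays, the layer sizes, and the metric selection are illustrative assumptions.

```python
import numpy as np
import keras
from keras import layers

# Hypothetical stand-ins for the preprocessed Kaggle data:
# 30 feature columns per transaction and a binary fraud label.
train_features = np.load("train_features.npy")  # shape: (n_samples, 30)
train_targets = np.load("train_targets.npy")    # shape: (n_samples, 1)

# Inverse-frequency class weights so the rare fraud class
# contributes far more to the loss than the majority class.
counts = np.bincount(train_targets.astype(int).ravel())
class_weight = {0: 1.0 / counts[0], 1: 1.0 / counts[1]}

# Small fully connected network (layer sizes approximated from the Keras example).
model = keras.Sequential(
    [
        layers.Input(shape=(train_features.shape[-1],)),
        layers.Dense(256, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),
    ]
)

# Hyperparameters taken from the model card above; metrics are an illustrative subset.
model.compile(
    optimizer=keras.optimizers.Adam(
        learning_rate=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-07
    ),
    loss="binary_crossentropy",
    metrics=[
        keras.metrics.FalseNegatives(name="fn"),
        keras.metrics.FalsePositives(name="fp"),
        keras.metrics.Precision(name="precision"),
        keras.metrics.Recall(name="recall"),
    ],
)

model.fit(
    train_features,
    train_targets,
    batch_size=2048,
    epochs=30,
    class_weight=class_weight,
)
```

Weighting each class by the inverse of its frequency makes a missed fraud far more costly to the loss than a misclassified legitimate transaction, which is what drives the false-negative counts in the metrics table down despite the roughly 0.18% positive rate.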
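
For the intended use described in the card (scoring a single transaction as fraudulent or not), a minimal inference sketch might look like the following. It assumes the checkpoint can be loaded with `from_pretrained_keras` from `huggingface_hub`; the repo id is a placeholder, and the input must be preprocessed with the same feature scaling used at training time.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Placeholder repo id; replace with this model's actual Hub id.
model = from_pretrained_keras("keras-io/imbalanced_classification")

# One preprocessed transaction: the same 30 feature columns (Time, V1..V28, Amount),
# normalized with the training-set statistics. Random values used here as a stand-in.
transaction = np.random.randn(1, 30).astype("float32")

fraud_probability = float(model.predict(transaction)[0, 0])
is_fraud = fraud_probability >= 0.5  # the threshold is a tunable assumption
print(f"fraud probability: {fraud_probability:.4f}, flagged: {is_fraud}")
```

The 0.5 threshold is only a starting point; lowering it trades extra false positives for fewer missed frauds, in line with the card's emphasis on minimizing false negatives.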