ryefoxlime
committed on
Commit 3801f04
1 Parent(s): 4827068
Push Keras model using huggingface_hub.
- README.md +19 -210
- fingerprint.pb +2 -2
- keras_metadata.pb +2 -2
- saved_model.pb +2 -2
- variables/variables.data-00000-of-00001 +2 -2
- variables/variables.index +0 -0
README.md
CHANGED
@@ -1,224 +1,33 @@

Removed:

---
language:
- en
metrics:
- accuracy
library_name: keras
tags:
- medical
pipeline_tag: image-classification
---

# ryefoxlime/PneumoniaDetection

## Model

I have developed a model that uses transfer learning with the ResNet50V2 architecture to classify chest X-ray images into two categories: pneumonia and normal. It demonstrates high accuracy and generalizes well, making it a promising tool for assisting in pneumonia diagnosis.

**ResNet50V2:**

ResNet50V2 is a deep convolutional neural network (CNN) architecture from the ResNet (Residual Networks) family. It is known for its depth, using residual blocks that mitigate the vanishing-gradient problem during training. The "V2" denotes an improved version of the original ResNet50, with tweaks that enhance performance.

Transfer learning leverages the knowledge a pre-trained model has acquired on a large dataset and applies it to a different but related task. For this use case, ResNet50V2, pre-trained on ImageNet, is adapted to classify pneumonia-related images.

The core task of the model is to categorize images into two classes: "affected by pneumonia" and "normal". This binary classification is the basis for assisting diagnosis from the visual information in chest X-rays.

During training, the pre-trained ResNet50V2 model is used as a feature extractor. Its weights are frozen, so they are not updated further, and a new classification head tailored to this task is added on top. Only this new head is trained on the labeled dataset of pneumonia and normal images.

To guide training, a loss function measures the difference between predicted and actual labels; categorical cross-entropy is a common choice for image classification. An optimizer such as stochastic gradient descent (SGD) or Adam then adjusts the model's weights based on the computed loss. In this project, Adam is the optimizer of choice.

The model's performance is assessed on a separate dataset not seen during training. Metrics such as accuracy, precision, recall, and F1-score gauge how well the model generalizes to new, unseen data.
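
As an illustrative sketch only (not part of the original training code), these metrics can be computed with scikit-learn once the trained `model` and a non-shuffled `test_datagen` generator exist, as set up later in this card:

```
# Hedged sketch: per-class precision/recall/F1 on the held-out test set.
# Assumes `model` is the trained classifier and `test_datagen` was created
# with flow_from_directory(..., shuffle=False) so predictions align with labels.
import numpy as np
from sklearn.metrics import classification_report

probs = model.predict(test_datagen, steps=len(test_datagen))
pred_classes = np.argmax(probs, axis=1)      # 0 = NORMAL, 1 = PNEUMONIA
true_classes = test_datagen.classes          # labels inferred from the folder structure

print(classification_report(true_classes, pred_classes,
                            target_names=["NORMAL", "PNEUMONIA"]))
```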

Once the model demonstrates satisfactory performance, it can be deployed for real-world use. This involves integrating it into a system or application where it can receive new images, make predictions, and aid in the diagnosis of pneumonia.

- **Developed by:** [Nitin Kausik Remella](https://github.com/OkabeRintaro10)
- **Model type:** Sequential
- **Language(s):** Python
- **Finetuned from model:** ResNet50V2

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2](https://pubmed.ncbi.nlm.nih.gov/32501424/)
- **Demo [optional]:** [More Information Needed]

## Uses

This tool is intended to assist medical professionals by providing a second opinion for cross-validating a diagnosis.

### Out-of-Scope Use

This model is in no way a replacement for a medical professional; it is only meant to assist one.

## Bias, Risks, and Limitations

The model cannot handle volumetric (3D/4D) data such as CT scans; it only works on 2D chest X-ray images.

## How to Get Started with the Model

```
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model('/path/to/model')

# Preprocess one X-ray the same way as training: 224x224 RGB, rescaled to [0, 1].
img = image.load_img('/path/to/image', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

print(model.predict(x))  # [P(NORMAL), P(PNEUMONIA)]
```

## Training Details

### Training Data

Download the dataset from [Kaggle](https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia) and split it into three parts:

- train
- test
- val

Code to carve a validation split out of the training data (it moves 1/25th of the training images per class into `val/`):

```
import os
import random
import shutil

# Work from the dataset root so the train/ and val/ paths below are relative to it.
os.chdir('datasets/chest_xray/chest_xray/')

# Create the val folders if they do not exist yet.
if not os.path.isdir('val/NORMAL'):
    os.makedirs('val/NORMAL')
    os.makedirs('val/PNEUMONIA')

# Move a sample of PNEUMONIA images from train/ to val/.
source = 'train/PNEUMONIA/'
dest = 'val/PNEUMONIA'
files = os.listdir(source)
no_of_files = len(files) // 25
for file_name in random.sample(files, no_of_files):
    shutil.move(os.path.join(source, file_name), dest)

# Move a sample of NORMAL images from train/ to val/.
source = 'train/NORMAL/'
dest = 'val/NORMAL'
files = os.listdir(source)
no_of_files = len(files) // 25
for file_name in random.sample(files, no_of_files):
    shutil.move(os.path.join(source, file_name), dest)
```

### Training Procedure

Training starts from ResNet50V2 as the frozen base model, with additional layers stacked on top to extract task-specific features and perform the classification.

#### Building the model

```
from keras.applications import ResNet50V2
from keras.models import Sequential
from keras.layers import AveragePooling2D, Flatten, Dense
from keras.optimizers import Adam

# ResNet50V2 pre-trained on ImageNet, used as a frozen feature extractor.
base_model = ResNet50V2(
    include_top=False, input_shape=(224, 224, 3), weights="imagenet"
)
base_model.trainable = False


def CreateModel():
    model = Sequential()
    model.add(base_model)
    # model.add(Conv2D(filters=32, kernel_size=3, strides=(2, 2)))
    model.add(AveragePooling2D(pool_size=(2, 2), strides=2))
    model.add(Flatten())
    model.add(Dense(256, activation="relu"))
    model.add(Dense(128, activation="relu"))
    model.add(Dense(2, activation="softmax"))
    model.compile(
        loss="sparse_categorical_crossentropy",
        optimizer=Adam(learning_rate=0.000035),
        metrics=["sparse_categorical_accuracy"],
    )
    return model
```

#### Fitting the model

```
%%time
# Build the model defined above, then train it on the generators set up below.
model = CreateModel()

history = model.fit(
    train_datagen,
    steps_per_epoch=train_datagen.n // train_datagen.batch_size,
    epochs=10,
    validation_data=val_datagen,
    validation_steps=val_datagen.n // val_datagen.batch_size,
    callbacks=[callback, reduceLR, checkpoint],
    verbose=1,
)
```

#### Preprocessing

```
from keras.preprocessing.image import ImageDataGenerator

train_image_generator = ImageDataGenerator(
    rotation_range=0.5,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.5,
    rescale=1.0 / 255,
)

train_datagen = train_image_generator.flow_from_directory(
    train_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    color_mode='rgb',
    batch_size=batch_size,
    class_mode='binary',
    classes=['NORMAL', 'PNEUMONIA'],
    shuffle=True,
    seed=42,
)
```

Set `shuffle=False` for `val_datagen` and `test_datagen`, and point them at `val_dir` and `test_dir` instead of `train_dir`; a sketch of those generators follows below.
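
A hedged sketch of those evaluation generators (the card does not say whether augmentation is reused for them; this sketch follows the common practice of rescaling only):

```
# Illustrative sketch of the val/test generators described above (names assumed
# to match the rest of this card; rescale-only, no augmentation, shuffle=False).
eval_image_generator = ImageDataGenerator(rescale=1.0 / 255)

val_datagen = eval_image_generator.flow_from_directory(
    val_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    color_mode='rgb',
    batch_size=batch_size,
    class_mode='binary',
    classes=['NORMAL', 'PNEUMONIA'],
    shuffle=False,
)

test_datagen = eval_image_generator.flow_from_directory(
    test_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    color_mode='rgb',
    batch_size=batch_size,
    class_mode='binary',
    classes=['NORMAL', 'PNEUMONIA'],
    shuffle=False,
)
```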

#### Training Hyperparameters

- **Training regime:**
  - Keras callbacks monitor training progress and either stop early or reduce the learning rate, cutting unnecessary load on the GPU/CPU.
  - The best validation accuracy is saved as a checkpoint so training can resume from it.

```
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint

callback = EarlyStopping(
    monitor="val_loss", patience=6, restore_best_weights=True, min_delta=0.03, verbose=2
)
reduceLR = ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.01,
    patience=2,
    min_lr=0.000035,
    min_delta=0.01,
    verbose=2,
)
checkpoint = ModelCheckpoint(
    # {val_sparse_categorical_accuracy:.2f} is filled in by Keras at save time.
    filepath="../Checkpoints/{val_sparse_categorical_accuracy:.2f}",
    save_weights_only=True,
    monitor="val_sparse_categorical_accuracy",
    mode="max",
    save_best_only=True,
    verbose=2,
    # `baseline` is the best validation accuracy from a previous run, defined elsewhere.
    initial_value_threshold=baseline,
)
```

#### Define Defaults

- `batch_size` = 32 (*use a smaller batch size on weaker systems*)
- `IMG_HEIGHT` = 224
- `IMG_WIDTH` = 224
- `epochs` = 10
- `train_dir` = path/to/chest_xray/train
- `val_dir` = path/to/chest_xray/val
- `test_dir` = path/to/chest_xray/test
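
Expressed as Python, a minimal sketch of these defaults (the directory paths are placeholders to be replaced with the actual dataset location):

```
# Default configuration assumed throughout this card; paths are placeholders.
batch_size = 32           # use a smaller batch size on weaker systems
IMG_HEIGHT = 224
IMG_WIDTH = 224
epochs = 10

train_dir = 'path/to/chest_xray/train'
val_dir = 'path/to/chest_xray/val'
test_dir = 'path/to/chest_xray/test'
```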

#### Metrics

The evaluation metrics used are recall and precision.

### Results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/652917fd8297110ffe4e04ba/cml6Bd82tv3Rpu0bxBwQq.png)

#### Summary

The model is capable of detecting pneumonia with an accuracy of 91%.
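
As a rough way to check this number (an illustrative sketch, assuming the compiled `model` and the non-shuffled `test_datagen` generator from the sections above):

```
# Hedged sketch: overall accuracy on the test generator.
loss, acc = model.evaluate(test_datagen, steps=len(test_datagen))
print(f"test sparse_categorical_accuracy: {acc:.2%}")
```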

Added:

---
library_name: keras
---

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 3.5e-05 |
| decay | 0.0 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
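
Since the commit pushes the model with `huggingface_hub`'s Keras integration, it can be loaded back in one call; a minimal sketch, assuming a `huggingface_hub` version that still ships the Keras mixin:

```
from huggingface_hub import from_pretrained_keras

# Downloads the SavedModel files from the Hub and rebuilds the Keras model.
model = from_pretrained_keras("ryefoxlime/PneumoniaDetection")
model.summary()
```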
fingerprint.pb
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:4181cfefafab0f0902ce49ec39ccc3438dc7e37e68d2d880a40427b6f2ec49c0
+ size 55
keras_metadata.pb
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:e5eb72900505f30e126a52ea89aa8fddcaba4a67554d8da509355e9b645525ce
+ size 602753
saved_model.pb
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:e538051bdc1a8fad93484ed07090934a15484f2aa6072d6e3f2272cbfc00ee93
+ size 2872369
variables/variables.data-00000-of-00001
CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:
- size
+ oid sha256:aa7290d849ec9fd91784689e70d085a2868a378f2bbade44939d3f44806204bd
+ size 113435138
variables/variables.index
CHANGED
Binary files a/variables/variables.index and b/variables/variables.index differ