```python
def show_batch(image_batch, label_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = plt.subplot(5, 5, n + 1)
        plt.imshow(image_batch[n] / 255.0)
        if label_batch[n]:
            plt.title("MALIGNANT")
        else:
            plt.title("BENIGN")
        plt.axis("off")


show_batch(image_batch.numpy(), label_batch.numpy())
```
Building our model
Define callbacks
The learning rate schedule below lowers the learning rate in steps as training progresses.
We can also use callbacks to stop training when the model stops improving. At the end of the training process, the model will restore the weights of its best iteration.
```python
initial_learning_rate = 0.01
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True
)

checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    "melanoma_model.h5", save_best_only=True
)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
    patience=10, restore_best_weights=True
)
```
Build our base model
Transfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found here.
We do not want our metric to be accuracy because our data is imbalanced. Instead, we will track the area under the ROC curve (AUC).
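To see why accuracy is a poor fit here, consider a toy imbalanced set (a hypothetical 98%-benign split, not the competition data): a classifier that always predicts "benign" scores 98% accuracy while being useless, whereas AUC scores any uninformative classifier at 0.5.

```python
labels = [0] * 98 + [1] * 2  # 98 benign, 2 malignant
preds = [0.0] * 100          # always predict "benign"

accuracy = sum(int(p >= 0.5) == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # 0.98

def auc(labels, scores):
    # Probability that a random positive is ranked above a random negative,
    # with ties counting as half -- the Mann-Whitney formulation of ROC AUC.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc(labels, preds))  # 0.5
```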
```python
def make_model():
    base_model = tf.keras.applications.Xception(
        input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet"
    )
    base_model.trainable = False

    inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3])
    x = tf.keras.applications.xception.preprocess_input(inputs)
    x = base_model(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(8, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.7)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model
```
Train the model
```python
with strategy.scope():
    model = make_model()

history = model.fit(
    train_dataset,
    epochs=2,
    validation_data=valid_dataset,
    callbacks=[checkpoint_cb, early_stopping_cb],
)
```
```
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/xception/xception_weights_tf_dim_ordering_tf_kernels_notop.h5
83689472/83683744 [==============================] - 3s 0us/step
Epoch 1/2
454/454 [==============================] - 525s 1s/step - loss: 0.1895 - auc: 0.5841 - val_loss: 0.0825 - val_auc: 0.8109
Epoch 2/2
454/454 [==============================] - 118s 260ms/step - loss: 0.1063 - auc: 0.5994 - val_loss: 0.0861 - val_auc: 0.8336
```
Predict results
We'll use our model to predict results for our test dataset images. Values closer to 0 are more likely to be benign and values closer to 1 are more likely to be malignant.
```python
def show_batch_predictions(image_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = plt.subplot(5, 5, n + 1)
        plt.imshow(image_batch[n] / 255.0)
        img_array = tf.expand_dims(image_batch[n], axis=0)
        plt.title(model.predict(img_array)[0])
        plt.axis("off")


image_batch = next(iter(test_dataset))
show_batch_predictions(image_batch)
```
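Since the model ends in a single sigmoid unit, each prediction is a probability in [0, 1]. Turning such scores into hard labels is a one-liner; here is a plain-Python sketch (the 0.5 cutoff is a conventional default, not something fixed by the model, and in a medical setting you might lower it to favor sensitivity):

```python
def to_label(score, threshold=0.5):
    # Scores at or above the threshold map to the positive class.
    return "MALIGNANT" if score >= threshold else "BENIGN"

print(to_label(0.91))  # MALIGNANT
print(to_label(0.07))  # BENIGN
```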
Four simple tips to help you debug your Keras code. |
Introduction |
It's generally possible to do almost anything in Keras without writing code per se: whether you're implementing a new type of GAN or the latest convnet architecture for image segmentation, you can usually stick to calling built-in methods. Because all built-in methods do extensive input validation checks, you will have... |