Loading TFRecords for computer vision models.
Introduction + Set Up
TFRecords store a sequence of binary records, read linearly. They are a useful format for storing data because they can be read efficiently. Learn more about TFRecords here.
We'll explore how to easily load TFRecords for our melanoma classifier.
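To see why TFRecords can be read so efficiently, it helps to know the on-disk layout: each record is an 8-byte little-endian length, a 4-byte checksum of the length, the payload bytes, and a 4-byte checksum of the payload. The sketch below illustrates this framing without TensorFlow; real TFRecord files use masked CRC32C checksums, which we replace with placeholder zeros here, so this toy reader/writer pair only round-trips with itself.

```python
import struct

def write_record(f, data: bytes) -> None:
    # TFRecord framing: 8-byte little-endian length, 4-byte length CRC,
    # payload, 4-byte payload CRC. Real files use masked CRC32C values;
    # this sketch writes placeholder zeros instead.
    f.write(struct.pack("<Q", len(data)))
    f.write(b"\x00" * 4)  # length CRC (placeholder)
    f.write(data)
    f.write(b"\x00" * 4)  # payload CRC (placeholder)

def read_records(f):
    # Read records back sequentially: parse the length prefix,
    # skip the checksums, and yield each payload.
    while True:
        header = f.read(8)
        if not header:
            break
        (length,) = struct.unpack("<Q", header)
        f.read(4)  # skip length CRC
        yield f.read(length)
        f.read(4)  # skip payload CRC
```

Because every record is length-prefixed, a reader can stream through the file in one linear pass with no index or seeking, which is what makes the format fast to load.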
import tensorflow as tf
from functools import partial
import matplotlib.pyplot as plt

try:
    # Detect and connect to a TPU cluster if one is available.
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
    print("Device:", tpu.master())
    strategy = tf.distribute.TPUStrategy(tpu)
except:
    # Fall back to the default strategy (CPU or single/multi-GPU).
    strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
Number of replicas: 8
Because our dataset is imbalanced, we use a larger batch size so that each batch is more likely to contain examples of the rare malignant class.
AUTOTUNE = tf.data.AUTOTUNE
GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e"
BATCH_SIZE = 64
IMAGE_SIZE = [1024, 1024]
Load the data
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec")
split_ind = int(0.9 * len(FILENAMES))
TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:]
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec")
print("Train TFRecord Files:", len(TRAINING_FILENAMES))
print("Validation TFRecord Files:", len(VALID_FILENAMES))
print("Test TFRecord Files:", len(TEST_FILENAMES))
Train TFRecord Files: 14
Validation TFRecord Files: 2
Test TFRecord Files: 16
Decoding the data
The images have to be converted to tensors so that they are valid inputs to our model. As the images use the RGB color model, we specify 3 channels.
We also reshape our data so that all of the images have the same shape.
def decode_image(image):
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.cast(image, tf.float32)
    image = tf.reshape(image, [*IMAGE_SIZE, 3])
    return image
As we load in our data, we need both our X and our Y. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
def read_tfrecord(example, labeled):
    tfrecord_format = (
        {
            "image": tf.io.FixedLenFeature([], tf.string),
            "target": tf.io.FixedLenFeature([], tf.int64),
        }
        if labeled
        else {"image": tf.io.FixedLenFeature([], tf.string)}
    )
    example = tf.io.parse_single_example(example, tfrecord_format)
    image = decode_image(example["image"])
    if labeled:
        label = tf.cast(example["target"], tf.int32)
        return image, label
    return image
Define loading methods
Our dataset is not ordered in any meaningful way, so the order can be ignored when loading it. By ignoring the order and reading files as soon as they come in, the data will load faster.
def load_dataset(filenames, labeled=True):
    ignore_order = tf.data.Options()
    ignore_order.experimental_deterministic = False  # disable order, increase speed
    dataset = tf.data.TFRecordDataset(
        filenames
    )  # automatically interleaves reads from multiple files
    dataset = dataset.with_options(
        ignore_order
    )  # uses data as soon as it streams in, rather than in its original order
    dataset = dataset.map(
        partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE
    )
    # returns a dataset of (image, label) pairs if labeled=True, or just images if labeled=False
    return dataset
We define the following function to get our different datasets.
def get_dataset(filenames, labeled=True):
    dataset = load_dataset(filenames, labeled=labeled)
    dataset = dataset.shuffle(2048)
    dataset = dataset.batch(BATCH_SIZE)
    # prefetch last, so whole batches are prepared while the model trains
    dataset = dataset.prefetch(buffer_size=AUTOTUNE)
    return dataset
Visualize input images
train_dataset = get_dataset(TRAINING_FILENAMES)
valid_dataset = get_dataset(VALID_FILENAMES)
test_dataset = get_dataset(TEST_FILENAMES, labeled=False)
image_batch, label_batch = next(iter(train_dataset))
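With matplotlib already imported, we can inspect the batch we just pulled. The helper below is a sketch (the `show_batch` name and the 3x3 grid are our own choices, not part of the original pipeline); it assumes `image_batch` is a `(batch, height, width, 3)` array of floats in [0, 255] and `label_batch` holds 0/1 targets.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_batch(image_batch, label_batch, n=9):
    # Display the first n images of the batch in a 3x3 grid,
    # titled with their benign/malignant labels.
    plt.figure(figsize=(10, 10))
    for i in range(min(n, len(image_batch))):
        plt.subplot(3, 3, i + 1)
        # Cast floats in [0, 255] to uint8 for display.
        plt.imshow(np.asarray(image_batch[i]).astype("uint8"))
        plt.title("MALIGNANT" if label_batch[i] else "BENIGN")
        plt.axis("off")
```

With the tensors from above, this could be called as `show_batch(image_batch.numpy(), label_batch.numpy())`.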