Epoch 97/200
45/45 [==============================] - 23s 518ms/step - loss: 0.1951 - sparse_categorical_accuracy: 0.9264 - val_loss: 0.3281 - val_sparse_categorical_accuracy: 0.8682
Epoch 98/200
45/45 [==============================] - 23s 516ms/step - loss: 0.1899 - sparse_categorical_accuracy: 0.9354 - val_loss: 0.3307 - val_sparse_categorical_accuracy: 0.8696
Epoch 99/200
45/45 [==============================] - 23s 519ms/step - loss: 0.1901 - sparse_categorical_accuracy: 0.9250 - val_loss: 0.3307 - val_sparse_categorical_accuracy: 0.8710
Epoch 100/200
45/45 [==============================] - 23s 516ms/step - loss: 0.1902 - sparse_categorical_accuracy: 0.9319 - val_loss: 0.3259 - val_sparse_categorical_accuracy: 0.8696
Epoch 101/200
45/45 [==============================] - 23s 518ms/step - loss: 0.1868 - sparse_categorical_accuracy: 0.9358 - val_loss: 0.3262 - val_sparse_categorical_accuracy: 0.8724
Epoch 102/200
45/45 [==============================] - 23s 518ms/step - loss: 0.1779 - sparse_categorical_accuracy: 0.9431 - val_loss: 0.3250 - val_sparse_categorical_accuracy: 0.8710
Epoch 103/200
45/45 [==============================] - 23s 520ms/step - loss: 0.1870 - sparse_categorical_accuracy: 0.9351 - val_loss: 0.3260 - val_sparse_categorical_accuracy: 0.8724
Epoch 104/200
45/45 [==============================] - 23s 521ms/step - loss: 0.1826 - sparse_categorical_accuracy: 0.9344 - val_loss: 0.3232 - val_sparse_categorical_accuracy: 0.8766
Epoch 105/200
45/45 [==============================] - 23s 519ms/step - loss: 0.1731 - sparse_categorical_accuracy: 0.9399 - val_loss: 0.3245 - val_sparse_categorical_accuracy: 0.8724
Epoch 106/200
45/45 [==============================] - 23s 518ms/step - loss: 0.1766 - sparse_categorical_accuracy: 0.9361 - val_loss: 0.3254 - val_sparse_categorical_accuracy: 0.8682
Conclusions
In about 110-120 epochs (25s each on Colab), the model reaches a training accuracy of ~0.95, a validation accuracy of ~0.84, and a testing accuracy of ~0.85, without hyperparameter tuning. And that is for a model with fewer than 100k parameters. Of course, parameter count and accuracy could be improved by a hyperparameter search and a more sophisticated learning rate schedule, or a different optimizer.
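As one illustration of that last point, the constant learning rate could be swapped for a decaying schedule. The sketch below is a minimal, untuned example, assuming model is the compiled classifier trained above; the initial rate and step count are placeholder values, not recommendations.
from tensorflow import keras

# Hypothetical variant: recompile with a cosine-decay learning rate schedule.
# The schedule values below are assumptions, not tuned settings.
lr_schedule = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3,  # assumed starting rate
    decay_steps=200 * 45,        # 200 epochs x 45 steps/epoch, as in the log above
)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
    loss="sparse_categorical_crossentropy",
    metrics=["sparse_categorical_accuracy"],
)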
Timeseries forecasting for weather prediction
This notebook demonstrates how to do timeseries forecasting using an LSTM model.
Setup
This example requires TensorFlow 2.3 or higher.
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
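As a quick sanity check (not part of the original snippet), you can confirm that the installed version meets the requirement above:
print(tf.__version__)  # should print 2.3.0 or higher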
Climate Data Time-Series
We will be using the Jena Climate dataset recorded by the Max Planck Institute for Biogeochemistry. The dataset consists of 14 features such as temperature, pressure, and humidity, recorded once every 10 minutes.
Location: Weather Station, Max Planck Institute for Biogeochemistry in Jena, Germany
Time-frame Considered: Jan 10, 2009 - December 31, 2016
The table below shows the column names, their value formats, and their descriptions.

| Index | Features | Format | Description |
| --- | --- | --- | --- |
| 1 | Date Time | 01.01.2009 00:10:00 | Date-time reference |
| 2 | p (mbar) | 996.52 | Atmospheric pressure. The pascal is the SI derived unit of pressure; meteorological reports typically state atmospheric pressure in millibars. |
| 3 | T (degC) | -8.02 | Temperature in Celsius |
| 4 | Tpot (K) | 265.4 | Potential temperature in Kelvin |
| 5 | Tdew (degC) | -8.9 | Dew point temperature in Celsius. The dew point is a measure of the absolute amount of water in the air: the temperature at which the air can no longer hold all of its moisture and water condenses. |
| 6 | rh (%) | 93.3 | Relative humidity, a measure of how saturated the air is with water vapor. |
| 7 | VPmax (mbar) | 3.33 | Saturation vapor pressure |
| 8 | VPact (mbar) | 3.11 | Actual vapor pressure |
| 9 | VPdef (mbar) | 0.22 | Vapor pressure deficit |
| 10 | sh (g/kg) | 1.94 | Specific humidity |
| 11 | H2OC (mmol/mol) | 3.12 | Water vapor concentration |
| 12 | rho (g/m**3) | 1307.75 | Air density |
| 13 | wv (m/s) | 1.03 | Wind speed |
| 14 | max. wv (m/s) | 1.75 | Maximum wind speed |
| 15 | wd (deg) | 152.3 | Wind direction in degrees |
from zipfile import ZipFile

# Download and extract the zipped CSV, then load it into a DataFrame.
uri = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip"
zip_path = keras.utils.get_file(origin=uri, fname="jena_climate_2009_2016.csv.zip")
zip_file = ZipFile(zip_path)
zip_file.extractall()
csv_path = "jena_climate_2009_2016.csv"

df = pd.read_csv(csv_path)
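Before visualizing, it is worth a quick look at what was loaded. This inspection step is illustrative and not part of the original example; the file should contain on the order of 420k rows (one record every 10 minutes from 2009 through 2016) and 15 columns (the date-time stamp plus the 14 features).
# Illustrative sanity check: confirm the shape and columns of the DataFrame.
print(df.shape)             # on the order of (420000, 15)
print(df.columns.tolist())  # "Date Time" plus the 14 feature columns
print(df.head())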
Raw Data Visualization
To give us a sense of the data we are working with, each feature has been plotted below. This shows the distinct pattern of each feature over the time period from 2009 to 2016. It also shows where anomalies are present, which will be addressed during normalization.
titles = [
    "Pressure",
    "Temperature",
    "Temperature in Kelvin",
    "Temperature (dew point)",
    "Relative Humidity",
    "Saturation vapor pressure",
    "Vapor pressure",
    "Vapor pressure deficit",
    "Specific humidity",
    "Water vapor concentration",
    "Air density",
    "Wind speed",
    "Maximum wind speed",
    "Wind direction in degrees",
]
feature_keys = [
    "p (mbar)",
    "T (degC)",
    "Tpot (K)",
    "Tdew (degC)",
    "rh (%)",
    "VPmax (mbar)",
    "VPact (mbar)",
    "VPdef (mbar)",
    "sh (g/kg)",
    "H2OC (mmol/mol)",
    "rho (g/m**3)",