Epoch 100/100
17/17 [==============================] - 0s 6ms/step - loss: 0.6522 - root_mean_squared_error: 0.8075 - val_loss: 0.6666 - val_root_mean_squared_error: 0.8165
Model training finished.
Train RMSE: 0.809
Evaluating model performance...
Test RMSE: 0.816
We take a sample from the test set and use the model to obtain predictions for it. Note that since the baseline model is deterministic, we get a single point estimate for each test example, with no information about the uncertainty of the model or the prediction.
sample = 10
examples, targets = list(test_dataset.unbatch().shuffle(batch_size * 10).batch(sample))[
    0
]
predicted = baseline_model(examples).numpy()
for idx in range(sample):
    print(f"Predicted: {round(float(predicted[idx][0]), 1)} - Actual: {targets[idx]}")
Predicted: 6.0 - Actual: 6.0
Predicted: 6.2 - Actual: 6.0
Predicted: 5.8 - Actual: 7.0
Predicted: 6.0 - Actual: 5.0
Predicted: 5.7 - Actual: 5.0
Predicted: 6.2 - Actual: 7.0
Predicted: 5.6 - Actual: 5.0
Predicted: 6.2 - Actual: 6.0
Predicted: 6.2 - Actual: 6.0
Predicted: 6.2 - Actual: 7.0
Experiment 2: Bayesian neural network (BNN)
The objective of the Bayesian approach to modeling neural networks is to capture epistemic uncertainty: uncertainty about the model's fitness due to limited training data.
The idea is that, instead of learning specific weight (and bias) values, the Bayesian approach learns weight distributions - from which we can sample to produce an output for a given input - to encode weight uncertainty.
Thus, we need to define the prior and posterior distributions of these weights, and the training process learns the parameters of these distributions.
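To make this idea concrete, here is a minimal NumPy sketch, separate from the tutorial's model (the weight mean and standard deviation are made-up stand-ins for a learned posterior): because a fresh weight is sampled on every forward pass, repeated predictions for the same input vary, and their spread reflects the weight uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned posterior for a single weight: N(0.8, 0.15^2).
w_mean, w_std = 0.8, 0.15

def stochastic_predict(x, n_samples=1000):
    # Draw one weight per forward pass; each draw gives a different output.
    w = rng.normal(w_mean, w_std, size=n_samples)
    return w * x

preds = stochastic_predict(2.0)
prediction_mean = preds.mean()  # close to w_mean * x
prediction_std = preds.std()    # spread induced by the weight uncertainty
```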
# Define the prior weight distribution as Normal of mean=0 and stddev=1.
# Note that, in this example, the prior distribution is not trainable,
# as we fix its parameters.
def prior(kernel_size, bias_size, dtype=None):
    n = kernel_size + bias_size
    prior_model = keras.Sequential(
        [
            tfp.layers.DistributionLambda(
                lambda t: tfp.distributions.MultivariateNormalDiag(
                    loc=tf.zeros(n), scale_diag=tf.ones(n)
                )
            )
        ]
    )
    return prior_model
# Define variational posterior weight distribution as multivariate Gaussian.
# Note that the learnable parameters for this distribution are the means,
# variances, and covariances.
def posterior(kernel_size, bias_size, dtype=None):
    n = kernel_size + bias_size
    posterior_model = keras.Sequential(
        [
            tfp.layers.VariableLayer(
                tfp.layers.MultivariateNormalTriL.params_size(n), dtype=dtype
            ),
            tfp.layers.MultivariateNormalTriL(n),
        ]
    )
    return posterior_model
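The params_size(n) call above accounts for every learnable parameter of the full-covariance Gaussian: n entries for the mean vector plus the n(n+1)/2 entries of the lower-triangular Cholesky factor of the covariance. A quick sketch of that count (hypothetical helper, mirroring what params_size computes):

```python
def mvn_tril_params_size(n):
    # n mean entries plus n * (n + 1) // 2 lower-triangular scale entries.
    return n + n * (n + 1) // 2
```

Note that the count grows quadratically with n, since the posterior models covariances between every pair of weights.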
We use the tfp.layers.DenseVariational layer instead of the standard keras.layers.Dense layer in the neural network model.
def create_bnn_model(train_size):
    inputs = create_model_inputs()
    features = keras.layers.concatenate(list(inputs.values()))
    features = layers.BatchNormalization()(features)
    # Create hidden layers with weight uncertainty using the DenseVariational layer.
    for units in hidden_units:
        features = tfp.layers.DenseVariational(
            units=units,
            make_prior_fn=prior,
            make_posterior_fn=posterior,
            kl_weight=1 / train_size,
            activation="sigmoid",
        )(features)
    # The output is deterministic: a single point estimate.
    outputs = layers.Dense(units=1)(features)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
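The kl_weight=1 / train_size argument scales the KL divergence between the posterior and the prior so that its contribution to the loss is amortized over the training examples. As a rough sketch of the quantity being scaled (made-up posterior parameters, and the simpler diagonal-Gaussian case rather than the full-covariance posterior above), the closed-form KL against a standard-normal prior is:

```python
import numpy as np

def kl_to_std_normal(mu, sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ):
    # 0.5 * sum(sigma^2 + mu^2 - 1 - log(sigma^2))
    return 0.5 * np.sum(sigma ** 2 + mu ** 2 - 1.0 - np.log(sigma ** 2))

# Made-up posterior parameters for illustration.
mu = np.array([0.3, -0.1, 0.05])
sigma = np.array([0.9, 1.2, 1.0])

kl = kl_to_std_normal(mu, sigma)
train_size = 1000
penalty = kl / train_size  # per-example weighting, as in kl_weight above
```

The KL is zero exactly when the posterior equals the prior, and grows as the learned distribution moves away from it.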
Epistemic uncertainty can be reduced by increasing the size of the training data. That is, the more data the BNN model sees, the more certain it becomes about its estimates for the weights (distribution parameters). Let's test this behaviour by training the BNN model on a small subset of the training set, and then on the full training set, and comparing the output variances.
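This shrinking of epistemic uncertainty with data can be computed exactly in a toy conjugate model (a sketch unrelated to the BNN's actual posterior): the posterior standard deviation of a Gaussian mean under a standard-normal prior narrows as observations accumulate.

```python
import numpy as np

def posterior_std(n, prior_var=1.0, noise_var=1.0):
    # Posterior variance of a Gaussian mean with prior N(0, prior_var),
    # after n observations with known noise variance noise_var:
    #   1 / (1/prior_var + n/noise_var)
    return np.sqrt(1.0 / (1.0 / prior_var + n / noise_var))

std_small = posterior_std(300)   # illustrative: ~30% of the data
std_full = posterior_std(1000)   # illustrative: full training set
```

With more data the posterior concentrates, mirroring the narrower prediction ranges we expect from the BNN trained on the full dataset.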
Train BNN with a small training subset.
num_epochs = 500
train_sample_size = int(train_size * 0.3)
small_train_dataset = train_dataset.unbatch().take(train_sample_size).batch(batch_size)
bnn_model_small = create_bnn_model(train_sample_size)
run_experiment(bnn_model_small, mse_loss, small_train_dataset, test_dataset)
Start training the model...
Epoch 1/500
5/5 [==============================] - 2s 123ms/step - loss: 34.5497 - root_mean_squared_error: 5.8764 - val_loss: 37.1164 - val_root_mean_squared_error: 6.0910
Epoch 2/500
5/5 [==============================] - 0s 28ms/step - loss: 36.0738 - root_mean_squared_error: 6.0007 - val_loss: 31.7373 - val_root_mean_squared_error: 5.6322
Epoch 3/500