Predictions mean: 5.21, min: 4.85, max: 5.68, range: 0.83 - Actual: 5.0
Predictions mean: 6.53, min: 6.35, max: 6.64, range: 0.28 - Actual: 6.0
Predictions mean: 6.3, min: 6.05, max: 6.47, range: 0.42 - Actual: 6.0
Predictions mean: 6.44, min: 6.19, max: 6.59, range: 0.4 - Actual: 7.0
Notice that the model trained with the full training dataset shows a smaller range (uncertainty) in its predictions for the same inputs, compared to the model trained on a subset of the training dataset.
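The per-input spread comes from running several stochastic forward passes: the DenseVariational layers sample new weights on every call, so repeated calls on the same input disagree slightly. A minimal NumPy sketch of the aggregation step, where the stochastic `stochastic_predict` function is a stand-in for the Bayesian Keras model:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_predict(x):
    # Stand-in for a Bayesian model: each call effectively samples new
    # weights, so repeated calls on the same input give different outputs.
    return x + rng.normal(scale=0.2)

x = 5.0
samples = np.array([stochastic_predict(x) for _ in range(100)])
print(
    f"Predictions mean: {samples.mean():.2f}, min: {samples.min():.2f}, "
    f"max: {samples.max():.2f}, range: {samples.max() - samples.min():.2f}"
)
```

The reported "range" is simply max minus min over the sampled predictions; a tighter range indicates lower epistemic uncertainty.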
Experiment 3: probabilistic Bayesian neural network
So far, the output of both the standard and the Bayesian NN models we built is deterministic, that is, each produces a point estimate as the prediction for a given example. We can create a probabilistic NN by letting the model output a distribution. In that case, the model also captures the aleatoric uncertainty, which is due to irreducible noise in the data, or to the stochastic nature of the process generating the data.
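Aleatoric uncertainty puts a floor on achievable error: even a model that recovers the true mean function exactly is left with the noise variance. A small NumPy illustration on synthetic data (the linear generating process here is made up for the example):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: y = 2x plus irreducible Gaussian noise with stddev 0.5.
x = rng.uniform(0, 1, size=10_000)
y = 2.0 * x + rng.normal(scale=0.5, size=x.shape)

# Even predicting with the *true* mean function leaves residuals whose
# spread matches the noise stddev; no model can reduce this error.
residuals = y - 2.0 * x
print(f"Residual stddev: {residuals.std():.2f}")  # close to 0.5
```

A probabilistic model can report this floor by predicting a variance alongside the mean, which is exactly what the model below does.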
In this example, we model the output as an IndependentNormal distribution with learnable mean and variance parameters. If the task were classification, we would instead use IndependentBernoulli for binary classes, or OneHotCategorical for multiple classes, to model the distribution of the model output.
def create_probablistic_bnn_model(train_size):
    inputs = create_model_inputs()
    features = keras.layers.concatenate(list(inputs.values()))
    features = layers.BatchNormalization()(features)

    # Create hidden layers with weight uncertainty using the DenseVariational layer.
    for units in hidden_units:
        features = tfp.layers.DenseVariational(
            units=units,
            make_prior_fn=prior,
            make_posterior_fn=posterior,
            kl_weight=1 / train_size,
            activation="sigmoid",
        )(features)

    # Create a probabilistic output (Normal distribution), and use the `Dense` layer
    # to produce the parameters of the distribution.
    # We set units=2 to learn both the mean and the variance of the Normal distribution.
    distribution_params = layers.Dense(units=2)(features)
    outputs = tfp.layers.IndependentNormal(1)(distribution_params)

    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
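The two values from the final `Dense` layer parameterize the Normal distribution: one is used as the mean directly, while the scale must be constrained to be positive; tfp layers commonly do this with a softplus. A hedged NumPy sketch of the idea (the exact transform inside `IndependentNormal` is a tfp implementation detail):

```python
import numpy as np

def softplus(t):
    # Numerically stable softplus: log(1 + exp(t)).
    return np.logaddexp(0.0, t)

def params_to_normal(params):
    # params has shape (..., 2): first column -> mean (unconstrained),
    # second column -> stddev (mapped through softplus to stay positive).
    loc = params[..., 0]
    scale = softplus(params[..., 1])
    return loc, scale

params = np.array([[5.2, -1.0], [6.1, 0.3]])  # made-up Dense outputs
loc, scale = params_to_normal(params)
print(loc, scale)  # scale is strictly positive even for negative inputs
```

This is why `units=2` suffices for a univariate output: one raw value per distribution parameter, with positivity enforced by the layer rather than by the loss.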
Since the output of the model is a distribution rather than a point estimate, we use the negative log-likelihood as our loss function: it measures how likely the true data (targets) are under the estimated distribution produced by the model.
def negative_loglikelihood(targets, estimated_distribution):
    return -estimated_distribution.log_prob(targets)


num_epochs = 1000
prob_bnn_model = create_probablistic_bnn_model(train_size)
run_experiment(prob_bnn_model, negative_loglikelihood, train_dataset, test_dataset)
Start training the model...
Epoch 1/1000
17/17 [==============================] - 2s 36ms/step - loss: 11.2378 - root_mean_squared_error: 6.6758 - val_loss: 8.5554 - val_root_mean_squared_error: 6.6240
Epoch 2/1000
17/17 [==============================] - 0s 7ms/step - loss: 11.8285 - root_mean_squared_error: 6.5718 - val_loss: 8.2138 - val_root_mean_squared_error: 6.5256
Epoch 3/1000
17/17 [==============================] - 0s 7ms/step - loss: 8.8566 - root_mean_squared_error: 6.5369 - val_loss: 5.8749 - val_root_mean_squared_error: 6.3394
Epoch 4/1000
17/17 [==============================] - 0s 7ms/step - loss: 7.8191 - root_mean_squared_error: 6.3981 - val_loss: 7.6224 - val_root_mean_squared_error: 6.4473
Epoch 5/1000
17/17 [==============================] - 0s 7ms/step - loss: 6.2598 - root_mean_squared_error: 6.4613 - val_loss: 5.9415 - val_root_mean_squared_error: 6.3466
...
Epoch 995/1000
17/17 [==============================] - 0s 7ms/step - loss: 1.1323 - root_mean_squared_error: 1.0431 - val_loss: 1.1553 - val_root_mean_squared_error: 1.1060
Epoch 996/1000
17/17 [==============================] - 0s 7ms/step - loss: 1.1613 - root_mean_squared_error: 1.0686 - val_loss: 1.1554 - val_root_mean_squared_error: 1.0370
Epoch 997/1000
17/17 [==============================] - 0s 7ms/step - loss: 1.1351 - root_mean_squared_error: 1.0628 - val_loss: 1.1472 - val_root_mean_squared_error: 1.0813
Epoch 998/1000
17/17 [==============================] - 0s 7ms/step - loss: 1.1324 - root_mean_squared_error: 1.0858 - val_loss: 1.1527 - val_root_mean_squared_error: 1.0578
Epoch 999/1000
17/17 [==============================] - 0s 7ms/step - loss: 1.1591 - root_mean_squared_error: 1.0801 - val_loss: 1.1483 - val_root_mean_squared_error: 1.0442
Epoch 1000/1000
17/17 [==============================] - 0s 7ms/step - loss: 1.1402 - root_mean_squared_error: 1.0554 - val_loss: 1.1495 - val_root_mean_squared_error: 1.0389
Model training finished.
Train RMSE: 1.068
Evaluating model performance...
Test RMSE: 1.068
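As a sanity check on the loss used above: for a Normal distribution the negative log-likelihood has the closed form 0.5 * log(2 * pi * sigma^2) + (y - mu)^2 / (2 * sigma^2), which is what `log_prob` evaluates under the hood. A small NumPy version:

```python
import numpy as np

def normal_nll(y, loc, scale):
    # Negative log-likelihood of y under Normal(loc, scale):
    # 0.5 * log(2 * pi * scale^2) + (y - loc)^2 / (2 * scale^2)
    return 0.5 * np.log(2.0 * np.pi * scale**2) + (y - loc) ** 2 / (2.0 * scale**2)

# At the mean of a standard Normal the density is 1/sqrt(2*pi),
# so the NLL there is 0.5 * log(2*pi).
print(round(normal_nll(0.0, 0.0, 1.0), 4))  # 0.9189
```

Note the loss penalizes both being far from the target and being overconfident: shrinking `scale` lowers the first term but blows up the second when the prediction misses.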
Now let's produce an output from the model given the test examples. The output is now a distribution, and we can use its mean and standard deviation to compute a confidence interval (CI) for each prediction.
prediction_distribution = prob_bnn_model(examples)
prediction_mean = prediction_distribution.mean().numpy().tolist()
prediction_stdv = prediction_distribution.stddev().numpy()

# The 95% CI is computed as mean ± (1.96 * stdv)
upper = (prediction_mean + (1.96 * prediction_stdv)).tolist()
lower = (prediction_mean - (1.96 * prediction_stdv)).tolist()
prediction_stdv = prediction_stdv.tolist()

for idx in range(sample):
    print(
        f"Prediction mean: {round(prediction_mean[idx][0], 2)}, "
        f"stddev: {round(prediction_stdv[idx][0], 2)}, "
        f"95% CI: [{round(upper[idx][0], 2)} - {round(lower[idx][0], 2)}]"
        f" - Actual: {targets[idx]}"
    )
Prediction mean: 5.29, stddev: 0.66, 95% CI: [6.58 - 4.0] - Actual: 6.0
Prediction mean: 6.49, stddev: 0.81, 95% CI: [8.08 - 4.89] - Actual: 6.0
Prediction mean: 5.85, stddev: 0.7, 95% CI: [7.22 - 4.48] - Actual: 7.0
Prediction mean: 5.59, stddev: 0.69, 95% CI: [6.95 - 4.24] - Actual: 5.0
Prediction mean: 6.37, stddev: 0.87, 95% CI: [8.07 - 4.67] - Actual: 5.0
Prediction mean: 6.34, stddev: 0.78, 95% CI: [7.87 - 4.81] - Actual: 7.0
Prediction mean: 5.14, stddev: 0.65, 95% CI: [6.4 - 3.87] - Actual: 5.0
Prediction mean: 6.49, stddev: 0.81, 95% CI: [8.09 - 4.89] - Actual: 6.0
Prediction mean: 6.25, stddev: 0.77, 95% CI: [7.76 - 4.74] - Actual: 6.0
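The 1.96 multiplier is the 97.5th-percentile z-score of the standard normal, so mean ± 1.96 * stddev covers the central 95% of a Normal distribution. It can be recovered directly from the Python standard library:

```python
from statistics import NormalDist

# Quantile such that 2.5% of the standard normal lies above it
# (and, by symmetry, 2.5% below its negative): the 95% CI half-width in sigmas.
z = NormalDist().inv_cdf(0.975)
print(round(z, 2))  # 1.96
```

Other coverage levels follow the same pattern, e.g. `inv_cdf(0.995)` gives roughly 2.58 for a 99% interval.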