```python
    return train_dataset, test_dataset
```
## Compile, train, and evaluate the model

```python
hidden_units = [8, 8]
learning_rate = 0.001


def run_experiment(model, loss, train_dataset, test_dataset):
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=learning_rate),
        loss=loss,
        metrics=[keras.metrics.RootMeanSquaredError()],
    )

    print("Start training the model...")
    model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
    print("Model training finished.")
    _, rmse = model.evaluate(train_dataset, verbose=0)
    print(f"Train RMSE: {round(rmse, 3)}")

    print("Evaluating model performance...")
    _, rmse = model.evaluate(test_dataset, verbose=0)
    print(f"Test RMSE: {round(rmse, 3)}")
```
## Create model inputs

```python
FEATURE_NAMES = [
    "fixed acidity",
    "volatile acidity",
    "citric acid",
    "residual sugar",
    "chlorides",
    "free sulfur dioxide",
    "total sulfur dioxide",
    "density",
    "pH",
    "sulphates",
    "alcohol",
]


def create_model_inputs():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        inputs[feature_name] = layers.Input(
            name=feature_name, shape=(1,), dtype=tf.float32
        )
    return inputs
```
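As a quick standalone check, the helper returns one symbolic Keras `Input` per feature, keyed by feature name. A minimal sketch (the imports and feature list are repeated here so the snippet runs on its own):

```python
import tensorflow as tf
from tensorflow.keras import layers

FEATURE_NAMES = [
    "fixed acidity", "volatile acidity", "citric acid", "residual sugar",
    "chlorides", "free sulfur dioxide", "total sulfur dioxide",
    "density", "pH", "sulphates", "alcohol",
]

def create_model_inputs():
    # One (batch, 1) float input per numeric feature.
    inputs = {}
    for feature_name in FEATURE_NAMES:
        inputs[feature_name] = layers.Input(
            name=feature_name, shape=(1,), dtype=tf.float32
        )
    return inputs

inputs = create_model_inputs()
print(len(inputs))  # 11 inputs, one per feature
```

Keeping the inputs in a dictionary lets the model consume feature batches keyed by column name, which matches how `tf.data` yields CSV rows.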
## Experiment 1: standard neural network

We create a standard deterministic neural network model as a baseline.
```python
def create_baseline_model():
    inputs = create_model_inputs()
    input_values = [value for _, value in sorted(inputs.items())]
    features = keras.layers.concatenate(input_values)
    features = layers.BatchNormalization()(features)

    # Create hidden layers with deterministic weights using the Dense layer.
    for units in hidden_units:
        features = layers.Dense(units, activation="sigmoid")(features)
    # The output is deterministic: a single point estimate.
    outputs = layers.Dense(units=1)(features)

    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
```
Let's split the wine dataset into training and test sets, with 85% and 15% of the examples, respectively.
```python
dataset_size = 4898
batch_size = 256
train_size = int(dataset_size * 0.85)
train_dataset, test_dataset = get_train_and_test_splits(train_size, batch_size)
```
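A split like this can be sketched with the `tf.data` API. The helper name, seed, and shuffle buffer below are illustrative assumptions, not the tutorial's actual implementation:

```python
import tensorflow as tf

def split_dataset_sketch(dataset, train_size, batch_size):
    # Shuffle once with a fixed seed (and no reshuffling between epochs)
    # so train/test membership is reproducible, then carve off the first
    # `train_size` examples for training and keep the rest for testing.
    dataset = dataset.shuffle(buffer_size=5000, seed=42,
                              reshuffle_each_iteration=False)
    train_dataset = dataset.take(train_size).batch(batch_size)
    test_dataset = dataset.skip(train_size).batch(batch_size)
    return train_dataset, test_dataset

# Toy usage: 100 examples split 85 / 15.
train_ds, test_ds = split_dataset_sketch(tf.data.Dataset.range(100), 85, 10)
```

Disabling `reshuffle_each_iteration` matters here: with the default of `True`, `take` and `skip` would select different examples on each pass, leaking test examples into training.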
Now let's train the baseline model. We use the `MeanSquaredError` as the loss function.
```python
num_epochs = 100
mse_loss = keras.losses.MeanSquaredError()
baseline_model = create_baseline_model()
run_experiment(baseline_model, mse_loss, train_dataset, test_dataset)
```
```
Start training the model...
Epoch 1/100
17/17 [==============================] - 1s 53ms/step - loss: 37.5710 - root_mean_squared_error: 6.1294 - val_loss: 35.6750 - val_root_mean_squared_error: 5.9729
Epoch 2/100
17/17 [==============================] - 0s 7ms/step - loss: 35.5154 - root_mean_squared_error: 5.9594 - val_loss: 34.2430 - val_root_mean_squared_error: 5.8518
Epoch 3/100
17/17 [==============================] - 0s 7ms/step - loss: 33.9975 - root_mean_squared_error: 5.8307 - val_loss: 32.8003 - val_root_mean_squared_error: 5.7272
Epoch 4/100
17/17 [==============================] - 0s 12ms/step - loss: 32.5928 - root_mean_squared_error: 5.7090 - val_loss: 31.3385 - val_root_mean_squared_error: 5.5981
Epoch 5/100
17/17 [==============================] - 0s 7ms/step - loss: 30.8914 - root_mean_squared_error: 5.5580 - val_loss: 29.8659 - val_root_mean_squared_error: 5.4650
...
Epoch 95/100
17/17 [==============================] - 0s 6ms/step - loss: 0.6927 - root_mean_squared_error: 0.8322 - val_loss: 0.6901 - val_root_mean_squared_error: 0.8307
Epoch 96/100
17/17 [==============================] - 0s 6ms/step - loss: 0.6929 - root_mean_squared_error: 0.8323 - val_loss: 0.6866 - val_root_mean_squared_error: 0.8286
Epoch 97/100
17/17 [==============================] - 0s 6ms/step - loss: 0.6582 - root_mean_squared_error: 0.8112 - val_loss: 0.6797 - val_root_mean_squared_error: 0.8244
Epoch 98/100
17/17 [==============================] - 0s 6ms/step - loss: 0.6733 - root_mean_squared_error: 0.8205 - val_loss: 0.6740 - val_root_mean_squared_error: 0.8210
Epoch 99/100
17/17 [==============================] - 0s 7ms/step - loss: 0.6623 - root_mean_squared_error: 0.8138 - val_loss: 0.6713 - val_root_mean_squared_error: 0.8193
```