Epoch 42/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5024 - sparse_categorical_accuracy: 0.7857
Epoch 43/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5020 - sparse_categorical_accuracy: 0.7857
Epoch 44/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5009 - sparse_categorical_accuracy: 0.7865
Epoch 45/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.4998 - sparse_categorical_accuracy: 0.7868
Epoch 46/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5000 - sparse_categorical_accuracy: 0.7864
Epoch 47/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.4985 - sparse_categorical_accuracy: 0.7876
Epoch 48/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.4985 - sparse_categorical_accuracy: 0.7877
Epoch 49/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.4979 - sparse_categorical_accuracy: 0.7876
Epoch 50/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.4973 - sparse_categorical_accuracy: 0.7881
Model training finished
Test accuracy: 80.69%
The wide and deep model achieves ~81% test accuracy.
Experiment 3: Deep & Cross model
In the third experiment, we create a Deep & Cross model. The deep part of this model is the same as the deep part created in the previous experiment. The key idea of the cross part is to apply explicit feature crossing in an efficient way, where the degree of cross features grows with layer depth.
def create_deep_and_cross_model():
    inputs = create_model_inputs()
    x0 = encode_inputs(inputs, use_embedding=True)

    # Cross network: each layer crosses the original features x0 with the
    # current output, so the degree of feature interactions grows with depth.
    cross = x0
    for _ in hidden_units:
        units = cross.shape[-1]
        x = layers.Dense(units)(cross)
        cross = x0 * x + cross
    cross = layers.BatchNormalization()(cross)

    # Deep network: the same MLP as in the previous experiment.
    deep = x0
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([cross, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
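To see why the degree of cross features grows with layer depth, the cross-layer recurrence can be written as cross_{l+1} = x0 * Dense(cross_l) + cross_l: each step multiplies by the original input x0 again, so a stack of l cross layers can represent interactions up to degree l + 1. The following is a minimal NumPy sketch of this recurrence; the dimension and random weights are arbitrary illustrations, not values from the model above.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

x0 = rng.normal(size=dim)        # encoded input features
W = rng.normal(size=(dim, dim))  # stands in for a Dense layer's weights
b = np.zeros(dim)                # Dense layer bias

# Three cross-layer steps: cross_{l+1} = x0 * (W @ cross_l + b) + cross_l.
# Each step multiplies by x0 elementwise, raising the interaction degree by one.
cross = x0
for _ in range(3):
    cross = x0 * (W @ cross + b) + cross

print(cross.shape)  # the cross output keeps the input dimension: (4,)
```

Note that the output dimension never changes: the cross network enriches the representation with higher-degree interactions while staying in the input's feature space, which is why the Keras code above sets `units = cross.shape[-1]`.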
deep_and_cross_model = create_deep_and_cross_model()
keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir="LR")
('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) for plot_model/model_to_dot to work.')
Let's run it:
run_experiment(deep_and_cross_model)
Start training the model...
Epoch 1/50
1862/1862 [==============================] - 11s 5ms/step - loss: 0.8585 - sparse_categorical_accuracy: 0.6547
Epoch 2/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5968 - sparse_categorical_accuracy: 0.7424
Epoch 3/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5729 - sparse_categorical_accuracy: 0.7520
Epoch 4/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5610 - sparse_categorical_accuracy: 0.7583
Epoch 5/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5511 - sparse_categorical_accuracy: 0.7623
Epoch 6/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5460 - sparse_categorical_accuracy: 0.7651
Epoch 7/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5408 - sparse_categorical_accuracy: 0.7671
Epoch 8/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5374 - sparse_categorical_accuracy: 0.7695
Epoch 9/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5344 - sparse_categorical_accuracy: 0.7704
Epoch 10/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5310 - sparse_categorical_accuracy: 0.7715
Epoch 11/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5286 - sparse_categorical_accuracy: 0.7725
Epoch 12/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5254 - sparse_categorical_accuracy: 0.7737
Epoch 13/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5249 - sparse_categorical_accuracy: 0.7737
Epoch 14/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5223 - sparse_categorical_accuracy: 0.7752
Epoch 15/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5206 - sparse_categorical_accuracy: 0.7759
Epoch 16/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5187 - sparse_categorical_accuracy: 0.7765
Epoch 17/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5179 - sparse_categorical_accuracy: 0.7772
Epoch 18/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5152 - sparse_categorical_accuracy: 0.7788
Epoch 19/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5145 - sparse_categorical_accuracy: 0.7785
Epoch 20/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5128 - sparse_categorical_accuracy: 0.7800
Epoch 21/50
1862/1862 [==============================] - 5s 3ms/step - loss: 0.5117 - sparse_categorical_accuracy: 0.7803
Epoch 22/50