Upload README.md with huggingface_hub
README.md CHANGED

@@ -1,6 +1,6 @@
 # PPO-LSTM Model
-This model was trained using
+This model was trained using a custom multi-layer LSTM with PPO.
 
 **Training Data**: Custom sequence dataset
-**Algorithm**: Proximal Policy Optimization (PPO) with LSTM
+**Algorithm**: Proximal Policy Optimization (PPO) with a custom LSTM
 **Library**: Stable-Baselines3