Deploy Neuro-Flyt 3D Training
Antigravity Agent committed · commit ae22fc1 · parent c98f107
This view is limited to 50 files because it contains too many changes.
- .gitignore +58 -0
- Dockerfile +22 -0
- README.md +34 -10
- README_HF.md +79 -0
- debug_imports.py +31 -0
- demo/__init__.py +2 -0
- demo/visualize_drone.py +425 -0
- demo_3d.py +72 -0
- demo_interactive.py +124 -0
- demo_log.txt +43 -0
- demo_log_2.txt +43 -0
- demo_log_3.txt +87 -0
- demo_log_4.txt +87 -0
- demo_log_5.txt +9 -0
- deploy_log.txt +254 -0
- deploy_log_2.txt +30 -0
- env/__init__.py +2 -0
- env/drone_3d.py +143 -0
- env/drone_env.py +286 -0
- eval/__init__.py +2 -0
- eval/eval_liquid_policy.py +177 -0
- eval/eval_mlp_baseline.py +176 -0
- install_log.txt +197 -0
- interactive_log.txt +8 -0
- main.py +144 -0
- models/__init__.py +2 -0
- models/liquid_cell.py +86 -0
- models/liquid_policy.py +81 -0
- models/liquid_ppo.py +100 -0
- output.txt +6 -0
- pip_log.txt +93 -0
- pybullet_bin_log.txt +3 -0
- pybullet_log.txt +0 -0
- pygame_log.txt +3 -0
- requirements.txt +13 -0
- run_demo.sh +9 -0
- setup.sh +45 -0
- test_log.txt +14 -0
- test_physics.py +43 -0
- test_random_1.txt +8 -0
- test_random_2.txt +8 -0
- test_result_final.txt +7 -0
- test_success.txt +7 -0
- train.py +35 -0
- train/__init__.py +2 -0
- train/train_liquid_ppo.py +209 -0
- train/train_mlp_ppo.py +159 -0
- train_hf.py +68 -0
- train_log_500k.txt +1974 -0
- train_log_full.txt +64 -0
.gitignore
ADDED
@@ -0,0 +1,58 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# Virtual Environment
+venv/
+ENV/
+.venv
+# Note: env/ is our project folder, not a virtual environment
+
+# IDE
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+
+# OS
+.DS_Store
+Thumbs.db
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# PyTorch
+*.pth
+*.pt
+
+# Model checkpoints and logs
+models/*.pkl
+models/*.h5
+models/*.ckpt
+logs/
+runs/
+wandb/
+
+# Environment variables
+.env
+.env.local
+
Dockerfile
ADDED
@@ -0,0 +1,22 @@
+FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
+
+# Install system dependencies for PyBullet/OpenGL
+RUN apt-get update && apt-get install -y \
+    libgl1-mesa-glx \
+    libglib2.0-0 \
+    git \
+    && rm -rf /var/lib/apt/lists/*
+
+WORKDIR /app
+
+# Copy requirements and install
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+RUN pip install huggingface_hub
+
+# Copy project files
+COPY . .
+
+# Default command (can be overridden in Space settings)
+# Expects HF_TOKEN and REPO_ID env vars to be set in the Space
+CMD ["python", "train_hf.py", "--repo_id", "ylop/neuro-flyt-3d", "--steps", "500000"]
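The `CMD` above assumes `train_hf.py` reads `HF_TOKEN` from the environment and pushes the finished checkpoint itself. A minimal sketch of that upload step with `huggingface_hub` — `push_model` is a hypothetical helper name, not necessarily how the committed `train_hf.py` is structured:

```python
import os

from huggingface_hub import HfApi

def push_model(local_path: str = "liquid_ppo_drone_final.zip",
               repo_id: str = "ylop/neuro-flyt-3d") -> None:
    """Upload the trained checkpoint to a Hugging Face model repo (hypothetical helper)."""
    api = HfApi(token=os.environ["HF_TOKEN"])  # the secret configured in the Space
    api.create_repo(repo_id, repo_type="model", exist_ok=True)
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=local_path,
        repo_id=repo_id,
        repo_type="model",
    )
```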
README.md
CHANGED
@@ -1,10 +1,34 @@
(the 10 removed lines are not shown in this view)
+# Project Neuro-Flyt 3D
+
+**Goal:** Build a verifiable 3D drone-control demo using Liquid Neural Networks.
+
+## Installation
+
+1. **Clone the repository** (if you haven't already).
+2. **Install dependencies:**
+   ```bash
+   pip install -r requirements.txt
+   ```
+   *Note: You may need to install `opensimplex` and `ncps` manually if they are not in `requirements.txt` yet.*
+   ```bash
+   pip install opensimplex ncps
+   ```
+
+## Usage
+
+### Run the Demo
+To launch the 3D visualization with the "Antigravity" hurricane effect:
+```bash
+./run_demo.sh
+```
+
+### Verify Physics
+To verify that the wind field is generating non-zero forces:
+```bash
+python test_physics.py
+```
+
+## Project Structure
+- `env/drone_3d.py`: Custom PyFlyt environment with a 3D Perlin-noise wind field.
+- `models/liquid_ppo.py`: PPO agent with an LTC (Liquid Time-Constant) feature extractor.
+- `demo_3d.py`: Main visualization script.
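The README describes `env/drone_3d.py` as sampling a 3D Perlin-noise wind field, but that file's diff is not shown in this view, so here is a rough illustration of the idea only: `wind_at` and the coordinate scaling are invented for this sketch, and the `wind_scale`/`wind_speed` names merely mirror the `Drone3DEnv` kwargs used in `demo_3d.py`.

```python
import numpy as np
import opensimplex

opensimplex.seed(42)

def wind_at(pos: np.ndarray, t: float,
            wind_scale: float = 5.0, wind_speed: float = 2.0) -> np.ndarray:
    """Sample a smooth, time-varying 3D wind vector (illustrative sketch)."""
    x, y, z = pos / 10.0  # map world coordinates into noise space
    w = wind_speed * t    # advance the field over time
    return wind_scale * np.array([
        opensimplex.noise4(x, y, z, w),          # one decorrelated noise
        opensimplex.noise4(x + 100.0, y, z, w),  # channel per wind axis,
        opensimplex.noise4(x, y + 100.0, z, w),  # offset to break symmetry
    ])
```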
README_HF.md
ADDED
@@ -0,0 +1,79 @@
+# Deploying Neuro-Flyt 3D to Hugging Face Spaces
+
+This guide explains how to use your organization's GPUs on Hugging Face to train the Neuro-Flyt 3D model.
+
+## Prerequisites
+1. A Hugging Face account.
+2. An organization with GPU billing enabled (or a personal account with GPU access).
+3. A write access token (Settings -> Access Tokens).
+
+## Steps
+
+### 1. Create a New Space
+1. Go to [huggingface.co/new-space](https://huggingface.co/new-space).
+2. **Owner:** Select your organization.
+3. **Space Name:** `neuro-flyt-training` (or similar).
+4. **SDK:** Select **Docker**.
+5. **Space Hardware:** Select a GPU instance (e.g., **T4 small** or **A10G**).
+
+### 2. Configure Secrets
+In the Space settings, go to **Settings -> Variables and secrets**.
+Add the following **secret**:
+- `HF_TOKEN`: Your write access token (starts with `hf_...`).
+
+### 3. Deploy Code
+Deploy by pushing the code to the Space's Git repository.
+
+```bash
+# 1. Install git-lfs if needed
+git lfs install
+
+# 2. Clone your Space (replace with your actual repo URL)
+git clone https://huggingface.co/spaces/YOUR_ORG/neuro-flyt-training
+cd neuro-flyt-training
+
+# 3. Copy project files
+cp -r /path/to/Drone-go-brrrrr/* .
+
+# 4. Push to Space
+git add .
+git commit -m "Deploy training job"
+git push
+```
+
+### 4. Monitor Training
+- Go to the **App** tab in your Space.
+- The training logs stream there in real time.
+- Training runs for 500,000 steps.
+
+### 5. Access the Trained Model
+- Once finished, the script automatically pushes the trained model (`liquid_ppo_drone_final.zip`) to your model repository (defined in `train_hf.py` or via arguments).
+- You can then download this model and use it locally with `demo_3d.py`.
+
+## Customization
+- **Repo ID:** Edit `Dockerfile` or `train_hf.py` to change the target model repository ID (`--repo_id`).
+- **Steps:** Change `--steps` in `Dockerfile` to adjust training duration.
+
+## Hardware & Training Recommendations
+
+### Which GPU?
+* **A100 Large (80GB):** **The ultimate choice.** If you want to train for 5M+ steps in the shortest time possible, pick this. We have optimized the code to use **16 parallel environments** and **large batch sizes (4096)** to fully saturate the A100.
+* **A10G Large (24GB):** **Excellent value.** Very fast and capable. It handles the parallel training easily and is much cheaper than the A100.
+* **T4 (16GB):** **Budget option.** It will work, but you won't see the speedup from parallelization as clearly as with the Ampere cards (A10G/A100).
+
+### Efficiency Optimizations (Implemented)
+To keep the GPU from sitting idle, we have updated `train_hf.py` to:
+1. **Parallel physics:** run **16 drones** simultaneously on the CPU.
+2. **Large batches:** process **4096 samples** at once on the GPU.
+3. **Result:** training is roughly 10-15x faster than the standard script (a configuration sketch follows this file's diff).
+
+### How Many Episodes?
+The environment `max_steps` is 1000, so 1,000 steps correspond to roughly one episode.
+* **Minimum (proof of concept):** **500,000 steps** (~500 episodes). The drone learns to hover and roughly follow the target.
+* **Recommended (robust):** **1,000,000-2,000,000 steps** (~1,000-2,000 episodes). This lets the Liquid Network fully adapt to the random wind turbulence and master the physics.
+* **High performance:** **5,000,000+ steps**, for near-perfect flight control.
+
+### Efficiency Tip
+Reinforcement learning is often CPU-bound (physics simulation). To train efficiently:
+1. Use a Space with **many CPU vCores** (8+) to run environments in parallel.
+2. Use the **A10G** GPU to handle the heavy math of the Liquid Time-Constant (LTC) cells.
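The "Efficiency Optimizations" above map onto a standard Stable-Baselines3 configuration. A minimal sketch of the 16-environment, 4096-sample setup — assuming `Drone3DEnv` needs no required constructor arguments, and with `n_steps=256` chosen here only so the rollout arithmetic works out (the committed `train_hf.py` may differ):

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

from env.drone_3d import Drone3DEnv

if __name__ == "__main__":
    # 16 physics simulations run in parallel OS processes (the CPU-bound part),
    # while the GPU digests large batches during each PPO update.
    vec_env = make_vec_env(Drone3DEnv, n_envs=16, vec_env_cls=SubprocVecEnv)

    model = PPO(
        "MlpPolicy",
        vec_env,
        n_steps=256,      # 256 steps x 16 envs = 4096 samples per rollout
        batch_size=4096,  # one large GPU batch per gradient step
        verbose=1,
    )
    model.learn(total_timesteps=500_000)
```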
debug_imports.py
ADDED
@@ -0,0 +1,31 @@
+import sys
+print(f"Python version: {sys.version}")
+try:
+    import numpy
+    print("Numpy imported")
+except ImportError as e:
+    print(f"Numpy failed: {e}")
+
+try:
+    import gymnasium
+    print("Gymnasium imported")
+except ImportError as e:
+    print(f"Gymnasium failed: {e}")
+
+try:
+    import PyFlyt
+    print("PyFlyt imported")
+except ImportError as e:
+    print(f"PyFlyt failed: {e}")
+
+try:
+    import opensimplex
+    print("Opensimplex imported")
+except ImportError as e:
+    print(f"Opensimplex failed: {e}")
+
+try:
+    import ncps
+    print("Ncps imported")
+except ImportError as e:
+    print(f"Ncps failed: {e}")
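The repeated try/except blocks in `debug_imports.py` do the job, but `importlib` can drive the same check from a list; a shorter equivalent (a suggestion, not part of this commit):

```python
import importlib

# Same probe as debug_imports.py, one loop instead of five copies.
for name in ["numpy", "gymnasium", "PyFlyt", "opensimplex", "ncps"]:
    try:
        importlib.import_module(name)
        print(f"{name} imported")
    except ImportError as e:
        print(f"{name} failed: {e}")
```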
demo/__init__.py
ADDED
@@ -0,0 +1,2 @@
+"""Demo and visualization scripts."""
+
demo/visualize_drone.py
ADDED
@@ -0,0 +1,425 @@
+"""
+Visual demonstration of the drone environment using Pygame.
+
+This script loads a trained model and visualizes the drone navigating
+through the environment with wind forces.
+"""
+
+import os
+import sys
+import pygame
+import numpy as np
+from typing import Optional
+
+# Add project root to path
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from env.drone_env import DroneWindEnv
+from stable_baselines3 import PPO
+
+
+# Pygame constants
+WINDOW_WIDTH = 800
+WINDOW_HEIGHT = 600
+FPS = 30
+
+# Color definitions
+BLACK = (0, 0, 0)
+WHITE = (255, 255, 255)
+RED = (255, 0, 0)
+GREEN = (0, 255, 0)
+BLUE = (0, 0, 255)
+YELLOW = (255, 255, 0)
+CYAN = (0, 255, 255)
+MAGENTA = (255, 0, 255)
+GRAY = (128, 128, 128)
+DARK_GRAY = (64, 64, 64)
+ORANGE = (255, 165, 0)
+
+
+class DroneVisualizer:
+    """Pygame-based visualizer for the drone environment."""
+
+    def __init__(self, env: DroneWindEnv, model: Optional[PPO] = None):
+        """
+        Initialize the visualizer.
+
+        Args:
+            env: DroneWindEnv instance
+            model: Optional trained PPO model (if None, uses random actions)
+        """
+        self.env = env
+        self.model = model
+
+        # Initialize Pygame
+        pygame.init()
+        self.screen = pygame.display.set_mode((WINDOW_WIDTH, WINDOW_HEIGHT))
+        pygame.display.set_caption("Drone RL - Visual Demonstration")
+        self.clock = pygame.time.Clock()
+        self.font = pygame.font.Font(None, 24)
+        self.small_font = pygame.font.Font(None, 18)
+
+        # World to screen scaling
+        # Environment is [0, 1] x [0, 1], we'll use most of the screen
+        self.world_margin = 50
+        self.world_width = WINDOW_WIDTH - 2 * self.world_margin
+        self.world_height = WINDOW_HEIGHT - 2 * self.world_margin
+
+    def world_to_screen(self, x: float, y: float) -> tuple[int, int]:
+        """Convert world coordinates [0,1] to screen coordinates."""
+        screen_x = int(self.world_margin + x * self.world_width)
+        # Flip y-axis (world y=0 is bottom, screen y=0 is top)
+        screen_y = int(WINDOW_HEIGHT - self.world_margin - y * self.world_height)
+        return screen_x, screen_y
+
+    def draw_drone(self, x: float, y: float, vx: float, vy: float):
+        """Draw the drone as a circle with velocity vector."""
+        screen_x, screen_y = self.world_to_screen(x, y)
+
+        # Draw drone body (circle)
+        drone_radius = 15
+        pygame.draw.circle(self.screen, CYAN, (screen_x, screen_y), drone_radius)
+        pygame.draw.circle(self.screen, BLUE, (screen_x, screen_y), drone_radius, 2)
+
+        # Draw velocity vector
+        if abs(vx) > 0.01 or abs(vy) > 0.01:
+            # Scale velocity for visualization
+            scale = 30
+            end_x = screen_x + int(vx * scale)
+            end_y = screen_y - int(vy * scale)  # Flip y for screen
+            pygame.draw.line(self.screen, YELLOW, (screen_x, screen_y), (end_x, end_y), 3)
+            # Draw arrowhead
+            if abs(vx) > 0.01 or abs(vy) > 0.01:
+                angle = np.arctan2(-vy, vx)  # Negative vy because screen y is flipped
+                arrow_size = 8
+                arrow_x1 = end_x - arrow_size * np.cos(angle - np.pi / 6)
+                arrow_y1 = end_y - arrow_size * np.sin(angle - np.pi / 6)
+                arrow_x2 = end_x - arrow_size * np.cos(angle + np.pi / 6)
+                arrow_y2 = end_y - arrow_size * np.sin(angle + np.pi / 6)
+                pygame.draw.line(self.screen, YELLOW, (end_x, end_y), (int(arrow_x1), int(arrow_y1)), 2)
+                pygame.draw.line(self.screen, YELLOW, (end_x, end_y), (int(arrow_x2), int(arrow_y2)), 2)
+
+    def draw_wind(self, wind_x: float, wind_y: float):
+        """Draw wind arrows indicating direction."""
+        # Draw fewer, clearer wind arrows
+        grid_size = 6
+        for i in range(grid_size):
+            for j in range(grid_size):
+                x = (i + 0.5) / grid_size
+                y = (j + 0.5) / grid_size
+                screen_x, screen_y = self.world_to_screen(x, y)
+
+                # Draw wind arrow
+                if abs(wind_x) > 0.01 or abs(wind_y) > 0.01:
+                    scale = 25
+                    end_x = screen_x + int(wind_x * scale)
+                    end_y = screen_y - int(wind_y * scale)  # Flip y
+
+                    # Color based on wind strength
+                    wind_strength = abs(wind_x) + abs(wind_y)
+                    if wind_strength < 1.0:
+                        color = GREEN
+                    elif wind_strength < 1.5:
+                        color = YELLOW
+                    else:
+                        color = ORANGE
+
+                    # Draw arrow line
+                    pygame.draw.line(self.screen, color, (screen_x, screen_y), (end_x, end_y), 3)
+
+                    # Draw arrowhead
+                    if abs(wind_x) > 0.01 or abs(wind_y) > 0.01:
+                        angle = np.arctan2(-wind_y, wind_x)  # Negative y because screen y is flipped
+                        arrow_size = 10
+                        arrow_x1 = end_x - arrow_size * np.cos(angle - np.pi / 6)
+                        arrow_y1 = end_y - arrow_size * np.sin(angle - np.pi / 6)
+                        arrow_x2 = end_x - arrow_size * np.cos(angle + np.pi / 6)
+                        arrow_y2 = end_y - arrow_size * np.sin(angle + np.pi / 6)
+                        pygame.draw.polygon(self.screen, color, [
+                            (end_x, end_y),
+                            (int(arrow_x1), int(arrow_y1)),
+                            (int(arrow_x2), int(arrow_y2))
+                        ])
+
+    def draw_boundaries(self):
+        """Draw the world boundaries."""
+        # Top boundary
+        top_left = self.world_to_screen(0, 1)
+        top_right = self.world_to_screen(1, 1)
+        pygame.draw.line(self.screen, RED, top_left, top_right, 3)
+
+        # Bottom boundary
+        bot_left = self.world_to_screen(0, 0)
+        bot_right = self.world_to_screen(1, 0)
+        pygame.draw.line(self.screen, RED, bot_left, bot_right, 3)
+
+        # Left boundary
+        pygame.draw.line(self.screen, RED, top_left, bot_left, 3)
+
+        # Right boundary
+        pygame.draw.line(self.screen, RED, top_right, bot_right, 3)
+
+    def draw_target_zone(self, target_spawned: bool = True):
+        """Draw the target zone (box) that the drone needs to reach."""
+        from env.drone_env import TARGET_X_MIN, TARGET_X_MAX, TARGET_Y_MIN, TARGET_Y_MAX, TARGET_SPAWN_DELAY
+
+        # Only draw if target has spawned
+        if not target_spawned:
+            return
+
+        # Get screen coordinates for target zone corners
+        top_left = self.world_to_screen(TARGET_X_MIN, TARGET_Y_MAX)
+        top_right = self.world_to_screen(TARGET_X_MAX, TARGET_Y_MAX)
+        bot_left = self.world_to_screen(TARGET_X_MIN, TARGET_Y_MIN)
+        bot_right = self.world_to_screen(TARGET_X_MAX, TARGET_Y_MIN)
+
+        # Draw target zone as a semi-transparent box
+        # Create a surface for transparency
+        target_surface = pygame.Surface((WINDOW_WIDTH, WINDOW_HEIGHT))
+        target_surface.set_alpha(100)  # Semi-transparent
+
+        # Draw filled rectangle
+        rect = pygame.Rect(
+            top_left[0], top_left[1],
+            top_right[0] - top_left[0],
+            bot_left[1] - top_left[1]
+        )
+        pygame.draw.rect(target_surface, MAGENTA, rect)
+        self.screen.blit(target_surface, (0, 0))
+
+        # Draw border
+        pygame.draw.line(self.screen, MAGENTA, top_left, top_right, 3)
+        pygame.draw.line(self.screen, MAGENTA, top_right, bot_right, 3)
+        pygame.draw.line(self.screen, MAGENTA, bot_right, bot_left, 3)
+        pygame.draw.line(self.screen, MAGENTA, bot_left, top_left, 3)
+
+        # Draw label
+        label_x = (top_left[0] + top_right[0]) // 2
+        label_y = (top_left[1] + bot_left[1]) // 2
+        text = self.small_font.render("TARGET", True, WHITE)
+        text_rect = text.get_rect(center=(label_x, label_y))
+        self.screen.blit(text, text_rect)
+
+    def draw_info(self, step: int, reward: float, action: Optional[int] = None, in_target: bool = False):
+        """Draw information text."""
+        y_offset = 10
+
+        # Step count
+        text = self.font.render(f"Step: {step}", True, WHITE)
+        self.screen.blit(text, (10, y_offset))
+        y_offset += 30
+
+        # Reward
+        text = self.font.render(f"Reward: {reward:.2f}", True, WHITE)
+        self.screen.blit(text, (10, y_offset))
+        y_offset += 30
+
+        # In target zone status
+        target_color = GREEN if in_target else GRAY
+        target_text = "IN TARGET ZONE!" if in_target else "Not in target"
+        text = self.font.render(target_text, True, target_color)
+        self.screen.blit(text, (10, y_offset))
+        y_offset += 30
+
+        # Position
+        text = self.small_font.render(f"Position: ({self.env.x:.2f}, {self.env.y:.2f})", True, WHITE)
+        self.screen.blit(text, (10, y_offset))
+        y_offset += 25
+
+        # Velocity
+        text = self.small_font.render(f"Velocity: ({self.env.vx:.2f}, {self.env.vy:.2f})", True, WHITE)
+        self.screen.blit(text, (10, y_offset))
+        y_offset += 25
+
+        # Wind
+        text = self.small_font.render(f"Wind: ({self.env.wind_x:.2f}, {self.env.wind_y:.2f})", True, GREEN)
+        self.screen.blit(text, (10, y_offset))
+        y_offset += 25
+
+        # Action
+        if action is not None:
+            action_names = ["No thrust", "Up", "Down", "Left", "Right"]
+            text = self.small_font.render(f"Action: {action_names[action]}", True, YELLOW)
+            self.screen.blit(text, (10, y_offset))
+            y_offset += 25
+
+        # Model info
+        if self.model is not None:
+            text = self.small_font.render("Mode: AI Agent (Liquid NN)", True, CYAN)
+        else:
+            text = self.small_font.render("Mode: Random Actions", True, GRAY)
+        self.screen.blit(text, (10, y_offset))
+
+    def run(self, max_steps: int = 500, speed: float = 1.0):
+        """
+        Run the visualization.
+
+        Args:
+            max_steps: Maximum number of steps to run
+            speed: Speed multiplier (1.0 = normal, higher = faster)
+        """
+        obs, info = self.env.reset()
+        done = False
+        truncated = False
+        step_count = 0
+        action = None
+
+        running = True
+        paused = False
+
+        while running and step_count < max_steps:
+            # Handle events
+            for event in pygame.event.get():
+                if event.type == pygame.QUIT:
+                    running = False
+                elif event.type == pygame.KEYDOWN:
+                    if event.key == pygame.K_SPACE:
+                        paused = not paused
+                    elif event.key == pygame.K_r:
+                        # Reset
+                        obs, info = self.env.reset()
+                        done = False
+                        truncated = False
+                        step_count = 0
+                    elif event.key == pygame.K_ESCAPE:
+                        running = False
+
+            if not paused and not done and not truncated:
+                # Get action
+                if self.model is not None:
+                    action, _ = self.model.predict(obs, deterministic=True)
+                else:
+                    action = self.env.action_space.sample()
+
+                # Step environment
+                obs, reward, done, truncated, info = self.env.step(action)
+                step_count += 1
+                in_target = info.get("in_target", False)
+                target_spawned = info.get("target_spawned", False)
+
+            # Draw everything
+            self.screen.fill(BLACK)
+
+            # Draw boundaries
+            self.draw_boundaries()
+
+            # Draw target zone (only if spawned)
+            target_spawned_current = info.get("target_spawned", self.env.step_count >= 50) if not paused else False
+            self.draw_target_zone(target_spawned=target_spawned_current)
+
+            # Draw wind arrows
+            self.draw_wind(self.env.wind_x, self.env.wind_y)
+
+            # Draw drone
+            self.draw_drone(self.env.x, self.env.y, self.env.vx, self.env.vy)
+
+            # Get in_target from info if available, otherwise compute
+            if not paused and 'in_target' in locals():
+                current_in_target = in_target
+            else:
+                from env.drone_env import TARGET_X_MIN, TARGET_X_MAX, TARGET_Y_MIN, TARGET_Y_MAX
+                current_in_target = (
+                    TARGET_X_MIN <= self.env.x <= TARGET_X_MAX and
+                    TARGET_Y_MIN <= self.env.y <= TARGET_Y_MAX
+                )
+
+            # Draw info
+            self.draw_info(step_count, reward if not paused else 0, action, current_in_target)
+
+            # Draw pause indicator
+            if paused:
+                text = self.font.render("PAUSED (SPACE to resume)", True, YELLOW)
+                text_rect = text.get_rect(center=(WINDOW_WIDTH // 2, 30))
+                self.screen.blit(text, text_rect)
+
+            # Draw controls
+            controls_y = WINDOW_HEIGHT - 80
+            controls = [
+                "SPACE: Pause/Resume",
+                "R: Reset",
+                "ESC: Quit"
+            ]
+            for i, control in enumerate(controls):
+                text = self.small_font.render(control, True, GRAY)
+                self.screen.blit(text, (10, controls_y + i * 20))
+
+            pygame.display.flip()
+
+            # Control speed
+            if not paused:
+                self.clock.tick(FPS * speed)
+            else:
+                self.clock.tick(10)
+
+            # Auto-reset on done/truncated
+            if (done or truncated) and not paused:
+                pygame.time.wait(1000)  # Wait 1 second before reset
+                obs, info = self.env.reset()
+                done = False
+                truncated = False
+                step_count = 0
+
+        pygame.quit()
+
+
+def main():
+    """Main function to run the visualization."""
+    import argparse
+
+    parser = argparse.ArgumentParser(description="Visualize drone environment")
+    parser.add_argument(
+        "--model-path",
+        type=str,
+        default="models/liquid_policy.zip",
+        help="Path to trained model (default: models/liquid_policy.zip)"
+    )
+    parser.add_argument(
+        "--random",
+        action="store_true",
+        help="Use random actions instead of trained model"
+    )
+    parser.add_argument(
+        "--max-steps",
+        type=int,
+        default=500,
+        help="Maximum steps per episode (default: 500)"
+    )
+    parser.add_argument(
+        "--speed",
+        type=float,
+        default=1.0,
+        help="Animation speed multiplier (default: 1.0)"
+    )
+
+    args = parser.parse_args()
+
+    # Create environment
+    env = DroneWindEnv()
+
+    # Load model if specified
+    model = None
+    if not args.random:
+        if os.path.exists(args.model_path):
+            print(f"Loading model from {args.model_path}...")
+            model = PPO.load(args.model_path, env=env)
+            print("Model loaded successfully!")
+        else:
+            print(f"Model not found at {args.model_path}, using random actions")
+
+    # Create visualizer
+    visualizer = DroneVisualizer(env, model)
+
+    # Run visualization
+    print("\nStarting visualization...")
+    print("Controls:")
+    print("  SPACE: Pause/Resume")
+    print("  R: Reset episode")
+    print("  ESC: Quit")
+    print()
+
+    visualizer.run(max_steps=args.max_steps, speed=args.speed)
+
+
+if __name__ == "__main__":
+    main()
+
demo_3d.py
ADDED
@@ -0,0 +1,72 @@
+import numpy as np
+import time
+import os
+import matplotlib.pyplot as plt
+from mpl_toolkits.mplot3d import Axes3D
+from env.drone_3d import Drone3DEnv
+from models.liquid_ppo import make_liquid_ppo
+from stable_baselines3 import PPO
+
+def run_demo():
+    print("Initializing Project Neuro-Flyt 3D Demo (Matplotlib Mode)...")
+
+    env = Drone3DEnv(render_mode="human", wind_scale=5.0, wind_speed=2.0)
+
+    model_path = "liquid_ppo_drone_final.zip"
+    if os.path.exists(model_path):
+        print(f"Loading trained model from {model_path}...")
+        model = PPO.load(model_path, env=env)
+    else:
+        print("No trained model found. Using untrained Liquid Brain.")
+        model = make_liquid_ppo(env, verbose=1)
+
+    print("\n=== DEMO STARTING ===")
+
+    obs, info = env.reset()
+
+    # Setup Plot
+    fig = plt.figure(figsize=(10, 8))
+    ax = fig.add_subplot(111, projection='3d')
+
+    from matplotlib.animation import FuncAnimation
+
+    def update(frame):
+        nonlocal obs, info
+        action, _ = model.predict(obs, deterministic=True)
+        obs, reward, term, trunc, info = env.step(action)
+
+        ax.clear()
+        ax.set_xlim(-20, 20)
+        ax.set_ylim(-20, 20)
+        ax.set_zlim(0, 20)
+        ax.set_xlabel('X')
+        ax.set_ylabel('Y')
+        ax.set_zlabel('Z')
+        ax.set_title(f'Neuro-Flyt 3D | Step: {frame}')
+
+        pos = obs[0:3]
+        wind = info.get("wind", np.zeros(3))
+
+        # Draw Drone
+        ax.scatter(pos[0], pos[1], pos[2], c='blue', s=100, label='Drone')
+
+        # Draw Wind Vector
+        ax.quiver(pos[0], pos[1], pos[2], wind[0], wind[1], wind[2], length=1.0, color='red', label='Wind Force')
+
+        # Draw Target
+        ax.scatter(0, 0, 10, c='green', marker='x', s=100, label='Target')
+
+        ax.legend()
+
+        if term or trunc:
+            obs, info = env.reset()
+
+    print("Generating Animation (demo.gif)...")
+    anim = FuncAnimation(fig, update, frames=200, interval=50)
+    anim.save('demo.gif', writer='pillow', fps=20)
+    print("Animation saved to demo.gif")
+
+    env.close()
+
+if __name__ == "__main__":
+    run_demo()
demo_interactive.py
ADDED
@@ -0,0 +1,124 @@
+import numpy as np
+import time
+import os
+import matplotlib.pyplot as plt
+from matplotlib.gridspec import GridSpec
+from env.drone_3d import Drone3DEnv
+from models.liquid_ppo import make_liquid_ppo
+from stable_baselines3 import PPO
+
+def run_interactive_demo():
+    print("Initializing Interactive Dashboard...")
+
+    env = Drone3DEnv(render_mode="human", wind_scale=5.0, wind_speed=2.0)
+
+    model_path = "liquid_ppo_drone_final.zip"
+    if os.path.exists(model_path):
+        print(f"Loading trained model from {model_path}...")
+        model = PPO.load(model_path, env=env)
+    else:
+        print("No trained model found. Using untrained Liquid Brain.")
+        model = make_liquid_ppo(env, verbose=1)
+
+    obs, info = env.reset()
+
+    # Setup Dashboard
+    plt.ion()
+    fig = plt.figure(figsize=(14, 8))
+    gs = GridSpec(2, 2, width_ratios=[2, 1])
+
+    # 3D View (Left, spanning both rows)
+    ax_3d = fig.add_subplot(gs[:, 0], projection='3d')
+
+    # Altitude Plot (Top Right)
+    ax_alt = fig.add_subplot(gs[0, 1])
+    ax_alt.set_title("Altitude (Z)")
+    ax_alt.set_ylim(0, 15)
+    ax_alt.set_xlim(0, 100)
+    line_alt, = ax_alt.plot([], [], 'b-')
+
+    # Wind Speed Plot (Bottom Right)
+    ax_wind = fig.add_subplot(gs[1, 1])
+    ax_wind.set_title("Wind Magnitude")
+    ax_wind.set_ylim(0, 10)
+    ax_wind.set_xlim(0, 100)
+    line_wind, = ax_wind.plot([], [], 'r-')
+
+    # Data Buffers
+    history_len = 100
+    alt_history = [10.0] * history_len
+    wind_history = [0.0] * history_len
+
+    print("\n=== DASHBOARD LIVE ===")
+    print("Close the window to exit.")
+
+    try:
+        step = 0
+        while True:
+            # Predict & Step
+            action, _ = model.predict(obs, deterministic=True)
+            obs, reward, term, trunc, info = env.step(action)
+
+            # Update Data
+            pos = obs[0:3]
+            wind = info.get("wind", np.zeros(3))
+            wind_mag = np.linalg.norm(wind)
+
+            alt_history.append(pos[2])
+            alt_history.pop(0)
+            wind_history.append(wind_mag)
+            wind_history.pop(0)
+
+            # --- Render 3D View ---
+            ax_3d.clear()
+            ax_3d.set_xlim(-20, 20)
+            ax_3d.set_ylim(-20, 20)
+            ax_3d.set_zlim(0, 20)
+            ax_3d.set_xlabel('X')
+            ax_3d.set_ylabel('Y')
+            ax_3d.set_zlabel('Z')
+            ax_3d.set_title(f'Neuro-Flyt 3D | Step: {step}')
+
+            # Drone
+            ax_3d.scatter(pos[0], pos[1], pos[2], c='blue', s=100, label='Drone')
+            # Wind Vector
+            ax_3d.quiver(pos[0], pos[1], pos[2], wind[0], wind[1], wind[2], length=1.0, color='red', label='Wind')
+
+            # Target
+            target = info.get("target", np.array([0, 0, 10.0]))
+            ax_3d.scatter(target[0], target[1], target[2], c='green', marker='x', s=100, label='Target')
+            ax_3d.legend(loc='upper left')
+
+            # --- Render Stats ---
+            line_alt.set_ydata(alt_history)
+            line_alt.set_xdata(range(history_len))
+
+            line_wind.set_ydata(wind_history)
+            line_wind.set_xdata(range(history_len))
+
+            # Stats Text
+            stats = f"Alt: {pos[2]:.2f}m\nWind: {wind_mag:.2f} N\nDrift: {np.linalg.norm(pos[:2]):.2f}m"
+            ax_3d.text2D(0.05, 0.95, stats, transform=ax_3d.transAxes, fontsize=12, bbox=dict(facecolor='white', alpha=0.7))
+
+            plt.draw()
+            plt.pause(0.01)
+
+            if term or trunc:
+                obs, info = env.reset()
+
+            step += 1
+
+            # Check if window is closed
+            if not plt.fignum_exists(fig.number):
+                break
+
+    except KeyboardInterrupt:
+        print("Interrupted.")
+    except Exception as e:
+        print(f"Error: {e}")
+    finally:
+        plt.close()
+        env.close()
+
+if __name__ == "__main__":
+    run_interactive_demo()
demo_log.txt
ADDED
@@ -0,0 +1,43 @@
+Initializing Project Neuro-Flyt 3D Demo (Matplotlib Mode)...
+No trained model found. Using untrained Liquid Brain.
+Using cpu device
+Wrapping the env with a `Monitor` wrapper
+Wrapping the env in a DummyVecEnv.
+Traceback (most recent call last):
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 72, in <module>
+    run_demo()
+    ~~~~~~~~^^
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 21, in run_demo
+    model = make_liquid_ppo(env, verbose=1)
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/models/liquid_ppo.py", line 67, in make_liquid_ppo
+    model = PPO(
+        "MlpPolicy",
+    ...<9 lines>...
+        clip_range=0.2,
+    )
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/ppo/ppo.py", line 171, in __init__
+    self._setup_model()
+    ~~~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/ppo/ppo.py", line 174, in _setup_model
+    super()._setup_model()
+    ~~~~~~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 135, in _setup_model
+    self.policy = self.policy_class(  # type: ignore[assignment]
+    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        self.observation_space, self.action_space, self.lr_schedule, use_sde=self.use_sde, **self.policy_kwargs
+        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    )
+    ^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 507, in __init__
+    self.features_extractor = self.make_features_extractor()
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 120, in make_features_extractor
+    return self.features_extractor_class(self.observation_space, **self.features_extractor_kwargs)
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/models/liquid_ppo.py", line 18, in __init__
+    self.features_dim = features_dim
+    ^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 2071, in __setattr__
+    super().__setattr__(name, value)
+    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
+AttributeError: property 'features_dim' of 'LTCFeatureExtractor' object has no setter
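This first failure is Stable-Baselines3's `BaseFeaturesExtractor` exposing `features_dim` as a read-only property: the fix is to pass the value to the parent constructor instead of assigning it. A minimal sketch of a corrected extractor skeleton — the real `LTCFeatureExtractor` in `models/liquid_ppo.py` is not shown in this view, so the shape below is assumed:

```python
import gymnasium as gym
from ncps.torch import LTC
from ncps.wirings import AutoNCP
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class LTCFeatureExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 32):
        # features_dim is a read-only property on BaseFeaturesExtractor, so it
        # must go through super().__init__ rather than self.features_dim = ...
        super().__init__(observation_space, features_dim)
        obs_dim = observation_space.shape[0]
        # AutoNCP requires output_size < units - 2 (see demo_log_2.txt below),
        # so the wiring gets more units than outputs.
        wiring = AutoNCP(units=features_dim + 16, output_size=features_dim)
        self.ltc = LTC(obs_dim, wiring)
```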
demo_log_2.txt
ADDED
@@ -0,0 +1,43 @@
+Initializing Project Neuro-Flyt 3D Demo (Matplotlib Mode)...
+No trained model found. Using untrained Liquid Brain.
+Using cpu device
+Wrapping the env with a `Monitor` wrapper
+Wrapping the env in a DummyVecEnv.
+Traceback (most recent call last):
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 72, in <module>
+    run_demo()
+    ~~~~~~~~^^
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 21, in run_demo
+    model = make_liquid_ppo(env, verbose=1)
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/models/liquid_ppo.py", line 67, in make_liquid_ppo
+    model = PPO(
+        "MlpPolicy",
+    ...<9 lines>...
+        clip_range=0.2,
+    )
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/ppo/ppo.py", line 171, in __init__
+    self._setup_model()
+    ~~~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/ppo/ppo.py", line 174, in _setup_model
+    super()._setup_model()
+    ~~~~~~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 135, in _setup_model
+    self.policy = self.policy_class(  # type: ignore[assignment]
+    ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        self.observation_space, self.action_space, self.lr_schedule, use_sde=self.use_sde, **self.policy_kwargs
+        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    )
+    ^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 507, in __init__
+    self.features_extractor = self.make_features_extractor()
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 120, in make_features_extractor
+    return self.features_extractor_class(self.observation_space, **self.features_extractor_kwargs)
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/models/liquid_ppo.py", line 22, in __init__
+    wiring = AutoNCP(features_dim, output_size=features_dim)
+  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/wirings/wirings.py", line 622, in __init__
+    raise ValueError(
+        f"Output size must be less than the number of units-2 (given {units} units, {output_size} output size)"
+    )
+ValueError: Output size must be less than the number of units-2 (given 32 units, 32 output size)
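This second failure is the `AutoNCP` wiring's constraint: it reserves neurons for its internal inter/command layers, so `output_size` must be strictly less than `units - 2`. A quick illustration with the values from the log:

```python
from ncps.wirings import AutoNCP

# Invalid: output_size == units, which triggers the ValueError above.
try:
    AutoNCP(units=32, output_size=32)
except ValueError as e:
    print(e)

# Valid: 48 units leaves 16 neurons for the wiring's internal layers,
# matching the 48-unit state that appears in demo_log_3.txt.
wiring = AutoNCP(units=48, output_size=32)
```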
demo_log_3.txt
ADDED
@@ -0,0 +1,87 @@
+Initializing Project Neuro-Flyt 3D Demo (Matplotlib Mode)...
+No trained model found. Using untrained Liquid Brain.
+Using cpu device
+Wrapping the env with a `Monitor` wrapper
+Wrapping the env in a DummyVecEnv.
+
+=== DEMO STARTING ===
+Generating Animation (demo.gif)...
+Traceback (most recent call last):
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 224, in saving
+    yield self
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1109, in save
+    anim._init_draw()  # Clear the initial frame
+    ~~~~~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1770, in _init_draw
+    self._draw_frame(frame_data)
+    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1789, in _draw_frame
+    self._drawn_artists = self._func(framedata, *self._args)
+    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 35, in update
+    action, _ = model.predict(obs, deterministic=True)
+    ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/base_class.py", line 557, in predict
+    return self.policy.predict(observation, state, episode_start, deterministic)
+    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 368, in predict
+    actions = self._predict(obs_tensor, deterministic=deterministic)
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 717, in _predict
+    return self.get_distribution(observation).get_actions(deterministic=deterministic)
+    ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 750, in get_distribution
+    features = super().extract_features(obs, self.pi_features_extractor)
+  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 131, in extract_features
+    return features_extractor(preprocessed_obs)
+  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
+    return self._call_impl(*args, **kwargs)
+    ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
+    return forward_call(*args, **kwargs)
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/models/liquid_ppo.py", line 54, in forward
+    output, self.hx = self.ltc(observations, self.hx)
+    ~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
+    return self._call_impl(*args, **kwargs)
+    ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
+    return forward_call(*args, **kwargs)
+  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc.py", line 185, in forward
+    h_out, h_state = self.rnn_cell.forward(inputs, h_state, ts)
+    ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc_cell.py", line 282, in forward
+    next_state = self._ode_solver(inputs, states, elapsed_time)
+  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc_cell.py", line 230, in _ode_solver
+    w_activation = w_param * self._sigmoid(
+    ~~~~~~~~~~~~~^
+        v_pre, self._params["mu"], self._params["sigma"]
+        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+    )
+    ^
+  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc_cell.py", line 199, in _sigmoid
+    mues = v_pre - mu
+    ~~~~~~^~~~
+RuntimeError: The size of tensor a (32) must match the size of tensor b (48) at non-singleton dimension 1
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 72, in <module>
+    run_demo()
+    ~~~~~~~~^^
+  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 66, in run_demo
+    anim.save('demo.gif', writer='pillow', fps=20)
+    ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1098, in save
+    with (writer.saving(self._fig, filename, dpi),
+    ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/usr/lib64/python3.14/contextlib.py", line 162, in __exit__
+    self.gen.throw(value)
+    ~~~~~~~~~~~~~~^^^^^^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 226, in saving
+    self.finish()
+    ~~~~~~~~~~~^^
+  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 506, in finish
+    self._frames[0].save(
+    ~~~~~~~~~~~~^^^
+IndexError: list index out of range
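The 32-vs-48 mismatch here points at a stale recurrent state: `models/liquid_ppo.py` line 54 feeds a cached `self.hx` into the LTC, and once the wiring grew to 48 units the old 32-wide hidden state no longer fits. One plausible guard, assuming the extractor keeps `self.hx` as a `(batch, state_size)` tensor and that ncps' `LTC` exposes `state_size` (a sketch, not the commit's actual fix):

```python
import torch

def forward(self, observations: torch.Tensor) -> torch.Tensor:
    # Rebuild the hidden state whenever its shape no longer matches the
    # incoming batch or the LTC state size (e.g. after a wiring change).
    if (self.hx is None
            or self.hx.shape[0] != observations.shape[0]
            or self.hx.shape[1] != self.ltc.state_size):
        self.hx = torch.zeros(
            observations.shape[0], self.ltc.state_size,
            device=observations.device,
        )
    output, self.hx = self.ltc(observations, self.hx)
    return output
```

During training, `self.hx` would also need to be detached between updates so gradients do not flow through stale rollouts.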
demo_log_4.txt
ADDED
|
@@ -0,0 +1,87 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
Initializing Project Neuro-Flyt 3D Demo (Matplotlib Mode)...
|
| 2 |
+
No trained model found. Using untrained Liquid Brain.
|
| 3 |
+
Using cpu device
|
| 4 |
+
Wrapping the env with a `Monitor` wrapper
|
| 5 |
+
Wrapping the env in a DummyVecEnv.
|
| 6 |
+
|
| 7 |
+
=== DEMO STARTING ===
|
| 8 |
+
Generating Animation (demo.gif)...
|
| 9 |
+
Traceback (most recent call last):
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 224, in saving
    yield self
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1109, in save
    anim._init_draw()  # Clear the initial frame
    ~~~~~~~~~~~~~~~^^
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1770, in _init_draw
    self._draw_frame(frame_data)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1789, in _draw_frame
    self._drawn_artists = self._func(framedata, *self._args)
                          ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 35, in update
    action, _ = model.predict(obs, deterministic=True)
                ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/base_class.py", line 557, in predict
    return self.policy.predict(observation, state, episode_start, deterministic)
           ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 368, in predict
    actions = self._predict(obs_tensor, deterministic=deterministic)
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 717, in _predict
    return self.get_distribution(observation).get_actions(deterministic=deterministic)
           ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 750, in get_distribution
    features = super().extract_features(obs, self.pi_features_extractor)
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/policies.py", line 131, in extract_features
    return features_extractor(preprocessed_obs)
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/models/liquid_ppo.py", line 54, in forward
    output, self.hx = self.ltc(observations, self.hx)
                      ~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc.py", line 185, in forward
    h_out, h_state = self.rnn_cell.forward(inputs, h_state, ts)
                     ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc_cell.py", line 282, in forward
    next_state = self._ode_solver(inputs, states, elapsed_time)
  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc_cell.py", line 230, in _ode_solver
    w_activation = w_param * self._sigmoid(
                   ~~~~~~~~~~~~~^
        v_pre, self._params["mu"], self._params["sigma"]
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/ylop/.local/lib/python3.14/site-packages/ncps/torch/ltc_cell.py", line 199, in _sigmoid
    mues = v_pre - mu
           ~~~~~~^~~~
RuntimeError: The size of tensor a (32) must match the size of tensor b (48) at non-singleton dimension 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 72, in <module>
    run_demo()
    ~~~~~~~~^^
  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/demo_3d.py", line 66, in run_demo
    anim.save('demo.gif', writer='pillow', fps=20)
    ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 1098, in save
    with (writer.saving(self._fig, filename, dpi),
          ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.14/contextlib.py", line 162, in __exit__
    self.gen.throw(value)
    ~~~~~~~~~~~~~~^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 226, in saving
    self.finish()
    ~~~~~~~~~~~^^
  File "/home/ylop/.local/lib/python3.14/site-packages/matplotlib/animation.py", line 506, in finish
    self._frames[0].save(
    ~~~~~~~~~~~~^^^
IndexError: list index out of range
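The chained failure above has two layers: the writer's IndexError is only a symptom (no frame was ever drawn), while the root cause is the tensor-size mismatch (32 vs. 48) inside the LTC cell. That pattern usually means the features extractor is reusing a cached hidden state whose shape no longer matches what the cell is being fed. A minimal defensive sketch, assuming a wrapper like the repo's liquid_ppo.py that caches self.hx between calls (the class name here is hypothetical, not the repo's code):

import torch

class StatefulLTC(torch.nn.Module):
    """Keeps an LTC hidden state across calls, but resets it whenever its
    batch dimension can no longer match the incoming observations (e.g.
    switching from training batches to single-observation predict calls)."""

    def __init__(self, ltc):
        super().__init__()
        self.ltc = ltc   # an ncps.torch.LTC instance, as in liquid_ppo.py
        self.hx = None   # persistent hidden state

    def forward(self, observations):
        # A stale hx makes the ODE solver broadcast mismatched tensors,
        # as in the trace above.
        if self.hx is not None and self.hx.shape[0] != observations.shape[0]:
            self.hx = None
        out, self.hx = self.ltc(observations, self.hx)
        return out

Note the guard only covers batch-size drift: a unit-count mismatch like the 32-vs-48 here can also mean the checkpoint was trained with a different hidden width than the freshly built policy, which no runtime reset can fix.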
demo_log_5.txt
ADDED
@@ -0,0 +1,9 @@
Initializing Project Neuro-Flyt 3D Demo (Matplotlib Mode)...
No trained model found. Using untrained Liquid Brain.
Using cpu device
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.

=== DEMO STARTING ===
Generating Animation (demo.gif)...
Animation saved to demo.gif
deploy_log.txt
ADDED
@@ -0,0 +1,254 @@
Cloning Space...
Cloning into 'hf_deploy'...
Copying files...
sending incremental file list
.gitignore
    536 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=76/78)
Dockerfile
    622 100% 607.42kB/s 0:00:00 (xfr#2, to-chk=75/78)
README.md
    924 100% 902.34kB/s 0:00:00 (xfr#3, to-chk=74/78)
README_HF.md
    3,598 100% 3.43MB/s 0:00:00 (xfr#4, to-chk=73/78)
debug_imports.py
    620 100% 605.47kB/s 0:00:00 (xfr#5, to-chk=72/78)
demo.gif
    692,490 100% 330.20MB/s 0:00:00 (xfr#6, to-chk=71/78)
demo_3d.py
    2,198 100% 715.49kB/s 0:00:00 (xfr#7, to-chk=70/78)
demo_interactive.py
    4,050 100% 1.29MB/s 0:00:00 (xfr#8, to-chk=69/78)
demo_log.txt
    2,580 100% 839.84kB/s 0:00:00 (xfr#9, to-chk=68/78)
demo_log_2.txt
    2,652 100% 863.28kB/s 0:00:00 (xfr#10, to-chk=67/78)
demo_log_3.txt
    5,276 100% 1.68MB/s 0:00:00 (xfr#11, to-chk=66/78)
demo_log_4.txt
    5,276 100% 1.68MB/s 0:00:00 (xfr#12, to-chk=65/78)
demo_log_5.txt
    295 100% 96.03kB/s 0:00:00 (xfr#13, to-chk=64/78)
deploy_log.txt
    1,740 100% 566.41kB/s 0:00:00 (xfr#14, to-chk=63/78)
install_log.txt
    14,311 100% 4.55MB/s 0:00:00 (xfr#15, to-chk=62/78)
interactive_log.txt
    236 100% 76.82kB/s 0:00:00 (xfr#16, to-chk=61/78)
main.py
    4,348 100% 1.38MB/s 0:00:00 (xfr#17, to-chk=60/78)
output.txt
    281 100% 91.47kB/s 0:00:00 (xfr#18, to-chk=59/78)
pip_log.txt
    5,014 100% 1.59MB/s 0:00:00 (xfr#19, to-chk=58/78)
pybullet_bin_log.txt
    223 100% 72.59kB/s 0:00:00 (xfr#20, to-chk=57/78)
pybullet_log.txt
    129,509 100% 30.88MB/s 0:00:00 (xfr#21, to-chk=56/78)
pygame_log.txt
    219 100% 53.47kB/s 0:00:00 (xfr#22, to-chk=55/78)
requirements.txt
    194 100% 47.36kB/s 0:00:00 (xfr#23, to-chk=54/78)
run_demo.sh
    178 100% 43.46kB/s 0:00:00 (xfr#24, to-chk=53/78)
setup.sh
    978 100% 238.77kB/s 0:00:00 (xfr#25, to-chk=52/78)
test_log.txt
    964 100% 235.35kB/s 0:00:00 (xfr#26, to-chk=51/78)
test_physics.py
    1,311 100% 320.07kB/s 0:00:00 (xfr#27, to-chk=50/78)
test_random_1.txt
    318 100% 77.64kB/s 0:00:00 (xfr#28, to-chk=49/78)
test_random_2.txt
    315 100% 76.90kB/s 0:00:00 (xfr#29, to-chk=48/78)
test_result_final.txt
    263 100% 64.21kB/s 0:00:00 (xfr#30, to-chk=47/78)
test_success.txt
    263 100% 64.21kB/s 0:00:00 (xfr#31, to-chk=46/78)
train.py
    1,205 100% 294.19kB/s 0:00:00 (xfr#32, to-chk=45/78)
train_hf.py
    2,277 100% 555.91kB/s 0:00:00 (xfr#33, to-chk=44/78)
train_log_500k.txt
    83,713 100% 19.96MB/s 0:00:00 (xfr#34, to-chk=43/78)
train_log_full.txt
    3,044 100% 743.16kB/s 0:00:00 (xfr#35, to-chk=42/78)
train_log_retry.txt
    14,709 100% 3.51MB/s 0:00:00 (xfr#36, to-chk=41/78)
checkpoints/
checkpoints/liquid_ppo_drone_100000_steps.zip
    361,759 100% 57.50MB/s 0:00:00 (xfr#37, to-chk=33/78)
checkpoints/liquid_ppo_drone_10000_steps.zip
    358,037 100% 48.78MB/s 0:00:00 (xfr#38, to-chk=32/78)
checkpoints/liquid_ppo_drone_110000_steps.zip
    361,803 100% 38.34MB/s 0:00:00 (xfr#39, to-chk=31/78)
checkpoints/liquid_ppo_drone_120000_steps.zip
    361,803 100% 34.50MB/s 0:00:00 (xfr#40, to-chk=30/78)
checkpoints/liquid_ppo_drone_130000_steps.zip
    361,803 100% 28.75MB/s 0:00:00 (xfr#41, to-chk=29/78)
checkpoints/liquid_ppo_drone_140000_steps.zip
    361,803 100% 24.65MB/s 0:00:00 (xfr#42, to-chk=28/78)
checkpoints/liquid_ppo_drone_150000_steps.zip
    361,813 100% 23.00MB/s 0:00:00 (xfr#43, to-chk=27/78)
checkpoints/liquid_ppo_drone_160000_steps.zip
    361,803 100% 20.30MB/s 0:00:00 (xfr#44, to-chk=26/78)
checkpoints/liquid_ppo_drone_170000_steps.zip
    361,803 100% 18.16MB/s 0:00:00 (xfr#45, to-chk=25/78)
checkpoints/liquid_ppo_drone_180000_steps.zip
    361,803 100% 16.43MB/s 0:00:00 (xfr#46, to-chk=24/78)
checkpoints/liquid_ppo_drone_20000_steps.zip
    358,453 100% 14.86MB/s 0:00:00 (xfr#47, to-chk=23/78)
checkpoints/liquid_ppo_drone_30000_steps.zip
    358,866 100% 14.26MB/s 0:00:00 (xfr#48, to-chk=22/78)
checkpoints/liquid_ppo_drone_40000_steps.zip
    359,278 100% 13.18MB/s 0:00:00 (xfr#49, to-chk=21/78)
checkpoints/liquid_ppo_drone_50000_steps.zip
    359,694 100% 11.83MB/s 0:00:00 (xfr#50, to-chk=20/78)
checkpoints/liquid_ppo_drone_60000_steps.zip
    360,106 100% 11.08MB/s 0:00:00 (xfr#51, to-chk=19/78)
checkpoints/liquid_ppo_drone_70000_steps.zip
    360,518 100% 10.11MB/s 0:00:00 (xfr#52, to-chk=18/78)
checkpoints/liquid_ppo_drone_80000_steps.zip
    360,934 100% 9.30MB/s 0:00:00 (xfr#53, to-chk=17/78)
checkpoints/liquid_ppo_drone_90000_steps.zip
    361,356 100% 8.84MB/s 0:00:00 (xfr#54, to-chk=16/78)
demo/
demo/__init__.py
    39 100% 0.98kB/s 0:00:00 (xfr#55, to-chk=15/78)
demo/visualize_drone.py
    16,089 100% 402.87kB/s 0:00:00 (xfr#56, to-chk=14/78)
env/
env/__init__.py
    62 100% 1.55kB/s 0:00:00 (xfr#57, to-chk=13/78)
env/drone_3d.py
    5,304 100% 132.81kB/s 0:00:00 (xfr#58, to-chk=12/78)
env/drone_env.py
    10,099 100% 252.88kB/s 0:00:00 (xfr#59, to-chk=11/78)
eval/
eval/__init__.py
    41 100% 1.03kB/s 0:00:00 (xfr#60, to-chk=10/78)
eval/eval_liquid_policy.py
    5,742 100% 143.78kB/s 0:00:00 (xfr#61, to-chk=9/78)
eval/eval_mlp_baseline.py
    5,570 100% 139.47kB/s 0:00:00 (xfr#62, to-chk=8/78)
models/
models/__init__.py
    44 100% 1.10kB/s 0:00:00 (xfr#63, to-chk=7/78)
models/liquid_cell.py
    3,168 100% 79.33kB/s 0:00:00 (xfr#64, to-chk=6/78)
models/liquid_policy.py
    2,690 100% 67.36kB/s 0:00:00 (xfr#65, to-chk=5/78)
models/liquid_ppo.py
    4,141 100% 103.69kB/s 0:00:00 (xfr#66, to-chk=4/78)
train/
train/__init__.py
    39 100% 0.98kB/s 0:00:00 (xfr#67, to-chk=3/78)
train/train_liquid_ppo.py
    6,619 100% 165.74kB/s 0:00:00 (xfr#68, to-chk=2/78)
train/train_mlp_ppo.py
    4,442 100% 111.23kB/s 0:00:00 (xfr#69, to-chk=1/78)
utils/
utils/__init__.py
    38 100% 0.95kB/s 0:00:00 (xfr#70, to-chk=0/78)

sent 7,551,292 bytes received 1,390 bytes 15,105,364.00 bytes/sec
total size is 7,542,644 speedup is 1.00
[main 8ef522c] Deploy Neuro-Flyt 3D Training
 70 files changed, 9247 insertions(+), 10 deletions(-)
 create mode 100644 .gitignore
 create mode 100644 Dockerfile
 create mode 100644 README_HF.md
 create mode 100644 checkpoints/liquid_ppo_drone_100000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_10000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_110000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_120000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_130000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_140000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_150000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_160000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_170000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_180000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_20000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_30000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_40000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_50000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_60000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_70000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_80000_steps.zip
 create mode 100644 checkpoints/liquid_ppo_drone_90000_steps.zip
 create mode 100644 debug_imports.py
 create mode 100644 demo.gif
 create mode 100644 demo/__init__.py
 create mode 100644 demo/visualize_drone.py
 create mode 100644 demo_3d.py
 create mode 100644 demo_interactive.py
 create mode 100644 demo_log.txt
 create mode 100644 demo_log_2.txt
 create mode 100644 demo_log_3.txt
 create mode 100644 demo_log_4.txt
 create mode 100644 demo_log_5.txt
 create mode 100644 deploy_log.txt
 create mode 100644 env/__init__.py
 create mode 100644 env/drone_3d.py
 create mode 100644 env/drone_env.py
 create mode 100644 eval/__init__.py
 create mode 100644 eval/eval_liquid_policy.py
 create mode 100644 eval/eval_mlp_baseline.py
 create mode 100644 install_log.txt
 create mode 100644 interactive_log.txt
 create mode 100644 main.py
 create mode 100644 models/__init__.py
 create mode 100644 models/liquid_cell.py
 create mode 100644 models/liquid_policy.py
 create mode 100644 models/liquid_ppo.py
 create mode 100644 output.txt
 create mode 100644 pip_log.txt
 create mode 100644 pybullet_bin_log.txt
 create mode 100644 pybullet_log.txt
 create mode 100644 pygame_log.txt
 create mode 100644 requirements.txt
 create mode 100755 run_demo.sh
 create mode 100755 setup.sh
 create mode 100644 test_log.txt
 create mode 100644 test_physics.py
 create mode 100644 test_random_1.txt
 create mode 100644 test_random_2.txt
 create mode 100644 test_result_final.txt
 create mode 100644 test_success.txt
 create mode 100644 train.py
 create mode 100644 train/__init__.py
 create mode 100644 train/train_liquid_ppo.py
 create mode 100644 train/train_mlp_ppo.py
 create mode 100644 train_hf.py
 create mode 100644 train_log_500k.txt
 create mode 100644 train_log_full.txt
 create mode 100644 train_log_retry.txt
 create mode 100644 utils/__init__.py
remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains binary files.
remote: Please use https://huggingface.co/docs/hub/xet to store binary files.
remote: See also: https://huggingface.co/docs/hub/xet/using-xet-storage#git
remote:
remote: Offending files:
remote:  - checkpoints/liquid_ppo_drone_100000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_10000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_110000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_120000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_130000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_140000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_150000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_160000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_170000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_180000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_20000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_30000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_40000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_50000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_60000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_70000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_80000_steps.zip (ref: refs/heads/main)
remote:  - checkpoints/liquid_ppo_drone_90000_steps.zip (ref: refs/heads/main)
remote:  - demo.gif (ref: refs/heads/main)
remote: -------------------------------------------------------------------------
To https://huggingface.co/spaces/iteratehack/neuro-flyt-training
 ! [remote rejected] main -> main (pre-receive hook declined)
error: failed to push some refs to 'https://huggingface.co/spaces/iteratehack/neuro-flyt-training'
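The push fails at the pre-receive hook because the repo stores binaries (the checkpoint .zip files and demo.gif) as plain git objects. Rather than configuring Xet/LFS by hand in the clone, one workaround is to upload through huggingface_hub, which routes binary files to the Hub's storage backend automatically. A sketch, assuming a valid write token (the token string below is a placeholder):

from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # placeholder; use a real write token
api.upload_folder(
    folder_path="hf_deploy",                    # the local clone from the log above
    repo_id="iteratehack/neuro-flyt-training",  # Space named in the push URL
    repo_type="space",
    commit_message="Deploy Neuro-Flyt 3D Training",
)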
deploy_log_2.txt
ADDED
@@ -0,0 +1,30 @@
Cloning Space...
Cloning into 'hf_deploy'...
Copying files...
sending incremental file list
.gitignore
    536 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=57/59)
Dockerfile
    622 100% 607.42kB/s 0:00:00 (xfr#2, to-chk=56/59)
README.md
    924 100% 902.34kB/s 0:00:00 (xfr#3, to-chk=55/59)
README_HF.md
    3,598 100% 3.43MB/s 0:00:00 (xfr#4, to-chk=54/59)
debug_imports.py
    620 100% 605.47kB/s 0:00:00 (xfr#5, to-chk=53/59)
demo_3d.py
    2,198 100% 2.10MB/s 0:00:00 (xfr#6, to-chk=52/59)
demo_interactive.py
    4,050 100% 3.86MB/s 0:00:00 (xfr#7, to-chk=51/59)
demo_log.txt
    2,580 100% 2.46MB/s 0:00:00 (xfr#8, to-chk=50/59)
demo_log_2.txt
    2,652 100% 2.53MB/s 0:00:00 (xfr#9, to-chk=49/59)
demo_log_3.txt
    5,276 100% 5.03MB/s 0:00:00 (xfr#10, to-chk=48/59)
demo_log_4.txt
    5,276 100% 5.03MB/s 0:00:00 (xfr#11, to-chk=47/59)
demo_log_5.txt
    295 100% 288.09kB/s 0:00:00 (xfr#12, to-chk=46/59)
deploy_log.txt
    15,625 100% 14.90MB/s 0:00:00 (xfr#13, to-chk=45/59)
env/__init__.py
ADDED
@@ -0,0 +1,2 @@
"""Environment modules for custom gymnasium environments."""
env/drone_3d.py
ADDED
@@ -0,0 +1,143 @@
import numpy as np
import gymnasium as gym
from opensimplex import OpenSimplex
import time

# Try to import PyFlyt, else fall back
try:
    from PyFlyt.gym_envs.quadx_envs.quadx_waypoints_env import QuadXWaypointsEnv
    PYFLYT_AVAILABLE = True
except ImportError:
    PYFLYT_AVAILABLE = False

    # Create a dummy base class if PyFlyt is missing
    class QuadXWaypointsEnv(gym.Env):
        metadata = {"render_modes": ["human", "rgb_array"]}

        def __init__(self, render_mode=None):
            self.render_mode = render_mode


class WindField:
    def __init__(self, seed=42, scale=0.1, speed=1.0):
        self.noise = OpenSimplex(seed=seed)
        self.scale = scale
        self.speed = speed
        self.time_offset = 0.0

    def get_wind(self, x, y, z, dt):
        # 4D simplex noise: three spatially offset channels give the
        # u, v, w wind components, with time as the fourth dimension.
        self.time_offset += dt * self.speed
        u = self.noise.noise4(x * self.scale, y * self.scale, z * self.scale, self.time_offset)
        v = self.noise.noise4(x * self.scale + 100, y * self.scale + 100, z * self.scale, self.time_offset)
        w = self.noise.noise4(x * self.scale + 200, y * self.scale + 200, z * self.scale, self.time_offset)
        return np.array([u, v, w])


class Drone3DEnv(gym.Env):
    def __init__(self, render_mode=None, wind_scale=10.0, wind_speed=1.0):
        super().__init__()
        self.render_mode = render_mode
        self.wind_field = WindField(scale=0.05, speed=wind_speed)
        self.wind_strength = wind_scale

        # Define spaces
        # Obs: [x, y, z, roll, pitch, yaw, u, v, w, p, q, r]
        self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(12,), dtype=np.float32)
        # Action: [motor1, motor2, motor3, motor4] or [thrust, roll, pitch, yaw];
        # for this mock we treat it as a simple [thrust_x, thrust_y, thrust_z, yaw].
        self.action_space = gym.spaces.Box(low=-1, high=1, shape=(4,), dtype=np.float32)

        self.state = np.zeros(12)
        self.dt = 0.05
        self.step_count = 0
        self.max_steps = 1000

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.zeros(12)
        self.state[2] = 10.0  # Start at 10 m height
        self.step_count = 0

        # Randomize target: x, y in [-5, 5], z in [5, 15]
        self.target = np.random.uniform(low=[-5, -5, 5], high=[5, 5, 15])

        # Randomize wind by re-initializing the noise with a random seed
        new_seed = np.random.randint(0, 10000)
        self.wind_field = WindField(seed=new_seed, scale=0.05, speed=self.wind_field.speed)

        return self.state.astype(np.float32), {}

    def step(self, action):
        self.step_count += 1

        # Unpack state (views into self.state, so in-place updates stick)
        pos = self.state[0:3]
        vel = self.state[6:9]

        # Get wind
        raw_wind = self.wind_field.get_wind(pos[0], pos[1], pos[2], self.dt)
        wind_force = raw_wind * self.wind_strength

        # Simple kinematics (double integrator): the action is roughly an
        # acceleration command and needs enough authority to fight gravity
        # (9.81 m/s^2) plus wind. Real quadrotor thrust is positive-only and
        # "QuadX" control is usually roll/pitch/yaw/thrust, but here we
        # abstract to a 3D force/acceleration command and map the action
        # range [-1, 1] to [-15, 15] m/s^2.
        accel = action[:3] * 15.0

        # Gravity
        gravity = np.array([0, 0, -9.81])

        # Total force = control + wind + gravity.
        # Note: there is NO "anti-gravity" offset. The agent MUST output
        # positive Z acceleration to hover; with action[2] = 0 it falls.
        total_accel = accel + wind_force + gravity

        # Update state
        vel += total_accel * self.dt
        pos += vel * self.dt

        # Floor collision
        if pos[2] < 0:
            pos[2] = 0
            vel[2] = 0  # Crash stop

        # Drag (damping)
        vel *= 0.95

        self.state[0:3] = pos
        self.state[6:9] = vel

        # Reward: stay close to the target
        dist = np.linalg.norm(pos - self.target)

        # Smoothness: penalty for high velocity (instability)
        vel_mag = np.linalg.norm(vel)

        # Components:
        # 1. Distance reward: higher is better (closer to 0)
        r_dist = -dist

        # 2. Stability penalty: penalize extreme speed (some speed is still
        #    needed to reach the target, so keep the coefficient small)
        r_vel = -0.01 * vel_mag

        # 3. Survival reward: bonus for not crashing
        r_survive = 0.1

        reward = r_dist + r_vel + r_survive

        # No hard termination: let the drone crash and stay on the floor
        term = False
        trunc = self.step_count >= self.max_steps

        info = {"wind": wind_force, "target": self.target}

        return self.state.astype(np.float32), reward, term, trunc, info

    def render(self):
        # Rendering is handled in the demo script using matplotlib
        pass
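For a quick feel of what WindField produces, the snippet below (not part of the commit; it assumes the module is importable as env.drone_3d) samples the field at a fixed point over simulated time. Each component comes from 4D simplex noise, so consecutive samples vary smoothly within roughly [-1, 1] before wind_strength scaling:

import numpy as np
from env.drone_3d import WindField

field = WindField(seed=7, scale=0.05, speed=1.0)
# Sample the gust vector at the spawn point (0, 0, 10 m) for 10 seconds.
samples = np.array([field.get_wind(0.0, 0.0, 10.0, dt=0.05) for _ in range(200)])
print(samples.shape)                 # (200, 3): u, v, w components
print(samples.min(), samples.max())  # roughly within [-1, 1]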
env/drone_env.py
ADDED
@@ -0,0 +1,286 @@
"""
A 2D drone environment with dynamic wind forces for reinforcement learning.
The drone can apply discrete thrust actions while being affected by smoothly varying wind.
The goal is to navigate and survive within the bounded world.
"""

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from typing import Optional, Tuple, Dict, Any


# Constants
DT = 0.1                    # Time step
MAX_VEL = 2.0               # Maximum velocity magnitude
WIND_MAX = 2.0              # Maximum wind magnitude
WIND_SMOOTHING = 0.05       # Wind interpolation rate toward target
WIND_TARGET_INTERVAL = 50   # Steps between sampling new wind target
MAX_STEPS = 500             # Maximum episode length
POSITION_MIN = 0.0          # Minimum position (x, y)
POSITION_MAX = 1.0          # Maximum position (x, y)
THRUST = 0.25               # Thrust magnitude per action (slightly higher for control authority)

# Target zone (box) constants
TARGET_X_MIN = 0.7          # Target box left edge
TARGET_X_MAX = 0.9          # Target box right edge
TARGET_Y_MIN = 0.3          # Target box bottom edge
TARGET_Y_MAX = 0.7          # Target box top edge
TARGET_REWARD = 2.0         # Bonus reward for being in target zone
TARGET_SPAWN_DELAY = 50     # Steps before target zone appears (after wind starts)

# Stabilization and shaping
DRAG_COEFF = 0.3            # Linear velocity drag coefficient
SPEED_PENALTY_COEFF = 0.05  # Penalize high speeds to encourage smooth control
EDGE_MARGIN = 0.06          # Margin near boundaries where penalty increases
EDGE_PENALTY_COEFF = 0.5    # Strength of boundary proximity penalty


class DroneWindEnv(gym.Env):
    """
    A 2D drone environment with dynamic wind.

    Observation: [x, y, vx, vy, wind_x, wind_y]
    Action: Discrete(5) - 0: no thrust, 1: up, 2: down, 3: left, 4: right
    """

    metadata = {"render_modes": ["human", "rgb_array"], "render_fps": 10}

    def __init__(self):
        super().__init__()

        # Observation space: [x, y, vx, vy, wind_x, wind_y]
        self.observation_space = spaces.Box(
            low=np.array([POSITION_MIN, POSITION_MIN, -MAX_VEL, -MAX_VEL, -WIND_MAX, -WIND_MAX], dtype=np.float32),
            high=np.array([POSITION_MAX, POSITION_MAX, MAX_VEL, MAX_VEL, WIND_MAX, WIND_MAX], dtype=np.float32),
            dtype=np.float32
        )

        # Action space: 5 discrete thrust directions
        self.action_space = spaces.Discrete(5)

        # Internal state
        self.x: float = 0.0
        self.y: float = 0.0
        self.vx: float = 0.0
        self.vy: float = 0.0
        self.wind_x: float = 0.0
        self.wind_y: float = 0.0
        self.wind_target_x: float = 0.0
        self.wind_target_y: float = 0.0
        self.step_count: int = 0

    def reset(
        self,
        *,
        seed: Optional[int] = None,
        options: Optional[Dict[str, Any]] = None
    ) -> Tuple[np.ndarray, Dict[str, Any]]:
        """
        Reset the environment to initial state.

        Args:
            seed: Optional random seed
            options: Optional reset options

        Returns:
            observation: Initial observation array
            info: Empty info dict
        """
        # Always call super().reset to ensure seeding and np_random are initialized
        super().reset(seed=seed)

        # Initialize state
        self.x = 0.1
        self.y = 0.5
        self.vx = 0.0
        self.vy = 0.0
        self.wind_x = 0.0
        self.wind_y = 0.0
        self.wind_target_x = 0.0
        self.wind_target_y = 0.0
        self.step_count = 0

        # Build observation
        obs = self._get_observation()
        info = {}

        return obs, info

    def step(self, action: int) -> Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]:
        """
        Execute one environment step.

        Args:
            action: Discrete action (0-4)

        Returns:
            observation: New observation array
            reward: Reward for this step
            terminated: Whether episode ended due to boundary crash
            truncated: Whether episode ended due to max steps
            info: Info dict with step_count
        """
        # Increment step count
        self.step_count += 1

        # Update wind model
        self._update_wind()

        # Apply physics update
        self._apply_physics(action)

        # Compute reward
        base_reward = 1.0  # Survival reward

        # Check if drone is in target zone (only if target has spawned)
        target_spawned = self.step_count >= TARGET_SPAWN_DELAY
        in_target = False
        if target_spawned:
            in_target = (
                TARGET_X_MIN <= self.x <= TARGET_X_MAX and
                TARGET_Y_MIN <= self.y <= TARGET_Y_MAX
            )
        target_bonus = TARGET_REWARD if in_target else 0.0

        # Speed penalty (discourage excessive velocity)
        speed_sq = self.vx * self.vx + self.vy * self.vy
        speed_penalty = -SPEED_PENALTY_COEFF * float(speed_sq)

        # Boundary proximity penalty (discourage hovering near walls)
        dist_left = self.x - POSITION_MIN
        dist_right = POSITION_MAX - self.x
        dist_bottom = self.y - POSITION_MIN
        dist_top = POSITION_MAX - self.y
        min_dist = min(dist_left, dist_right, dist_bottom, dist_top)
        edge_penalty = 0.0
        if min_dist < EDGE_MARGIN:
            edge_penalty = -EDGE_PENALTY_COEFF * (EDGE_MARGIN - float(min_dist)) / EDGE_MARGIN

        reward = base_reward + target_bonus + speed_penalty + edge_penalty

        # Check termination (boundary crash)
        terminated = (
            self.x <= POSITION_MIN or
            self.x >= POSITION_MAX or
            self.y <= POSITION_MIN or
            self.y >= POSITION_MAX
        )

        # Check truncation (max steps)
        truncated = self.step_count >= MAX_STEPS

        # Build observation
        obs = self._get_observation()

        # Reuse the target-zone flags computed above for the info dict
        info = {"step_count": self.step_count, "in_target": in_target, "target_spawned": target_spawned}

        return obs, reward, terminated, truncated, info

    def _update_wind(self) -> None:
        """Update wind by smoothly moving toward target, resampling target periodically."""
        # Resample wind target every WIND_TARGET_INTERVAL steps
        if self.step_count % WIND_TARGET_INTERVAL == 0:
            self.wind_target_x = self.np_random.uniform(-WIND_MAX, WIND_MAX)
            self.wind_target_y = self.np_random.uniform(-WIND_MAX, WIND_MAX)

        # Smoothly interpolate wind toward target
        self.wind_x += WIND_SMOOTHING * (self.wind_target_x - self.wind_x)
        self.wind_y += WIND_SMOOTHING * (self.wind_target_y - self.wind_y)

        # Clamp wind to bounds
        self.wind_x = np.clip(self.wind_x, -WIND_MAX, WIND_MAX)
        self.wind_y = np.clip(self.wind_y, -WIND_MAX, WIND_MAX)

    def _apply_physics(self, action: int) -> None:
        """Apply physics update: convert action to thrust, update velocity and position."""
        # Convert action to thrust vector
        if action == 0:    # No thrust
            ax, ay = 0.0, 0.0
        elif action == 1:  # Thrust up
            ax, ay = 0.0, THRUST
        elif action == 2:  # Thrust down
            ax, ay = 0.0, -THRUST
        elif action == 3:  # Thrust left
            ax, ay = -THRUST, 0.0
        elif action == 4:  # Thrust right
            ax, ay = THRUST, 0.0
        else:
            raise ValueError(f"Invalid action: {action}. Must be in [0, 4]")

        # Update velocity with thrust and wind
        self.vx = self.vx + ax + self.wind_x * DT
        self.vy = self.vy + ay + self.wind_y * DT

        # Apply linear drag (proportional to velocity) for stability
        self.vx -= DRAG_COEFF * self.vx * DT
        self.vy -= DRAG_COEFF * self.vy * DT

        # Clamp velocity
        self.vx = np.clip(self.vx, -MAX_VEL, MAX_VEL)
        self.vy = np.clip(self.vy, -MAX_VEL, MAX_VEL)

        # Update position
        self.x = self.x + self.vx * DT
        self.y = self.y + self.vy * DT

        # Clamp position to bounds
        self.x = np.clip(self.x, POSITION_MIN, POSITION_MAX)
        self.y = np.clip(self.y, POSITION_MIN, POSITION_MAX)

    def _get_observation(self) -> np.ndarray:
        """Build observation array from current state."""
        return np.array(
            [self.x, self.y, self.vx, self.vy, self.wind_x, self.wind_y],
            dtype=np.float32
        )

    def render(self) -> None:
        """
        Render the environment state (stub implementation for Phase 1).
        Prints state to stdout.
        """
        print(
            f"Step {self.step_count}: "
            f"x={self.x:.2f}, y={self.y:.2f}, "
            f"vx={self.vx:.2f}, vy={self.vy:.2f}, "
            f"wind=({self.wind_x:.2f}, {self.wind_y:.2f})"
        )


def make_drone_env() -> DroneWindEnv:
    """Helper function to create a DroneWindEnv instance."""
    return DroneWindEnv()


if __name__ == "__main__":
    # Manual test block
    print("Testing DroneWindEnv...")
    print("=" * 60)

    env = make_drone_env()
    obs, info = env.reset(seed=42)
    print(f"Initial observation: {obs}")
    print()

    for t in range(200):
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        env.render()

        if terminated:
            print(f"\nEpisode terminated at step {t} (boundary crash)")
            break
        if truncated:
            print(f"\nEpisode truncated at step {t} (max steps reached)")
            break

    print("=" * 60)
    print("Test completed!")
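DroneWindEnv is a standard gymnasium.Env with a Discrete(5) action space, so it plugs straight into stable-baselines3. A minimal training sketch (not the repo's train scripts; the hyperparameters and save path are illustrative):

from stable_baselines3 import PPO
from env.drone_env import make_drone_env

env = make_drone_env()
# MlpPolicy is the plain feed-forward baseline; the liquid variant in this
# repo swaps in a custom LTC features extractor instead.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("models/mlp_baseline")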
eval/__init__.py
ADDED
@@ -0,0 +1,2 @@
"""Evaluation scripts and utilities."""
eval/eval_liquid_policy.py
ADDED
|
@@ -0,0 +1,177 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
Evaluate a trained PPO Liquid Neural Network policy on the DroneWindEnv environment.
|
| 3 |
+
|
| 4 |
+
This script loads a saved PPO model with liquid policy and runs evaluation episodes,
|
| 5 |
+
printing statistics about average reward and episode length.
|
| 6 |
+
"""
|
| 7 |
+
|
| 8 |
+
import os
|
| 9 |
+
import sys
|
| 10 |
+
import argparse
|
| 11 |
+
import numpy as np
|
| 12 |
+
from stable_baselines3 import PPO
|
| 13 |
+
|
| 14 |
+
# Add project root to path
|
| 15 |
+
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
| 16 |
+
|
| 17 |
+
from env.drone_env import DroneWindEnv
|
| 18 |
+
|
| 19 |
+
|
+def main():
+    """Main evaluation function."""
+    parser = argparse.ArgumentParser(description="Evaluate PPO Liquid NN agent on DroneWindEnv")
+    parser.add_argument(
+        "--model-path",
+        type=str,
+        default="models/liquid_policy.zip",
+        help="Path to the trained model (default: models/liquid_policy.zip)"
+    )
+    parser.add_argument(
+        "--episodes",
+        type=int,
+        default=10,
+        help="Number of evaluation episodes (default: 10)"
+    )
+    parser.add_argument(
+        "--render",
+        action="store_true",
+        help="Print environment state to console during evaluation"
+    )
+    parser.add_argument(
+        "--seed",
+        type=int,
+        default=None,
+        help="Random seed for evaluation (default: None)"
+    )
+
+    args = parser.parse_args()
+
+    print("=" * 60)
+    print("Evaluating PPO Liquid NN Agent on DroneWindEnv")
+    print("=" * 60)
+    print(f"Model path: {args.model_path}")
+    print(f"Number of episodes: {args.episodes}")
+    print("=" * 60)
+
+    # Check if model file exists
+    if not os.path.exists(args.model_path):
+        print(f"\nError: Model file not found at {args.model_path}")
+        print("Please train a model first using:")
+        print("  python train/train_liquid_ppo.py")
+        return
+
+    # Create environment
+    print("\nCreating environment...")
+    env = DroneWindEnv()
+
+    # Load the model
+    print(f"Loading model from {args.model_path}...")
+    try:
+        model = PPO.load(args.model_path, env=env)
+        print("Model loaded successfully!")
+    except Exception as e:
+        print(f"\nError loading model: {e}")
+        return
+
+    # Run evaluation episodes
+    print(f"\nRunning {args.episodes} evaluation episodes...")
+    print("-" * 60)
+
+    rewards = []
+    episode_lengths = []
+
+    for episode in range(args.episodes):
+        obs, info = env.reset(seed=args.seed)
+        done = False
+        truncated = False
+        total_reward = 0.0
+        step_count = 0
+
+        if args.render:
+            print(f"\nEpisode {episode + 1}:")
+            env.render()
+
+        while not (done or truncated):
+            # Get action from the model (deterministic)
+            action, _ = model.predict(obs, deterministic=True)
+
+            # Step the environment
+            obs, reward, done, truncated, info = env.step(action)
+
+            total_reward += reward
+            step_count += 1
+
+            if args.render:
+                env.render()
+
+        rewards.append(total_reward)
+        episode_lengths.append(step_count)
+
+        status = "terminated" if done else "truncated"
+        print(f"Episode {episode + 1}: Reward = {total_reward:.2f}, "
+              f"Length = {step_count} steps ({status})")
+
+    # Print statistics
+    print("\n" + "=" * 60)
+    print("Evaluation Results")
+    print("=" * 60)
+    print(f"Average reward: {np.mean(rewards):.2f} ± {np.std(rewards):.2f}")
+    print(f"Average episode length: {np.mean(episode_lengths):.1f} ± {np.std(episode_lengths):.1f}")
+    print(f"Average survival time: {np.mean(episode_lengths):.1f} steps")
+    print(f"Min reward: {np.min(rewards):.2f}")
+    print(f"Max reward: {np.max(rewards):.2f}")
+    print(f"Min episode length: {np.min(episode_lengths)}")
+    print(f"Max episode length: {np.max(episode_lengths)}")
+    print("=" * 60)
+
+    # Print per-episode rewards
+    print("\nPer-episode rewards:")
+    for i, reward in enumerate(rewards, 1):
+        print(f"  Episode {i}: {reward:.2f}")
+
+    # Optional: Try to plot if matplotlib is available
+    try:
+        import matplotlib.pyplot as plt
+
+        plt.figure(figsize=(10, 5))
+
+        # Plot 1: Episode rewards
+        plt.subplot(1, 2, 1)
+        plt.plot(range(1, len(rewards) + 1), rewards, 'o-', linewidth=2, markersize=6)
+        plt.axhline(y=np.mean(rewards), color='r', linestyle='--', label=f'Mean: {np.mean(rewards):.2f}')
+        plt.xlabel('Episode')
+        plt.ylabel('Total Reward')
+        plt.title('Episode Rewards (Liquid NN)')
+        plt.grid(True, alpha=0.3)
+        plt.legend()
+
+        # Plot 2: Episode lengths
+        plt.subplot(1, 2, 2)
+        plt.plot(range(1, len(episode_lengths) + 1), episode_lengths, 's-',
+                 linewidth=2, markersize=6, color='green')
+        plt.axhline(y=np.mean(episode_lengths), color='r', linestyle='--',
+                    label=f'Mean: {np.mean(episode_lengths):.1f}')
+        plt.xlabel('Episode')
+        plt.ylabel('Episode Length')
+        plt.title('Episode Lengths (Liquid NN)')
+        plt.grid(True, alpha=0.3)
+        plt.legend()
+
+        plt.tight_layout()
+        plt.savefig('eval_liquid_results.png', dpi=150, bbox_inches='tight')
+        print("\n✓ Evaluation plots saved to eval_liquid_results.png")
+        print("  (Close the plot window to continue)")
+        plt.show(block=False)
+        plt.pause(2)  # Show for 2 seconds
+        plt.close()
+
+    except ImportError:
+        # Matplotlib not available, skip plotting
+        pass
+    except Exception as e:
+        print(f"\nNote: Could not generate plots: {e}")
+
+
+if __name__ == "__main__":
+    main()
+
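
A typical invocation of this evaluator (both flags are defined by the argparse block above; --render prints the environment state each step):

    python eval/eval_liquid_policy.py --episodes 10 --render
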
eval/eval_mlp_baseline.py
ADDED
@@ -0,0 +1,176 @@
+"""
+Evaluate a trained PPO MLP baseline on the DroneWindEnv environment.
+
+This script loads a saved PPO model and runs evaluation episodes,
+printing statistics about average reward and episode length.
+"""
+
+import os
+import sys
+import argparse
+import numpy as np
+from stable_baselines3 import PPO
+
+# Add project root to path
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from env.drone_env import DroneWindEnv
+
+
+def main():
+    """Main evaluation function."""
+    parser = argparse.ArgumentParser(description="Evaluate PPO agent on DroneWindEnv")
+    parser.add_argument(
+        "--model-path",
+        type=str,
+        default="models/mlp_baseline.zip",
+        help="Path to the trained model (default: models/mlp_baseline.zip)"
+    )
+    parser.add_argument(
+        "--episodes",
+        type=int,
+        default=10,
+        help="Number of evaluation episodes (default: 10)"
+    )
+    parser.add_argument(
+        "--render",
+        action="store_true",
+        help="Print environment state to console during evaluation"
+    )
+    parser.add_argument(
+        "--seed",
+        type=int,
+        default=None,
+        help="Random seed for evaluation (default: None)"
+    )
+
+    args = parser.parse_args()
+
+    print("=" * 60)
+    print("Evaluating PPO Agent on DroneWindEnv")
+    print("=" * 60)
+    print(f"Model path: {args.model_path}")
+    print(f"Number of episodes: {args.episodes}")
+    print("=" * 60)
+
+    # Check if model file exists
+    if not os.path.exists(args.model_path):
+        print(f"\nError: Model file not found at {args.model_path}")
+        print("Please train a model first using:")
+        print("  python train/train_mlp_ppo.py")
+        return
+
+    # Create environment
+    print("\nCreating environment...")
+    env = DroneWindEnv()
+
+    # Load the model
+    print(f"Loading model from {args.model_path}...")
+    try:
+        model = PPO.load(args.model_path, env=env)
+        print("Model loaded successfully!")
+    except Exception as e:
+        print(f"\nError loading model: {e}")
+        return
+
+    # Run evaluation episodes
+    print(f"\nRunning {args.episodes} evaluation episodes...")
+    print("-" * 60)
+
+    rewards = []
+    episode_lengths = []
+
+    for episode in range(args.episodes):
+        obs, info = env.reset(seed=args.seed)
+        done = False
+        truncated = False
+        total_reward = 0.0
+        step_count = 0
+
+        if args.render:
+            print(f"\nEpisode {episode + 1}:")
+            env.render()
+
+        while not (done or truncated):
+            # Get action from the model (deterministic)
+            action, _ = model.predict(obs, deterministic=True)
+
+            # Step the environment
+            obs, reward, done, truncated, info = env.step(action)
+
+            total_reward += reward
+            step_count += 1
+
+            if args.render:
+                env.render()
+
+        rewards.append(total_reward)
+        episode_lengths.append(step_count)
+
+        status = "terminated" if done else "truncated"
+        print(f"Episode {episode + 1}: Reward = {total_reward:.2f}, "
+              f"Length = {step_count} steps ({status})")
+
+    # Print statistics
+    print("\n" + "=" * 60)
+    print("Evaluation Results")
+    print("=" * 60)
+    print(f"Average reward: {np.mean(rewards):.2f} ± {np.std(rewards):.2f}")
+    print(f"Average episode length: {np.mean(episode_lengths):.1f} ± {np.std(episode_lengths):.1f}")
+    print(f"Min reward: {np.min(rewards):.2f}")
+    print(f"Max reward: {np.max(rewards):.2f}")
+    print(f"Min episode length: {np.min(episode_lengths)}")
+    print(f"Max episode length: {np.max(episode_lengths)}")
+    print("=" * 60)
+
+    # Print per-episode rewards
+    print("\nPer-episode rewards:")
+    for i, reward in enumerate(rewards, 1):
+        print(f"  Episode {i}: {reward:.2f}")
+
+    # Optional: Try to plot if matplotlib is available
+    try:
+        import matplotlib.pyplot as plt
+
+        plt.figure(figsize=(10, 5))
+
+        # Plot 1: Episode rewards
+        plt.subplot(1, 2, 1)
+        plt.plot(range(1, len(rewards) + 1), rewards, 'o-', linewidth=2, markersize=6)
+        plt.axhline(y=np.mean(rewards), color='r', linestyle='--', label=f'Mean: {np.mean(rewards):.2f}')
+        plt.xlabel('Episode')
+        plt.ylabel('Total Reward')
+        plt.title('Episode Rewards')
+        plt.grid(True, alpha=0.3)
+        plt.legend()
+
+        # Plot 2: Episode lengths
+        plt.subplot(1, 2, 2)
+        plt.plot(range(1, len(episode_lengths) + 1), episode_lengths, 's-',
+                 linewidth=2, markersize=6, color='green')
+        plt.axhline(y=np.mean(episode_lengths), color='r', linestyle='--',
+                    label=f'Mean: {np.mean(episode_lengths):.1f}')
+        plt.xlabel('Episode')
+        plt.ylabel('Episode Length')
+        plt.title('Episode Lengths')
+        plt.grid(True, alpha=0.3)
+        plt.legend()
+
+        plt.tight_layout()
+        plt.savefig('eval_results.png', dpi=150, bbox_inches='tight')
+        print("\n✓ Evaluation plots saved to eval_results.png")
+        print("  (Close the plot window to continue)")
+        plt.show(block=False)
+        plt.pause(2)  # Show for 2 seconds
+        plt.close()
+
+    except ImportError:
+        # Matplotlib not available, skip plotting
+        pass
+    except Exception as e:
+        print(f"\nNote: Could not generate plots: {e}")
+
+
+if __name__ == "__main__":
+    main()
+
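
Since the two evaluators differ only in defaults and plot labels, the MLP baseline and the Liquid policy can also be compared head-to-head in a few lines; a minimal sketch, assuming both checkpoints exist under models/:

    import numpy as np
    from stable_baselines3 import PPO
    from env.drone_env import DroneWindEnv

    env = DroneWindEnv()
    for name, path in [("MLP", "models/mlp_baseline.zip"), ("Liquid", "models/liquid_policy.zip")]:
        model = PPO.load(path, env=env)
        totals = []
        for _ in range(10):
            obs, _ = env.reset()
            done = truncated = False
            total = 0.0
            while not (done or truncated):
                action, _ = model.predict(obs, deterministic=True)
                obs, reward, done, truncated, _ = env.step(action)
                total += reward
            totals.append(total)
        print(f"{name}: {np.mean(totals):.2f} ± {np.std(totals):.2f}")
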
install_log.txt
ADDED
@@ -0,0 +1,197 @@
+Defaulting to user installation because normal site-packages is not writeable
+Collecting PyFlyt
+  Downloading pyflyt-0.29.0-py3-none-any.whl.metadata (5.2 kB)
+Downloading pyflyt-0.29.0-py3-none-any.whl (215 kB)
+Installing collected packages: PyFlyt
+Successfully installed PyFlyt-0.29.0
+Defaulting to user installation because normal site-packages is not writeable
+Requirement already satisfied: numpy in /usr/lib64/python3.14/site-packages (2.3.5)
+Collecting gymnasium
+  Using cached gymnasium-1.2.2-py3-none-any.whl.metadata (10 kB)
+Collecting opensimplex
+  Downloading opensimplex-0.4.5.1-py3-none-any.whl.metadata (10 kB)
+Collecting ncps
+  Downloading ncps-1.0.1-py3-none-any.whl.metadata (702 bytes)
+Collecting stable-baselines3
+  Downloading stable_baselines3-2.7.0-py3-none-any.whl.metadata (4.8 kB)
+Collecting rich
+  Downloading rich-14.2.0-py3-none-any.whl.metadata (18 kB)
+Collecting tqdm
+  Downloading tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
+Collecting cloudpickle>=1.2.0 (from gymnasium)
+  Downloading cloudpickle-3.1.2-py3-none-any.whl.metadata (7.1 kB)
+Requirement already satisfied: typing-extensions>=4.3.0 in /usr/lib/python3.14/site-packages (from gymnasium) (4.15.0)
+Collecting farama-notifications>=0.0.1 (from gymnasium)
+  Downloading Farama_Notifications-0.0.4-py3-none-any.whl.metadata (558 bytes)
+Collecting future (from ncps)
+  Downloading future-1.0.0-py3-none-any.whl.metadata (4.0 kB)
+Requirement already satisfied: packaging in /usr/lib/python3.14/site-packages (from ncps) (25.0)
+Collecting torch<3.0,>=2.3 (from stable-baselines3)
+  Using cached torch-2.9.1-cp314-cp314-manylinux_2_28_x86_64.whl.metadata (30 kB)
+Collecting pandas (from stable-baselines3)
+  Downloading pandas-2.3.3-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl.metadata (91 kB)
+Collecting matplotlib (from stable-baselines3)
+  Downloading matplotlib-3.10.7-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (11 kB)
+Collecting filelock (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading filelock-3.20.0-py3-none-any.whl.metadata (2.1 kB)
+Requirement already satisfied: setuptools in /usr/lib/python3.14/site-packages (from torch<3.0,>=2.3->stable-baselines3) (78.1.1)
+Collecting sympy>=1.13.3 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading sympy-1.14.0-py3-none-any.whl.metadata (12 kB)
+Collecting networkx>=2.5.1 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading networkx-3.6-py3-none-any.whl.metadata (6.8 kB)
+Requirement already satisfied: jinja2 in /usr/lib/python3.14/site-packages (from torch<3.0,>=2.3->stable-baselines3) (3.1.6)
+Collecting fsspec>=0.8.5 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading fsspec-2025.10.0-py3-none-any.whl.metadata (10 kB)
+Collecting nvidia-cuda-nvrtc-cu12==12.8.93 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cuda_nvrtc_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-cuda-runtime-cu12==12.8.90 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cuda_runtime_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-cuda-cupti-cu12==12.8.90 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cuda_cupti_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-cudnn-cu12==9.10.2.21 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_x86_64.whl.metadata (1.8 kB)
+Collecting nvidia-cublas-cu12==12.8.4.1 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cublas_cu12-12.8.4.1-py3-none-manylinux_2_27_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-cufft-cu12==11.3.3.83 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cufft_cu12-11.3.3.83-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-curand-cu12==10.3.9.90 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_curand_cu12-10.3.9.90-py3-none-manylinux_2_27_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-cusolver-cu12==11.7.3.90 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cusolver_cu12-11.7.3.90-py3-none-manylinux_2_27_x86_64.whl.metadata (1.8 kB)
+Collecting nvidia-cusparse-cu12==12.5.8.93 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cusparse_cu12-12.5.8.93-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.8 kB)
+Collecting nvidia-cusparselt-cu12==0.7.1 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cusparselt_cu12-0.7.1-py3-none-manylinux2014_x86_64.whl.metadata (7.0 kB)
+Collecting nvidia-nccl-cu12==2.27.5 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_nccl_cu12-2.27.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.0 kB)
+Collecting nvidia-nvshmem-cu12==3.3.20 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_nvshmem_cu12-3.3.20-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.1 kB)
+Collecting nvidia-nvtx-cu12==12.8.90 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_nvtx_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.8 kB)
+Collecting nvidia-nvjitlink-cu12==12.8.93 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_nvjitlink_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.7 kB)
+Collecting nvidia-cufile-cu12==1.13.1.3 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading nvidia_cufile_cu12-1.13.1.3-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.7 kB)
+Collecting triton==3.5.1 (from torch<3.0,>=2.3->stable-baselines3)
+  Downloading triton-3.5.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.7 kB)
+Collecting markdown-it-py>=2.2.0 (from rich)
+  Downloading markdown_it_py-4.0.0-py3-none-any.whl.metadata (7.3 kB)
+Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/lib/python3.14/site-packages (from rich) (2.19.1)
+Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich)
+  Downloading mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB)
+Collecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch<3.0,>=2.3->stable-baselines3)
+  Downloading mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
+Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib64/python3.14/site-packages (from jinja2->torch<3.0,>=2.3->stable-baselines3) (3.0.2)
+Collecting contourpy>=1.0.1 (from matplotlib->stable-baselines3)
+  Downloading contourpy-1.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (5.5 kB)
+Collecting cycler>=0.10 (from matplotlib->stable-baselines3)
+  Using cached cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
+Collecting fonttools>=4.22.0 (from matplotlib->stable-baselines3)
+  Downloading fonttools-4.61.0-cp314-cp314-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl.metadata (113 kB)
+Collecting kiwisolver>=1.3.1 (from matplotlib->stable-baselines3)
+  Downloading kiwisolver-1.4.9-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (6.3 kB)
+Requirement already satisfied: pillow>=8 in /usr/lib64/python3.14/site-packages (from matplotlib->stable-baselines3) (11.3.0)
+Requirement already satisfied: pyparsing>=3 in /usr/lib/python3.14/site-packages (from matplotlib->stable-baselines3) (3.1.2)
+Requirement already satisfied: python-dateutil>=2.7 in /usr/lib/python3.14/site-packages (from matplotlib->stable-baselines3) (2.9.0.post0)
+Requirement already satisfied: six>=1.5 in /usr/lib/python3.14/site-packages (from python-dateutil>=2.7->matplotlib->stable-baselines3) (1.17.0)
+Collecting pytz>=2020.1 (from pandas->stable-baselines3)
+  Downloading pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
+Collecting tzdata>=2022.7 (from pandas->stable-baselines3)
+  Downloading tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB)
+Downloading gymnasium-1.2.2-py3-none-any.whl (952 kB)
+   ━━ 9… 1… eta 0:0…
+      kB M…
+Downloading opensimplex-0.4.5.1-py3-none-any.whl (267 kB)
+Downloading ncps-1.0.1-py3-none-any.whl (60 kB)
+Downloading stable_baselines3-2.7.0-py3-none-any.whl (187 kB)
+Downloading torch-2.9.1-cp314-cp314-manylinux_2_28_x86_64.whl (899.7 MB)
+   ━━ 8… 2… eta 0:0…
+      MB M…
+Downloading nvidia_cublas_cu12-12.8.4.1-py3-none-manylinux_2_27_x86_64.whl (594.3 MB)
+   ━━ 5… 2… eta 0:0…
+      MB M…
+Downloading nvidia_cuda_cupti_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (10.2 MB)
+   ━━ 1… 3… eta 0:0…
+      MB M…
+Downloading nvidia_cuda_nvrtc_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (88.0 MB)
+   ━━ 8… 3… eta 0:0…
+      MB M…
+Downloading nvidia_cuda_runtime_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (954 kB)
+   ━━ 9… 2… eta 0:0…
+      kB M…
+Downloading nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_x86_64.whl (706.8 MB)
+   ━━ 7… 2… eta 0:0…
+      MB M…
+Downloading nvidia_cufft_cu12-11.3.3.83-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (193.1 MB)
+   ━━ 1… 1… eta 0:0…
+      MB M…
+Downloading nvidia_cufile_cu12-1.13.1.3-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (1.2 MB)
+   ━━ 1… 7… eta 0:0…
+      MB M…
+Downloading nvidia_curand_cu12-10.3.9.90-py3-none-manylinux_2_27_x86_64.whl (63.6 MB)
+   ━━ 6… 1… eta 0:0…
+      MB M…
+Downloading nvidia_cusolver_cu12-11.7.3.90-py3-none-manylinux_2_27_x86_64.whl (267.5 MB)
+   ━━ 2… 1… eta 0:0…
+      MB M…
+Downloading nvidia_cusparse_cu12-12.5.8.93-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (288.2 MB)
+   ━━ 2… 1… eta 0:0…
+      MB M…
+Downloading nvidia_cusparselt_cu12-0.7.1-py3-none-manylinux2014_x86_64.whl (287.2 MB)
+   ━━ 2… 2… eta 0:0…
+      MB M…
+Downloading nvidia_nccl_cu12-2.27.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (322.3 MB)
+   ━━ 3… 1… eta 0:0…
+      MB M…
+Downloading nvidia_nvjitlink_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (39.3 MB)
+   ━━ 3… 1… eta 0:0…
+      MB M…
+Downloading nvidia_nvshmem_cu12-3.3.20-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (124.7 MB)
+   ━━ 1… 1… eta 0:0…
+      MB M…
+Downloading nvidia_nvtx_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89 kB)
+Downloading triton-3.5.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (170.5 MB)
+   ━━ 1… 3… eta 0:0…
+      MB M…
+Downloading rich-14.2.0-py3-none-any.whl (243 kB)
+Downloading tqdm-4.67.1-py3-none-any.whl (78 kB)
+Downloading cloudpickle-3.1.2-py3-none-any.whl (22 kB)
+Downloading Farama_Notifications-0.0.4-py3-none-any.whl (2.5 kB)
+Downloading fsspec-2025.10.0-py3-none-any.whl (200 kB)
+Downloading markdown_it_py-4.0.0-py3-none-any.whl (87 kB)
+Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
+Downloading networkx-3.6-py3-none-any.whl (2.1 MB)
+   ━━ 2… 3… eta 0:0…
+      MB M…
+Downloading sympy-1.14.0-py3-none-any.whl (6.3 MB)
+   ━━ 6… 3… eta 0:0…
+      MB M…
+Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
+   ━━ 5… 1… eta 0:0…
+      kB M…
+Downloading filelock-3.20.0-py3-none-any.whl (16 kB)
+Downloading future-1.0.0-py3-none-any.whl (491 kB)
+Downloading matplotlib-3.10.7-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (9.8 MB)
+   ━━ 9… 3… eta 0:0…
+      MB M…
+Downloading contourpy-1.3.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (363 kB)
+Using cached cycler-0.12.1-py3-none-any.whl (8.3 kB)
+Downloading fonttools-4.61.0-cp314-cp314-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl (4.9 MB)
+   ━━ 4… 3… eta 0:0…
+      MB M…
+Downloading kiwisolver-1.4.9-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (1.5 MB)
+   ━━ 1… 2… eta 0:0…
+      MB M…
+Downloading pandas-2.3.3-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl (12.3 MB)
+   ━━ 1… 3… eta 0:0…
+      MB M…
+Downloading pytz-2025.2-py2.py3-none-any.whl (509 kB)
+Downloading tzdata-2025.2-py2.py3-none-any.whl (347 kB)
+Installing collected packages: pytz, nvidia-cusparselt-cu12, mpmath, farama-notifications, tzdata, triton, tqdm, sympy, opensimplex, nvidia-nvtx-cu12, nvidia-nvshmem-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufile-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, networkx, mdurl, kiwisolver, future, fsspec, fonttools, filelock, cycler, contourpy, cloudpickle, pandas, nvidia-cusparse-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, ncps, matplotlib, markdown-it-py, gymnasium, rich, nvidia-cusolver-cu12, torch, stable-baselines3
+
+ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
+pyflyt 0.29.0 requires numba, which is not installed.
+pyflyt 0.29.0 requires pettingzoo, which is not installed.
+pyflyt 0.29.0 requires pybullet, which is not installed.
+pyflyt 0.29.0 requires numpy<2.0.0, but you have numpy 2.3.5 which is incompatible.
+Successfully installed cloudpickle-3.1.2 contourpy-1.3.3 cycler-0.12.1 farama-notifications-0.0.4 filelock-3.20.0 fonttools-4.61.0 fsspec-2025.10.0 future-1.0.0 gymnasium-1.2.2 kiwisolver-1.4.9 markdown-it-py-4.0.0 matplotlib-3.10.7 mdurl-0.1.2 mpmath-1.3.0 ncps-1.0.1 networkx-3.6 nvidia-cublas-cu12-12.8.4.1 nvidia-cuda-cupti-cu12-12.8.90 nvidia-cuda-nvrtc-cu12-12.8.93 nvidia-cuda-runtime-cu12-12.8.90 nvidia-cudnn-cu12-9.10.2.21 nvidia-cufft-cu12-11.3.3.83 nvidia-cufile-cu12-1.13.1.3 nvidia-curand-cu12-10.3.9.90 nvidia-cusolver-cu12-11.7.3.90 nvidia-cusparse-cu12-12.5.8.93 nvidia-cusparselt-cu12-0.7.1 nvidia-nccl-cu12-2.27.5 nvidia-nvjitlink-cu12-12.8.93 nvidia-nvshmem-cu12-3.3.20 nvidia-nvtx-cu12-12.8.90 opensimplex-0.4.5.1 pandas-2.3.3 pytz-2025.2 rich-14.2.0 stable-baselines3-2.7.0 sympy-1.14.0 torch-2.9.1 tqdm-4.67.1 triton-3.5.1 tzdata-2025.2
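
The conflicts at the end of this log come from PyFlyt being installed before its pinned dependencies. A plausible remediation would be the command below, though the pybullet and pygame logs further down suggest no compatible wheels exist for this Python 3.14 interpreter, so the pybullet part would likely still fail:

    pip install "numpy<2.0" numba pettingzoo pybullet
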
interactive_log.txt
ADDED
@@ -0,0 +1,8 @@
+Initializing Interactive Dashboard...
+No trained model found. Using untrained Liquid Brain.
+Using cpu device
+Wrapping the env with a `Monitor` wrapper
+Wrapping the env in a DummyVecEnv.
+
+=== DASHBOARD LIVE ===
+Close the window to exit.
main.py
ADDED
@@ -0,0 +1,144 @@
+"""
+Main entry point for the Drone RL project.
+Tests imports and basic functionality.
+"""
+
+import sys
+import numpy as np
+
+print("=" * 50)
+print("Drone RL Project - Import Test")
+print("=" * 50)
+
+# Test PyTorch
+try:
+    import torch
+    print(f"✓ PyTorch {torch.__version__} imported successfully")
+    print(f"  CUDA available: {torch.cuda.is_available()}")
+except ImportError as e:
+    print(f"✗ PyTorch import failed: {e}")
+    sys.exit(1)
+
+# Test NumPy
+try:
+    print(f"✓ NumPy {np.__version__} imported successfully")
+except ImportError as e:
+    print(f"✗ NumPy import failed: {e}")
+    sys.exit(1)
+
+# Test Gymnasium
+try:
+    import gymnasium as gym
+    print(f"✓ Gymnasium {gym.__version__} imported successfully")
+
+    # Test creating a simple environment
+    try:
+        env = gym.make("CartPole-v1")
+        print(f"✓ Gymnasium environment 'CartPole-v1' created successfully")
+        env.close()
+    except Exception as e:
+        print(f"⚠ Could not create test environment: {e}")
+except ImportError as e:
+    print(f"✗ Gymnasium import failed: {e}")
+    sys.exit(1)
+
+# Test Stable-Baselines3
+try:
+    import stable_baselines3
+    print(f"✓ Stable-Baselines3 imported successfully")
+except ImportError as e:
+    print(f"⚠ Stable-Baselines3 import failed: {e}")
+    print("  (This is optional, you can use cleanrl instead)")
+
+# Test Pygame
+pygame_works = False
+try:
+    import pygame
+    print(f"✓ Pygame {pygame.__version__} imported successfully")
+
+    # Try to initialize pygame
+    try:
+        pygame.init()
+        print("✓ Pygame initialized successfully")
+
+        # Create a test window
+        screen = pygame.display.set_mode((640, 480))
+        pygame.display.set_caption("Drone RL - Pygame Test")
+        print("✓ Pygame window created successfully")
+        print("  Window should be visible. Close it to continue...")
+
+        # Keep window open briefly, then close
+        import time
+        clock = pygame.time.Clock()
+        running = True
+        start_time = time.time()
+
+        while running and (time.time() - start_time) < 2.0:  # Show for 2 seconds
+            for event in pygame.event.get():
+                if event.type == pygame.QUIT:
+                    running = False
+
+            screen.fill((50, 50, 50))  # Dark gray background
+            font = pygame.font.Font(None, 36)
+            text = font.render("Pygame Works!", True, (255, 255, 255))
+            text_rect = text.get_rect(center=(320, 240))
+            screen.blit(text, text_rect)
+            pygame.display.flip()
+            clock.tick(60)
+
+        pygame.quit()
+        pygame_works = True
+        print("✓ Pygame window test completed successfully")
+
+    except Exception as e:
+        print(f"⚠ Pygame initialization failed: {e}")
+        pygame.quit()
+
+except ImportError as e:
+    print(f"✗ Pygame import failed: {e}")
+
+# Fallback to matplotlib if pygame failed
+if not pygame_works:
+    print("\n" + "=" * 50)
+    print("Falling back to matplotlib animation...")
+    print("=" * 50)
+    try:
+        import matplotlib.pyplot as plt
+        import matplotlib.animation as animation
+
+        print("✓ Matplotlib imported successfully")
+
+        # Create a simple animation
+        fig, ax = plt.subplots()
+        ax.set_xlim(0, 10)
+        ax.set_ylim(0, 10)
+        ax.set_title("Drone RL - Matplotlib Test")
+
+        line, = ax.plot([], [], 'o-', lw=2)
+
+        def animate(frame):
+            x = np.linspace(0, 10, 100)
+            y = np.sin(x + frame * 0.1) * 5 + 5
+            line.set_data(x, y)
+            return line,
+
+        anim = animation.FuncAnimation(fig, animate, frames=100, interval=50, blit=True)
+        print("✓ Matplotlib animation created")
+        print("  Animation window should be visible. Close it to continue...")
+        plt.show(block=False)
+
+        # Keep it open briefly
+        import time
+        time.sleep(2)
+        plt.close()
+
+        print("✓ Matplotlib animation test completed successfully")
+
+    except ImportError as e:
+        print(f"✗ Matplotlib import failed: {e}")
+        print("  Please install matplotlib: pip install matplotlib")
+
+print("\n" + "=" * 50)
+print("All tests completed!")
+print("=" * 50)
+
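
On a headless machine the pygame window test above will fail at set_mode and fall through to the matplotlib branch; a common workaround (a standard SDL environment variable, not something this repo sets itself) is to run the check under the dummy video driver:

    SDL_VIDEODRIVER=dummy python main.py
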
models/__init__.py
ADDED
@@ -0,0 +1,2 @@
+"""Model definitions and architectures."""
+
models/liquid_cell.py
ADDED
@@ -0,0 +1,86 @@
+"""
+Liquid Neural Network Cell - Discrete-time approximation of continuous-time dynamics.
+
+Implements a liquid cell with learnable per-neuron time constants.
+The cell updates hidden state using a differential equation approximation.
+"""
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+
+class LiquidCell(nn.Module):
+    """
+    A discrete-time liquid neural network cell.
+
+    Hidden state update rule:
+        h_{t+1,i} = h_{t,i} + dt / tau_i * ( tanh( W_hh[i]·h_t + W_xh[i]·x_t + b[i] ) - h_{t,i} )
+
+    where tau_i is a learnable per-neuron time constant.
+
+    Args:
+        hidden_size: Number of hidden neurons
+        input_size: Size of input vector
+        dt: Time step for discrete approximation (default: 0.1)
+    """
+
+    def __init__(self, hidden_size: int, input_size: int, dt: float = 0.1):
+        super().__init__()
+        self.hidden_size = hidden_size
+        self.input_size = input_size
+        self.dt = dt
+
+        # Recurrent weight matrix: (hidden_size, hidden_size)
+        self.W_hh = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.1)
+
+        # Input weight matrix: (hidden_size, input_size)
+        self.W_xh = nn.Parameter(torch.randn(hidden_size, input_size) * 0.1)
+
+        # Bias vector: (hidden_size,)
+        self.b = nn.Parameter(torch.zeros(hidden_size))
+
+        # Raw time constants (will be transformed to positive values)
+        # Shape: (hidden_size,)
+        self.tau_raw = nn.Parameter(torch.ones(hidden_size))
+
+    def forward(self, h: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
+        """
+        Forward pass through the liquid cell.
+
+        Args:
+            h: Hidden state tensor of shape (batch, hidden_size)
+            x: Input tensor of shape (batch, input_size)
+
+        Returns:
+            Next hidden state tensor of shape (batch, hidden_size)
+        """
+        # Compute time constants: tau = softplus(tau_raw) + 1e-3
+        # This ensures tau is always positive
+        tau = F.softplus(self.tau_raw) + 1e-3
+
+        # Compute preactivation:
+        #   preact = tanh( W_hh @ h^T + W_xh @ x^T + b )
+        # Using batch matrix multiplication
+
+        # W_hh @ h^T: (hidden_size, hidden_size) @ (hidden_size, batch) -> (hidden_size, batch)
+        # Then transpose to (batch, hidden_size)
+        h_proj = torch.matmul(h, self.W_hh.t())  # (batch, hidden_size)
+
+        # W_xh @ x^T: (hidden_size, input_size) @ (input_size, batch) -> (hidden_size, batch)
+        # Then transpose to (batch, hidden_size)
+        x_proj = torch.matmul(x, self.W_xh.t())  # (batch, hidden_size)
+
+        # Add bias and apply tanh
+        preact = torch.tanh(h_proj + x_proj + self.b)  # (batch, hidden_size)
+
+        # Update hidden state:
+        #   h_next = h + dt * (preact - h) / tau
+        # tau is (hidden_size,), so we need to broadcast
+        h_next = h + self.dt * (preact - h) / tau.unsqueeze(0)  # (batch, hidden_size)
+
+        # Clamp to reasonable range for stability
+        h_next = torch.clamp(h_next, -5.0, 5.0)
+
+        return h_next
+
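
A minimal smoke test for the cell above (shapes here are illustrative assumptions, not repo code): each call integrates the hidden state one Euler step of size dt toward the tanh fixed point, at a per-neuron rate set by tau.

    import torch
    from models.liquid_cell import LiquidCell

    cell = LiquidCell(hidden_size=32, input_size=8, dt=0.1)
    h = torch.zeros(4, 32)   # batch of 4 hidden states
    x = torch.randn(4, 8)    # batch of 4 observations
    for _ in range(10):      # unroll ten liquid steps with a constant input
        h = cell(h, x)
    print(h.shape)           # torch.Size([4, 32])
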
models/liquid_policy.py
ADDED
@@ -0,0 +1,81 @@
+"""
+Liquid Neural Network Policy for Stable-Baselines3.
+
+Implements a custom feature extractor using LiquidCell that can be used
+with PPO and other SB3 algorithms.
+"""
+
+import torch
+import torch.nn as nn
+import gymnasium as gym
+from stable_baselines3.common.torch_layers import BaseFeaturesExtractor
+
+from models.liquid_cell import LiquidCell
+
+
+class LiquidFeatureExtractor(BaseFeaturesExtractor):
+    """
+    Feature extractor using a Liquid Neural Network cell.
+
+    This extractor processes observations through a liquid cell to produce
+    rich temporal features suitable for policy/value networks.
+
+    Args:
+        observation_space: Gymnasium observation space
+        features_dim: Output feature dimension (default: 32)
+        hidden_size: Number of hidden neurons in liquid cell (default: 32)
+        dt: Time step for liquid cell (default: 0.1)
+    """
+
+    def __init__(
+        self,
+        observation_space: gym.Space,
+        features_dim: int = 32,
+        hidden_size: int = 32,
+        dt: float = 0.1,
+    ):
+        super().__init__(observation_space, features_dim)
+
+        # Get observation dimension
+        if isinstance(observation_space, gym.spaces.Box):
+            obs_dim = observation_space.shape[0]
+        else:
+            raise ValueError(f"Unsupported observation space: {observation_space}")
+
+        self.hidden_size = hidden_size
+        self.dt = dt
+
+        # Input projection layer: maps observation to hidden space
+        self.input_layer = nn.Linear(obs_dim, hidden_size)
+
+        # Liquid cell: processes hidden state
+        self.liquid_cell = LiquidCell(hidden_size, hidden_size, dt)
+
+        # Output projection: maps liquid cell output to feature dimension
+        self.output_layer = nn.Linear(hidden_size, features_dim)
+
+    def forward(self, observations: torch.Tensor) -> torch.Tensor:
+        """
+        Forward pass through the liquid feature extractor.
+
+        Args:
+            observations: Input tensor of shape (batch, obs_dim)
+
+        Returns:
+            Feature tensor of shape (batch, features_dim)
+        """
+        # Project input to hidden space and apply tanh
+        x = torch.tanh(self.input_layer(observations))  # (batch, hidden_size)
+
+        # Initialize hidden state from input
+        h = x
+
+        # Apply one liquid cell step
+        # The liquid cell uses both the hidden state and the input
+        h = self.liquid_cell(h, x)  # (batch, hidden_size)
+
+        # Project to output feature dimension
+        features = self.output_layer(h)  # (batch, features_dim)
+
+        return features
+
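
A minimal sketch of wiring this extractor into PPO through policy_kwargs (the standard SB3 mechanism; DroneWindEnv is this repo's 2D wind environment, and the short learn() call is only to verify the plumbing):

    from stable_baselines3 import PPO
    from models.liquid_policy import LiquidFeatureExtractor
    from env.drone_env import DroneWindEnv

    policy_kwargs = dict(
        features_extractor_class=LiquidFeatureExtractor,
        features_extractor_kwargs=dict(features_dim=32, hidden_size=32, dt=0.1),
    )
    model = PPO("MlpPolicy", DroneWindEnv(), policy_kwargs=policy_kwargs, verbose=1)
    model.learn(total_timesteps=10_000)
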
models/liquid_ppo.py
ADDED
@@ -0,0 +1,100 @@
+import torch
+import torch.nn as nn
+import gymnasium as gym
+from stable_baselines3 import PPO
+from stable_baselines3.common.torch_layers import BaseFeaturesExtractor
+from stable_baselines3.common.env_util import make_vec_env
+from stable_baselines3.common.vec_env import SubprocVecEnv
+from ncps.torch import LTC
+from ncps.wirings import AutoNCP
+from env.drone_3d import Drone3DEnv
+
+class LTCFeatureExtractor(BaseFeaturesExtractor):
+    """
+    Custom feature extractor using Liquid Time-Constant (LTC) cells.
+    This lets the agent handle irregular time steps and stiff dynamics better than standard MLPs or LSTMs.
+    """
+    def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 32):
+        super().__init__(observation_space, features_dim)
+
+        input_size = observation_space.shape[0]
+        # self.features_dim is already set by super().__init__
+
+        # Neural Circuit Policy (NCP) wiring for structured connectivity.
+        # We use a small wiring to keep inference fast (< 10 ms);
+        # AutoNCP requires units > output_size, so 48 units drive 32 outputs.
+        wiring = AutoNCP(48, output_size=features_dim)
+
+        self.ltc = LTC(input_size, wiring, batch_first=True)
+
+        # Hidden state for the LTC
+        self.hx = None
+
+    def forward(self, observations: torch.Tensor) -> torch.Tensor:
+        # LTC expects (batch, time, features);
+        # SB3 provides (batch, features), so we add a time dimension
+        if observations.dim() == 2:
+            observations = observations.unsqueeze(1)
+
+        # Initialize hidden state if needed or if the batch size changes
+        batch_size = observations.size(0)
+        if self.hx is None or self.hx.size(0) != batch_size:
+            self.hx = torch.zeros(batch_size, self.ltc.state_size, device=observations.device)
+
+        # Forward pass through the LTC.
+        # Note: standard PPO assumes a stateless policy, so in a production
+        # recurrent setting the hidden state should be managed explicitly
+        # (e.g. with RecurrentPPO from sb3-contrib, which also resets it at
+        # episode boundaries — detecting those boundaries here is hard).
+        # For this demo we treat the LTC as a stateful feature extractor and
+        # simply let its hidden state evolve between calls.
+
+        # Detach the hidden state from the previous graph to prevent
+        # "backward through graph a second time" errors
+        if self.hx is not None:
+            self.hx = self.hx.detach()
+
+        output, self.hx = self.ltc(observations, self.hx)
+
+        # Remove the time dimension
+        return output.squeeze(1)
+
+def make_liquid_ppo(env, verbose=1):
+    """
+    Factory function to create a PPO agent with a Liquid brain.
+    Note: the passed-in `env` is replaced by an internal 16-worker vec env.
+    """
+    # Parallel environments for high-performance training:
+    # A100/A10G GPUs are data hungry, so the physics runs on many CPU cores
+    # to feed them. We use 16 parallel environments.
+    n_envs = 16
+    env = make_vec_env(
+        lambda: Drone3DEnv(render_mode=None, wind_scale=10.0, wind_speed=5.0),
+        n_envs=n_envs,
+        vec_env_cls=SubprocVecEnv
+    )
+
+    # Create the model with hyperparameters tuned for an A100
+    policy_kwargs = dict(
+        features_extractor_class=LTCFeatureExtractor,
+        features_extractor_kwargs=dict(features_dim=32),
+    )
+
+    model = PPO(
+        "MlpPolicy",
+        env,
+        policy_kwargs=policy_kwargs,
+        verbose=verbose,
+        learning_rate=3e-4,
+        n_steps=2048,     # 2048 * 16 = 32,768 steps per update
+        batch_size=4096,  # Large batch size for A100
+        n_epochs=10,
+        gamma=0.99,
+        gae_lambda=0.95,
+        clip_range=0.2,
+        device="cuda"     # Force CUDA
+    )
+    return model
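
A sketch of how this factory would be driven (the 500k step count mirrors train_log_500k.txt and the save path matches the evaluator's default, both assumptions on my part). Note that make_liquid_ppo builds its own 16-worker SubprocVecEnv and hard-codes device="cuda", so the env argument is effectively a placeholder and the call fails on CPU-only machines:

    from models.liquid_ppo import make_liquid_ppo

    model = make_liquid_ppo(env=None)        # the vec env is created inside the factory
    model.learn(total_timesteps=500_000)
    model.save("models/liquid_policy")       # produces models/liquid_policy.zip
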
output.txt
ADDED
@@ -0,0 +1,6 @@
+Python version: 3.14.0 (main, Oct 17 2025, 00:00:00) [GCC 15.2.1 20251022 (Red Hat 15.2.1-3)]
+Numpy imported
+Gymnasium failed: No module named 'gymnasium'
+PyFlyt failed: No module named 'PyFlyt'
+Opensimplex failed: No module named 'opensimplex'
+Ncps failed: No module named 'ncps'
pip_log.txt
ADDED
@@ -0,0 +1,93 @@
+Defaulting to user installation because normal site-packages is not writeable
+Collecting torch>=2.0.0 (from -r requirements.txt (line 1))
+  Using cached torch-2.9.1-cp314-cp314-manylinux_2_28_x86_64.whl.metadata (30 kB)
+Collecting gymnasium>=0.29.0 (from -r requirements.txt (line 2))
+  Using cached gymnasium-1.2.2-py3-none-any.whl.metadata (10 kB)
+Collecting pygame>=2.5.0 (from -r requirements.txt (line 3))
+  Using cached pygame-2.6.1.tar.gz (14.8 MB)
+  Installing build dependencies: started
+  Installing build dependencies: finished with status 'done'
+  Getting requirements to build wheel: started
+  Getting requirements to build wheel: finished with status 'error'
+  error: subprocess-exited-with-error
+
+  × Getting requirements to build wheel did not run successfully.
+  │ exit code: 1
+  ╰─> [67 lines of output]
+      Skipping Cython compilation
+
+
+      WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
+      Using UNIX configuration...
+
+      /bin/sh: line 1: dpkg-architecture: command not found
+      /bin/sh: line 1: gcc: command not found
+      /bin/sh: line 1: gcc: command not found
+      /bin/sh: line 1: sdl2-config: command not found
+      /bin/sh: line 1: sdl2-config: command not found
+      /bin/sh: line 1: sdl2-config: command not found
+      Package freetype2 was not found in the pkg-config search path.
+      Perhaps you should add the directory containing `freetype2.pc'
+      to the PKG_CONFIG_PATH environment variable
+      Package 'freetype2' not found
+      Package freetype2 was not found in the pkg-config search path.
+      Perhaps you should add the directory containing `freetype2.pc'
+      to the PKG_CONFIG_PATH environment variable
+      Package 'freetype2' not found
+      Package freetype2 was not found in the pkg-config search path.
+      Perhaps you should add the directory containing `freetype2.pc'
+      to the PKG_CONFIG_PATH environment variable
+      Package 'freetype2' not found
+      /bin/sh: line 1: freetype-config: command not found
+      /bin/sh: line 1: freetype-config: command not found
+      /bin/sh: line 1: freetype-config: command not found
+
+      Hunting dependencies...
+      WARNING: "sdl2-config" failed!
+      WARNING: "pkg-config freetype2" failed!
+      WARNING: "freetype-config" failed!
+
+      ---
+      For help with compilation see:
+          https://www.pygame.org/wiki/Compilation
+      To contribute to pygame development see:
+          https://www.pygame.org/contribute.html
+      ---
+
+      Traceback (most recent call last):
+        File "/usr/lib/python3.14/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
+          main()
+          ~~~~^^
+        File "/usr/lib/python3.14/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
+          json_out["return_val"] = hook(**hook_input["kwargs"])
+          ~~~~^^^^^^^^^^^^^^^^^^^^^^^^
+        File "/usr/lib/python3.14/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 143, in get_requires_for_build_wheel
+          return hook(config_settings)
+        File "/tmp/pip-build-env-9_tn2xvg/overlay/lib/python3.14/site-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
+          return self._get_build_requires(config_settings, requirements=[])
+          ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        File "/tmp/pip-build-env-9_tn2xvg/overlay/lib/python3.14/site-packages/setuptools/build_meta.py", line 301, in _get_build_requires
+          self.run_setup()
+          ~~~~~~~~~~~~~~^^
+        File "/tmp/pip-build-env-9_tn2xvg/overlay/lib/python3.14/site-packages/setuptools/build_meta.py", line 512, in run_setup
+          super().run_setup(setup_script=setup_script)
+          ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
+        File "/tmp/pip-build-env-9_tn2xvg/overlay/lib/python3.14/site-packages/setuptools/build_meta.py", line 317, in run_setup
+          exec(code, locals())
+          ~~~~^^^^^^^^^^^^^^^^
+        File "<string>", line 432, in <module>
+        File "/tmp/pip-install-tmizgydo/pygame_66dd0a904d484e059768cb5e32cd4cf6/buildconfig/config.py", line 234, in main
+          deps = CFG.main(**kwds, auto_config=auto)
+        File "/tmp/pip-install-tmizgydo/pygame_66dd0a904d484e059768cb5e32cd4cf6/buildconfig/config_unix.py", line 245, in main
+          raise RuntimeError('Unable to run "sdl-config". Please make sure a development version of SDL is installed.')
+      RuntimeError: Unable to run "sdl-config". Please make sure a development version of SDL is installed.
+      [end of output]
+
+  note: This error originates from a subprocess, and is likely not a problem with pip.
+error: subprocess-exited-with-error
+
+× Getting requirements to build wheel did not run successfully.
+│ exit code: 1
+╰─> See above for output.
+
+note: This error originates from a subprocess, and is likely not a problem with pip.
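
The failure above is pygame falling back to a source build (no wheel matches this interpreter) and then aborting because the SDL and FreeType development headers are missing. Installing the system packages should let the build proceed; exact package names are an assumption and vary by distribution:

    # Debian/Ubuntu
    apt-get install libsdl2-dev libfreetype6-dev gcc
    # Fedora/RHEL (the GCC string elsewhere in these logs suggests a Red Hat toolchain)
    dnf install SDL2-devel freetype-devel gcc
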
pybullet_bin_log.txt
ADDED
@@ -0,0 +1,3 @@
+Defaulting to user installation because normal site-packages is not writeable
+ERROR: Could not find a version that satisfies the requirement pybullet (from versions: none)
+ERROR: No matching distribution found for pybullet
pybullet_log.txt
ADDED
The diff for this file is too large to render. See raw diff.
pygame_log.txt
ADDED
@@ -0,0 +1,3 @@
+Defaulting to user installation because normal site-packages is not writeable
+ERROR: Could not find a version that satisfies the requirement pygame (from versions: none)
+ERROR: No matching distribution found for pygame
requirements.txt
ADDED
@@ -0,0 +1,13 @@
+torch>=2.0.0
+gymnasium>=0.29.0
+pygame>=2.5.0
+numpy>=1.24.0
+stable-baselines3>=2.0.0
+matplotlib>=3.7.0
+tensorboard>=2.14.0
+tqdm>=4.65.0
+rich>=13.0.0
+
+PyFlyt>=0.3.0
+opensimplex>=0.4.4
+ncps>=0.0.7
run_demo.sh
ADDED
@@ -0,0 +1,9 @@
+#!/bin/bash
+set -e
+
+echo "=== Project Neuro-Flyt 3D Setup ==="
+echo "Installing dependencies..."
+pip install -r requirements.txt
+
+echo "=== Launching Demo ==="
+python demo_3d.py
setup.sh
ADDED
@@ -0,0 +1,45 @@
+#!/bin/bash
+# Setup script for Drone RL project
+
+echo "Setting up Drone RL project..."
+echo ""
+
+# Check Python version
+PYTHON_CMD="python3.10"
+if ! command -v $PYTHON_CMD &> /dev/null; then
+    PYTHON_CMD="python3.11"
+fi
+
+if ! command -v $PYTHON_CMD &> /dev/null; then
+    echo "Error: Python 3.10+ not found. Please install Python 3.10 or higher."
+    exit 1
+fi
+
+echo "Using: $($PYTHON_CMD --version)"
+echo ""
+
+# Create virtual environment if it doesn't exist
+if [ ! -d "venv" ]; then
+    echo "Creating virtual environment..."
+    $PYTHON_CMD -m venv venv
+fi
+
+# Activate virtual environment
+echo "Activating virtual environment..."
+source venv/bin/activate
+
+# Upgrade pip
+echo "Upgrading pip..."
+pip install --upgrade pip
+
+# Install requirements
+echo "Installing requirements..."
+pip install -r requirements.txt
+
+echo ""
+echo "Setup complete! To activate the environment, run:"
+echo "  source venv/bin/activate"
+echo ""
+echo "To test the setup, run:"
+echo "  python main.py"
+
test_log.txt
ADDED
@@ -0,0 +1,14 @@
| 1 |
+
Testing Wind Physics...
|
| 2 |
+
Traceback (most recent call last):
|
| 3 |
+
File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/test_physics.py", line 41, in <module>
|
| 4 |
+
test_wind_physics()
|
| 5 |
+
~~~~~~~~~~~~~~~~~^^
|
| 6 |
+
File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/test_physics.py", line 11, in test_wind_physics
|
| 7 |
+
obs, _, _, _, info = env.step(np.zeros(4)) # Hover action (approx)
|
| 8 |
+
~~~~~~~~^^^^^^^^^^^^^
|
| 9 |
+
File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/env/drone_3d.py", line 66, in step
|
| 10 |
+
raw_wind = self.wind_field.get_wind(pos[0], pos[1], pos[2], self.dt)
|
| 11 |
+
File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/env/drone_3d.py", line 27, in get_wind
|
| 12 |
+
u = self.noise.noise4d(x * self.scale, y * self.scale, z * self.scale, self.time_offset)
|
| 13 |
+
^^^^^^^^^^^^^^^^^^
|
| 14 |
+
AttributeError: 'OpenSimplex' object has no attribute 'noise4d'. Did you mean: 'noise4'?
|
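
The traceback in test_log.txt comes down to an API rename: opensimplex 0.4+ exposes noise4(x, y, z, w) in place of noise4d, which the interpreter's own "Did you mean: 'noise4'?" hint already points at. A minimal sketch of the corrected call for env/drone_3d.py, assuming the wind field keeps an OpenSimplex instance as self.noise with the self.scale and self.time_offset attributes seen in the traceback:

from opensimplex import OpenSimplex

noise = OpenSimplex(seed=42)    # seed value is illustrative
scale, time_offset = 0.1, 0.0   # stand-ins for self.scale / self.time_offset

# opensimplex >= 0.4 renamed noise4d -> noise4(x, y, z, w)
u = noise.noise4(1.0 * scale, 2.0 * scale, 3.0 * scale, time_offset)
print(u)  # a float in roughly [-1, 1]

The later test_*.txt files confirm the fix took: the wind vector is non-zero and varies frame to frame.
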
test_physics.py
ADDED
@@ -0,0 +1,43 @@
+import numpy as np
+import time
+from env.drone_3d import Drone3DEnv
+
+def test_wind_physics():
+    print("Testing Wind Physics...")
+    env = Drone3DEnv(render_mode=None, wind_scale=10.0, wind_speed=5.0)
+    env.reset()
+
+    # 1. Test Non-Zero Wind
+    obs, _, _, _, info = env.step(np.zeros(4))  # Hover action (approx)
+    wind_0 = info.get("wind", np.zeros(3))
+    target_0 = info.get("target", np.zeros(3))
+    print(f"Initial Wind Vector: {wind_0}")
+    print(f"Target Location: {target_0}")
+
+    if np.linalg.norm(wind_0) == 0:
+        print("WARNING: Wind vector is zero. Check noise generation.")
+    else:
+        print("SUCCESS: Wind vector is non-zero.")
+
+    # 2. Test Temporal Variation
+    print("Stepping environment to test temporal variation...")
+    winds = []
+    for _ in range(10):
+        _, _, _, _, info = env.step(np.zeros(4))
+        winds.append(info["wind"])
+
+    winds = np.array(winds)
+    # Check if wind changes
+    diffs = np.diff(winds, axis=0)
+    mean_diff = np.mean(np.abs(diffs))
+    print(f"Mean frame-to-frame wind change: {mean_diff:.4f}")
+
+    if mean_diff > 0:
+        print("SUCCESS: Wind varies over time.")
+    else:
+        print("FAILURE: Wind is static.")
+
+    print("Physics Test Complete.")
+
+if __name__ == "__main__":
+    test_wind_physics()
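
test_random_1.txt and test_random_2.txt below print different wind vectors because each run starts from a fresh seed, while test_result_final.txt and test_success.txt happen to share one. A sketch of a seeded variant of the check above, using Gymnasium's standard reset(seed=...) hook; whether Drone3DEnv forwards the seed into its noise field is an assumption here:

import numpy as np
from env.drone_3d import Drone3DEnv

# Pin the run so "Initial Wind Vector" is reproducible across invocations
# (assumes Drone3DEnv threads the reset seed into its wind noise).
env = Drone3DEnv(render_mode=None, wind_scale=10.0, wind_speed=5.0)
env.reset(seed=0)  # standard Gymnasium seeding hook
_, _, _, _, info = env.step(np.zeros(4))
print(f"Initial Wind Vector: {info['wind']}")
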
test_random_1.txt
ADDED
@@ -0,0 +1,8 @@
+Testing Wind Physics...
+Initial Wind Vector: [ 2.32584927 -5.50170588 4.15986258]
+Target Location: [-0.26455097 1.92920148 12.09442991]
+SUCCESS: Wind vector is non-zero.
+Stepping environment to test temporal variation...
+Mean frame-to-frame wind change: 1.1686
+SUCCESS: Wind varies over time.
+Physics Test Complete.
test_random_2.txt
ADDED
@@ -0,0 +1,8 @@
+Testing Wind Physics...
+Initial Wind Vector: [2.32121281 1.08356691 0.06257384]
+Target Location: [ 1.08129528 4.75833083 13.20958488]
+SUCCESS: Wind vector is non-zero.
+Stepping environment to test temporal variation...
+Mean frame-to-frame wind change: 1.0202
+SUCCESS: Wind varies over time.
+Physics Test Complete.
test_result_final.txt
ADDED
@@ -0,0 +1,7 @@
+Testing Wind Physics...
+Initial Wind Vector: [ 1.25934332 -0.34450611 4.63735668]
+SUCCESS: Wind vector is non-zero.
+Stepping environment to test temporal variation...
+Mean frame-to-frame wind change: 1.0813
+SUCCESS: Wind varies over time.
+Physics Test Complete.
test_success.txt
ADDED
@@ -0,0 +1,7 @@
+Testing Wind Physics...
+Initial Wind Vector: [ 1.25934332 -0.34450611 4.63735668]
+SUCCESS: Wind vector is non-zero.
+Stepping environment to test temporal variation...
+Mean frame-to-frame wind change: 1.0813
+SUCCESS: Wind varies over time.
+Physics Test Complete.
train.py
ADDED
@@ -0,0 +1,35 @@
+import os
+import gymnasium as gym
+from stable_baselines3 import PPO
+from stable_baselines3.common.callbacks import CheckpointCallback
+from env.drone_3d import Drone3DEnv
+from models.liquid_ppo import make_liquid_ppo
+
+def train():
+    print("Setting up Training Environment...")
+    # Create environment
+    # We use a lower wind scale for training initially to help it learn
+    env = Drone3DEnv(render_mode=None, wind_scale=2.0, wind_speed=1.0)
+
+    print("Creating Liquid PPO Agent...")
+    model = make_liquid_ppo(env, verbose=1)
+
+    # Create checkpoints directory
+    os.makedirs("checkpoints", exist_ok=True)
+    checkpoint_callback = CheckpointCallback(
+        save_freq=10000,
+        save_path="./checkpoints/",
+        name_prefix="liquid_ppo_drone"
+    )
+
+    print("Starting Training (This may take a while)...")
+    # Training for 500,000 steps (500 episodes) as requested for proper convergence
+    total_timesteps = 500000
+    model.learn(total_timesteps=total_timesteps, callback=checkpoint_callback)
+
+    print("Training Complete.")
+    model.save("liquid_ppo_drone_final")
+    print("Model saved to 'liquid_ppo_drone_final.zip'")
+
+if __name__ == "__main__":
+    train()
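
A usage sketch for the checkpoint this script writes: stable-baselines3's PPO.load restores the zip, and the greedy rollout loop below (illustrative, reusing the same Drone3DEnv arguments as in train()) runs one deterministic episode:

from stable_baselines3 import PPO
from env.drone_3d import Drone3DEnv

# Reload the final checkpoint written by train.py.
env = Drone3DEnv(render_mode=None, wind_scale=2.0, wind_speed=1.0)
model = PPO.load("liquid_ppo_drone_final", env=env)

obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
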
train/__init__.py
ADDED
@@ -0,0 +1,2 @@
+"""Training scripts and utilities."""
+
train/train_liquid_ppo.py
ADDED
@@ -0,0 +1,209 @@
+"""
+Train a PPO agent with Liquid Neural Network policy on the DroneWindEnv environment.
+
+This script uses stable-baselines3 PPO with a Liquid Neural Network feature extractor
+to train an agent to survive and navigate in the 2D drone environment with wind.
+The trained model is saved to models/liquid_policy.zip and TensorBoard logs
+are written to logs/ppo_liquid/.
+"""
+
+import os
+import sys
+import argparse
+from typing import Optional
+import gymnasium as gym
+from stable_baselines3 import PPO
+from stable_baselines3.common.vec_env import DummyVecEnv
+from stable_baselines3.common.monitor import Monitor
+
+# Add project root to path
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from env.drone_env import DroneWindEnv
+from models.liquid_policy import LiquidFeatureExtractor
+
+
+def make_env(seed: Optional[int] = None) -> gym.Env:
+    """
+    Create and wrap a DroneWindEnv instance with Monitor.
+
+    Args:
+        seed: Optional random seed for the environment
+
+    Returns:
+        Wrapped Gymnasium environment
+    """
+    env = DroneWindEnv()
+    env = Monitor(env)
+    if seed is not None:
+        env.reset(seed=seed)
+    return env
+
+
+def make_vec_env(num_envs: int = 4) -> DummyVecEnv:
+    """
+    Create a vectorized environment with multiple parallel instances.
+
+    Args:
+        num_envs: Number of parallel environments
+
+    Returns:
+        Vectorized environment
+    """
+    def make_vec_env_fn(seed: Optional[int] = None):
+        def _init():
+            return make_env(seed)
+        return _init
+
+    vec_env = DummyVecEnv([make_vec_env_fn(seed=i) for i in range(num_envs)])
+    return vec_env
+
+
+def main():
+    """Main training function."""
+    parser = argparse.ArgumentParser(description="Train PPO agent with Liquid NN on DroneWindEnv")
+    parser.add_argument(
+        "--timesteps",
+        type=int,
+        default=100_000,
+        help="Total number of training timesteps (default: 100000)"
+    )
+    parser.add_argument(
+        "--seed",
+        type=int,
+        default=0,
+        help="Random seed (default: 0)"
+    )
+    parser.add_argument(
+        "--logdir",
+        type=str,
+        default="logs/ppo_liquid",
+        help="Directory for TensorBoard logs (default: logs/ppo_liquid)"
+    )
+    parser.add_argument(
+        "--model-path",
+        type=str,
+        default="models/liquid_policy.zip",
+        help="Path to save the trained model (default: models/liquid_policy.zip)"
+    )
+    parser.add_argument(
+        "--num-envs",
+        type=int,
+        default=4,
+        help="Number of parallel environments (default: 4)"
+    )
+    parser.add_argument(
+        "--hidden-size",
+        type=int,
+        default=32,
+        help="Hidden size for liquid cell (default: 32)"
+    )
+    parser.add_argument(
+        "--dt",
+        type=float,
+        default=0.1,
+        help="Time step for liquid cell (default: 0.1)"
+    )
+
+    args = parser.parse_args()
+
+    # Create directories if they don't exist
+    os.makedirs(os.path.dirname(args.model_path), exist_ok=True)
+    os.makedirs(args.logdir, exist_ok=True)
+
+    print("=" * 60)
+    print("Training PPO Agent with Liquid NN on DroneWindEnv")
+    print("=" * 60)
+    print(f"Total timesteps: {args.timesteps:,}")
+    print(f"Number of parallel environments: {args.num_envs}")
+    print(f"Liquid cell hidden size: {args.hidden_size}")
+    print(f"Liquid cell dt: {args.dt}")
+    print(f"Model will be saved to: {args.model_path}")
+    print(f"TensorBoard logs: {args.logdir}")
+    print("=" * 60)
+
+    # Create vectorized environment
+    print("Creating vectorized environment...")
+    vec_env = make_vec_env(num_envs=args.num_envs)
+
+    # Get observation space for feature extractor
+    obs_space = vec_env.observation_space
+
+    # Configure policy with liquid feature extractor
+    policy_kwargs = dict(
+        features_extractor_class=LiquidFeatureExtractor,
+        features_extractor_kwargs=dict(
+            features_dim=args.hidden_size,
+            hidden_size=args.hidden_size,
+            dt=args.dt,
+        ),
+        net_arch=dict(pi=[64], vf=[64]),  # Policy and value heads with 64 hidden units
+    )
+
+    # Create PPO agent
+    print("Initializing PPO agent with Liquid NN...")
+    model = PPO(
+        policy="MlpPolicy",
+        env=vec_env,
+        policy_kwargs=policy_kwargs,
+        n_steps=1024,
+        batch_size=64,
+        gamma=0.99,
+        learning_rate=3e-4,
+        gae_lambda=0.95,
+        clip_range=0.2,
+        ent_coef=0.01,
+        verbose=1,
+        tensorboard_log=args.logdir,
+        seed=args.seed,
+    )
+
+    # Training with curriculum (commented out for now - use fixed mild wind)
+    # For curriculum learning, you could do:
+    #
+    # # Phase 1: Mild wind (0-30k steps)
+    # if args.timesteps > 30000:
+    #     print("Training phase 1: Mild wind (0-30k steps)...")
+    #     model.learn(total_timesteps=30000, progress_bar=True)
+    #
+    #     # Phase 2: Medium wind (30k-60k steps)
+    #     if args.timesteps > 60000:
+    #         print("Training phase 2: Medium wind (30k-60k steps)...")
+    #         # Would need to recreate env with difficulty=1
+    #         model.learn(total_timesteps=30000, progress_bar=True, reset_num_timesteps=False)
+    #
+    #         # Phase 3: Strong wind (60k+ steps)
+    #         if args.timesteps > 60000:
+    #             print("Training phase 3: Strong wind (60k+ steps)...")
+    #             # Would need to recreate env with difficulty=2
+    #             model.learn(total_timesteps=args.timesteps - 60000, progress_bar=True, reset_num_timesteps=False)
+    #         else:
+    #             model.learn(total_timesteps=args.timesteps - 30000, progress_bar=True, reset_num_timesteps=False)
+    # else:
+    #     model.learn(total_timesteps=args.timesteps, progress_bar=True)
+
+    # For now, train on fixed mild wind
+    print("\nStarting training...")
+    model.learn(
+        total_timesteps=args.timesteps,
+        progress_bar=True
+    )
+
+    # Save the model
+    print(f"\nSaving model to {args.model_path}...")
+    model.save(args.model_path)
+
+    print("\n" + "=" * 60)
+    print("Training completed successfully!")
+    print(f"Model saved to: {args.model_path}")
+    print(f"TensorBoard logs available at: {args.logdir}")
+    print("=" * 60)
+    print("\nTo view training progress, run:")
+    print(f"  tensorboard --logdir {args.logdir}")
+    print("\nTo evaluate the model, run:")
+    print(f"  python eval/eval_liquid_policy.py --model-path {args.model_path}")
+
+
+if __name__ == "__main__":
+    main()
+
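
The LiquidFeatureExtractor plugged into features_extractor_class above lives in models/liquid_policy.py, which this part of the diff does not show. As a reference for the interface SB3 expects, here is a minimal sketch; the cell dynamics (learnable time constants, a single Euler step of h' = -h/tau + tanh(Wx + Uh)) are an illustrative liquid-style update, not the project's actual implementation:

import torch
import torch.nn as nn
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class LiquidSketchExtractor(BaseFeaturesExtractor):
    """Illustrative liquid-time-constant cell in SB3's feature-extractor shape."""

    def __init__(self, observation_space, features_dim: int = 32,
                 hidden_size: int = 32, dt: float = 0.1):
        # features_dim is the width SB3 sees downstream; keep it equal to hidden_size.
        super().__init__(observation_space, features_dim)
        self.dt = dt
        self.inp = nn.Linear(observation_space.shape[0], hidden_size)
        self.rec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.tau = nn.Parameter(torch.ones(hidden_size))  # learnable time constants

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # One Euler step from h0 = 0: h <- h + dt * (-h / tau + tanh(W x + U h)).
        h = torch.zeros(obs.shape[0], self.tau.shape[0], device=obs.device)
        dh = -h / (self.tau.abs() + 1e-3) + torch.tanh(self.inp(obs) + self.rec(h))
        return h + self.dt * dh

One caveat with this pattern: PPO's "MlpPolicy" calls the extractor independently per observation, so any hidden state has to be re-derived on each call (as in the stateless step above) rather than carried across timesteps.
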
train/train_mlp_ppo.py
ADDED
@@ -0,0 +1,159 @@
+"""
+Train a PPO agent with MLP policy on the DroneWindEnv environment.
+
+This script uses stable-baselines3 PPO with a 2-layer MLP (64, 64) to train
+an agent to survive and navigate in the 2D drone environment with wind.
+The trained model is saved to models/mlp_baseline.zip and TensorBoard logs
+are written to logs/ppo_mlp/.
+"""
+
+import os
+import sys
+import argparse
+from typing import Optional
+import gymnasium as gym
+from stable_baselines3 import PPO
+from stable_baselines3.common.vec_env import DummyVecEnv
+from stable_baselines3.common.monitor import Monitor
+
+# Add project root to path
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from env.drone_env import DroneWindEnv
+
+
+def make_env(seed: Optional[int] = None) -> gym.Env:
+    """
+    Create and wrap a DroneWindEnv instance with Monitor.
+
+    Args:
+        seed: Optional random seed for the environment
+
+    Returns:
+        Wrapped Gymnasium environment
+    """
+    env = DroneWindEnv()
+    env = Monitor(env)
+    if seed is not None:
+        env.reset(seed=seed)
+    return env
+
+
+def make_vec_env(num_envs: int = 4) -> DummyVecEnv:
+    """
+    Create a vectorized environment with multiple parallel instances.
+
+    Args:
+        num_envs: Number of parallel environments
+
+    Returns:
+        Vectorized environment
+    """
+    def make_vec_env_fn(seed: Optional[int] = None):
+        def _init():
+            return make_env(seed)
+        return _init
+
+    vec_env = DummyVecEnv([make_vec_env_fn(seed=i) for i in range(num_envs)])
+    return vec_env
+
+
+def main():
+    """Main training function."""
+    parser = argparse.ArgumentParser(description="Train PPO agent on DroneWindEnv")
+    parser.add_argument(
+        "--timesteps",
+        type=int,
+        default=100_000,
+        help="Total number of training timesteps (default: 100000)"
+    )
+    parser.add_argument(
+        "--seed",
+        type=int,
+        default=0,
+        help="Random seed (default: 0)"
+    )
+    parser.add_argument(
+        "--logdir",
+        type=str,
+        default="logs/ppo_mlp",
+        help="Directory for TensorBoard logs (default: logs/ppo_mlp)"
+    )
+    parser.add_argument(
+        "--model-path",
+        type=str,
+        default="models/mlp_baseline.zip",
+        help="Path to save the trained model (default: models/mlp_baseline.zip)"
+    )
+    parser.add_argument(
+        "--num-envs",
+        type=int,
+        default=4,
+        help="Number of parallel environments (default: 4)"
+    )
+
+    args = parser.parse_args()
+
+    # Create directories if they don't exist
+    os.makedirs(os.path.dirname(args.model_path), exist_ok=True)
+    os.makedirs(args.logdir, exist_ok=True)
+
+    print("=" * 60)
+    print("Training PPO Agent on DroneWindEnv")
+    print("=" * 60)
+    print(f"Total timesteps: {args.timesteps:,}")
+    print(f"Number of parallel environments: {args.num_envs}")
+    print(f"Model will be saved to: {args.model_path}")
+    print(f"TensorBoard logs: {args.logdir}")
+    print("=" * 60)
+
+    # Create vectorized environment
+    print("Creating vectorized environment...")
+    vec_env = make_vec_env(num_envs=args.num_envs)
+
+    # Configure policy (2-layer MLP with 64 hidden units each)
+    policy_kwargs = dict(net_arch=[64, 64])
+
+    # Create PPO agent
+    print("Initializing PPO agent...")
+    model = PPO(
+        policy="MlpPolicy",
+        env=vec_env,
+        policy_kwargs=policy_kwargs,
+        n_steps=1024,
+        batch_size=64,
+        gamma=0.99,
+        learning_rate=3e-4,
+        gae_lambda=0.95,
+        clip_range=0.2,
+        ent_coef=0.0,
+        verbose=1,
+        tensorboard_log=args.logdir,
+        seed=args.seed,
+    )
+
+    # Train the agent
+    print("\nStarting training...")
+    model.learn(
+        total_timesteps=args.timesteps,
+        progress_bar=True
+    )
+
+    # Save the model
+    print(f"\nSaving model to {args.model_path}...")
+    model.save(args.model_path)
+
+    print("\n" + "=" * 60)
+    print("Training completed successfully!")
+    print(f"Model saved to: {args.model_path}")
+    print(f"TensorBoard logs available at: {args.logdir}")
+    print("=" * 60)
+    print("\nTo view training progress, run:")
+    print(f"  tensorboard --logdir {args.logdir}")
+    print("\nTo evaluate the model, run:")
+    print(f"  python eval/eval_mlp_baseline.py --model-path {args.model_path}")
+
+
+if __name__ == "__main__":
+    main()
+
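
Both training scripts point at the eval/ scripts for evaluation; independently of those, stable-baselines3's generic evaluate_policy helper works on either saved model. A short sketch, using the default model path the script above writes:

from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from env.drone_env import DroneWindEnv

# Mean episodic reward over a handful of episodes for the MLP baseline.
env = Monitor(DroneWindEnv())
model = PPO.load("models/mlp_baseline.zip", env=env)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.1f} +/- {std_reward:.1f}")
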
train_hf.py
ADDED
@@ -0,0 +1,68 @@
+import os
+import argparse
+from huggingface_hub import HfApi, login
+from stable_baselines3 import PPO
+from stable_baselines3.common.callbacks import CheckpointCallback
+from stable_baselines3.common.env_util import make_vec_env
+from stable_baselines3.common.vec_env import SubprocVecEnv
+from env.drone_3d import Drone3DEnv
+from models.liquid_ppo import make_liquid_ppo, LTCFeatureExtractor
+
+def train_hf(repo_id, token, total_timesteps=500000):
+    print(f"Starting HF Training for Repo: {repo_id}")
+
+    # Login to HF
+    if token:
+        login(token=token)
+
+    # Create Optimized Model (Parallel Envs + A100 Tuning)
+    # Note: make_liquid_ppo now handles env creation internally for parallelism
+    model = make_liquid_ppo(None, verbose=1)
+
+    # Checkpoint Callback
+    checkpoint_callback = CheckpointCallback(
+        save_freq=50000,
+        save_path='./checkpoints/',
+        name_prefix='liquid_ppo_drone'
+    )
+
+    print(f"Training for {total_timesteps} steps...")
+    model.learn(total_timesteps=total_timesteps, callback=checkpoint_callback)
+
+    # Save Final Model
+    model_path = "liquid_ppo_drone_final.zip"
+    model.save(model_path)
+    print(f"Model saved to {model_path}")
+
+    # Push to Hub
+    print("Pushing to Hugging Face Hub...")
+    api = HfApi()
+
+    try:
+        # Create repo if it doesn't exist
+        api.create_repo(repo_id=repo_id, exist_ok=True)
+
+        # Upload Model
+        api.upload_file(
+            path_or_fileobj=model_path,
+            path_in_repo="liquid_ppo_drone_final.zip",
+            repo_id=repo_id,
+            repo_type="model"
+        )
+        print("Upload Complete!")
+
+    except Exception as e:
+        print(f"Error uploading to Hub: {e}")
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--repo_id", type=str, required=True, help="HF Repo ID (e.g., username/neuro-flyt-3d)")
+    parser.add_argument("--token", type=str, help="HF Write Token")
+    parser.add_argument("--steps", type=int, default=500000, help="Total training steps")
+
+    args = parser.parse_args()
+
+    # Get token from env var if not provided
+    token = args.token or os.environ.get("HF_TOKEN")
+
+    train_hf(args.repo_id, token, args.steps)
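
The consumer-side counterpart to the upload above: huggingface_hub's hf_hub_download fetches the zip and PPO.load restores it. The repo id below is a placeholder, and models/liquid_ppo.py must be importable so the custom policy classes can be unpickled:

from huggingface_hub import hf_hub_download
from stable_baselines3 import PPO

# Fetch the checkpoint pushed by train_hf.py; "username/neuro-flyt-3d" is a placeholder.
path = hf_hub_download(repo_id="username/neuro-flyt-3d",
                       filename="liquid_ppo_drone_final.zip")
model = PPO.load(path)
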
train_log_500k.txt
ADDED
|
@@ -0,0 +1,1974 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
Setting up Training Environment...
|
| 2 |
+
Creating Liquid PPO Agent...
|
| 3 |
+
Using cpu device
|
| 4 |
+
Wrapping the env with a `Monitor` wrapper
|
| 5 |
+
Wrapping the env in a DummyVecEnv.
|
| 6 |
+
Starting Training (This may take a while)...
|
| 7 |
+
----------------------------------
|
| 8 |
+
| rollout/ | |
|
| 9 |
+
| ep_len_mean | 1e+03 |
|
| 10 |
+
| ep_rew_mean | -2.56e+04 |
|
| 11 |
+
| time/ | |
|
| 12 |
+
| fps | 608 |
|
| 13 |
+
| iterations | 1 |
|
| 14 |
+
| time_elapsed | 3 |
|
| 15 |
+
| total_timesteps | 2048 |
|
| 16 |
+
----------------------------------
|
| 17 |
+
------------------------------------------
|
| 18 |
+
| rollout/ | |
|
| 19 |
+
| ep_len_mean | 1e+03 |
|
| 20 |
+
| ep_rew_mean | -2.23e+04 |
|
| 21 |
+
| time/ | |
|
| 22 |
+
| fps | 177 |
|
| 23 |
+
| iterations | 2 |
|
| 24 |
+
| time_elapsed | 23 |
|
| 25 |
+
| total_timesteps | 4096 |
|
| 26 |
+
| train/ | |
|
| 27 |
+
| approx_kl | 0.0049500195 |
|
| 28 |
+
| clip_fraction | 0.0341 |
|
| 29 |
+
| clip_range | 0.2 |
|
| 30 |
+
| entropy_loss | -5.67 |
|
| 31 |
+
| explained_variance | -9.31e-05 |
|
| 32 |
+
| learning_rate | 0.0003 |
|
| 33 |
+
| loss | 1.15e+05 |
|
| 34 |
+
| n_updates | 10 |
|
| 35 |
+
| policy_gradient_loss | -0.00313 |
|
| 36 |
+
| std | 0.999 |
|
| 37 |
+
| value_loss | 1.99e+05 |
|
| 38 |
+
------------------------------------------
|
| 39 |
+
-----------------------------------------
|
| 40 |
+
| rollout/ | |
|
| 41 |
+
| ep_len_mean | 1e+03 |
|
| 42 |
+
| ep_rew_mean | -2.19e+04 |
|
| 43 |
+
| time/ | |
|
| 44 |
+
| fps | 129 |
|
| 45 |
+
| iterations | 3 |
|
| 46 |
+
| time_elapsed | 47 |
|
| 47 |
+
| total_timesteps | 6144 |
|
| 48 |
+
| train/ | |
|
| 49 |
+
| approx_kl | 0.002073763 |
|
| 50 |
+
| clip_fraction | 0.00542 |
|
| 51 |
+
| clip_range | 0.2 |
|
| 52 |
+
| entropy_loss | -5.67 |
|
| 53 |
+
| explained_variance | 2.62e-05 |
|
| 54 |
+
| learning_rate | 0.0003 |
|
| 55 |
+
| loss | 5.71e+04 |
|
| 56 |
+
| n_updates | 20 |
|
| 57 |
+
| policy_gradient_loss | -0.000381 |
|
| 58 |
+
| std | 0.996 |
|
| 59 |
+
| value_loss | 1.15e+05 |
|
| 60 |
+
-----------------------------------------
|
| 61 |
+
------------------------------------------
|
| 62 |
+
| rollout/ | |
|
| 63 |
+
| ep_len_mean | 1e+03 |
|
| 64 |
+
| ep_rew_mean | -2.28e+04 |
|
| 65 |
+
| time/ | |
|
| 66 |
+
| fps | 116 |
|
| 67 |
+
| iterations | 4 |
|
| 68 |
+
| time_elapsed | 70 |
|
| 69 |
+
| total_timesteps | 8192 |
|
| 70 |
+
| train/ | |
|
| 71 |
+
| approx_kl | 0.0047560623 |
|
| 72 |
+
| clip_fraction | 0.0275 |
|
| 73 |
+
| clip_range | 0.2 |
|
| 74 |
+
| entropy_loss | -5.64 |
|
| 75 |
+
| explained_variance | 8.34e-07 |
|
| 76 |
+
| learning_rate | 0.0003 |
|
| 77 |
+
| loss | 7.31e+04 |
|
| 78 |
+
| n_updates | 30 |
|
| 79 |
+
| policy_gradient_loss | -0.00268 |
|
| 80 |
+
| std | 0.988 |
|
| 81 |
+
| value_loss | 1.42e+05 |
|
| 82 |
+
------------------------------------------
|
| 83 |
+
-----------------------------------------
|
| 84 |
+
| rollout/ | |
|
| 85 |
+
| ep_len_mean | 1e+03 |
|
| 86 |
+
| ep_rew_mean | -2.4e+04 |
|
| 87 |
+
| time/ | |
|
| 88 |
+
| fps | 93 |
|
| 89 |
+
| iterations | 5 |
|
| 90 |
+
| time_elapsed | 109 |
|
| 91 |
+
| total_timesteps | 10240 |
|
| 92 |
+
| train/ | |
|
| 93 |
+
| approx_kl | 0.004183922 |
|
| 94 |
+
| clip_fraction | 0.0234 |
|
| 95 |
+
| clip_range | 0.2 |
|
| 96 |
+
| entropy_loss | -5.62 |
|
| 97 |
+
| explained_variance | -4.77e-07 |
|
| 98 |
+
| learning_rate | 0.0003 |
|
| 99 |
+
| loss | 1.35e+05 |
|
| 100 |
+
| n_updates | 40 |
|
| 101 |
+
| policy_gradient_loss | -0.003 |
|
| 102 |
+
| std | 0.985 |
|
| 103 |
+
| value_loss | 2.21e+05 |
|
| 104 |
+
-----------------------------------------
|
| 105 |
+
-----------------------------------------
|
| 106 |
+
| rollout/ | |
|
| 107 |
+
| ep_len_mean | 1e+03 |
|
| 108 |
+
| ep_rew_mean | -2.28e+04 |
|
| 109 |
+
| time/ | |
|
| 110 |
+
| fps | 98 |
|
| 111 |
+
| iterations | 6 |
|
| 112 |
+
| time_elapsed | 125 |
|
| 113 |
+
| total_timesteps | 12288 |
|
| 114 |
+
| train/ | |
|
| 115 |
+
| approx_kl | 0.005293761 |
|
| 116 |
+
| clip_fraction | 0.0418 |
|
| 117 |
+
| clip_range | 0.2 |
|
| 118 |
+
| entropy_loss | -5.61 |
|
| 119 |
+
| explained_variance | -3.58e-07 |
|
| 120 |
+
| learning_rate | 0.0003 |
|
| 121 |
+
| loss | 1.32e+05 |
|
| 122 |
+
| n_updates | 50 |
|
| 123 |
+
| policy_gradient_loss | -0.00347 |
|
| 124 |
+
| std | 0.985 |
|
| 125 |
+
| value_loss | 2.86e+05 |
|
| 126 |
+
-----------------------------------------
|
| 127 |
+
------------------------------------------
|
| 128 |
+
| rollout/ | |
|
| 129 |
+
| ep_len_mean | 1e+03 |
|
| 130 |
+
| ep_rew_mean | -2.41e+04 |
|
| 131 |
+
| time/ | |
|
| 132 |
+
| fps | 97 |
|
| 133 |
+
| iterations | 7 |
|
| 134 |
+
| time_elapsed | 146 |
|
| 135 |
+
| total_timesteps | 14336 |
|
| 136 |
+
| train/ | |
|
| 137 |
+
| approx_kl | 0.0050999783 |
|
| 138 |
+
| clip_fraction | 0.0295 |
|
| 139 |
+
| clip_range | 0.2 |
|
| 140 |
+
| entropy_loss | -5.59 |
|
| 141 |
+
| explained_variance | -7.15e-07 |
|
| 142 |
+
| learning_rate | 0.0003 |
|
| 143 |
+
| loss | 4.66e+04 |
|
| 144 |
+
| n_updates | 60 |
|
| 145 |
+
| policy_gradient_loss | -0.00355 |
|
| 146 |
+
| std | 0.974 |
|
| 147 |
+
| value_loss | 8.14e+04 |
|
| 148 |
+
------------------------------------------
|
| 149 |
+
------------------------------------------
|
| 150 |
+
| rollout/ | |
|
| 151 |
+
| ep_len_mean | 1e+03 |
|
| 152 |
+
| ep_rew_mean | -2.6e+04 |
|
| 153 |
+
| time/ | |
|
| 154 |
+
| fps | 103 |
|
| 155 |
+
| iterations | 8 |
|
| 156 |
+
| time_elapsed | 158 |
|
| 157 |
+
| total_timesteps | 16384 |
|
| 158 |
+
| train/ | |
|
| 159 |
+
| approx_kl | 0.0042739166 |
|
| 160 |
+
| clip_fraction | 0.0147 |
|
| 161 |
+
| clip_range | 0.2 |
|
| 162 |
+
| entropy_loss | -5.57 |
|
| 163 |
+
| explained_variance | 0 |
|
| 164 |
+
| learning_rate | 0.0003 |
|
| 165 |
+
| loss | 1.64e+05 |
|
| 166 |
+
| n_updates | 70 |
|
| 167 |
+
| policy_gradient_loss | -0.0019 |
|
| 168 |
+
| std | 0.972 |
|
| 169 |
+
| value_loss | 3.31e+05 |
|
| 170 |
+
------------------------------------------
|
| 171 |
+
------------------------------------------
|
| 172 |
+
| rollout/ | |
|
| 173 |
+
| ep_len_mean | 1e+03 |
|
| 174 |
+
| ep_rew_mean | -2.7e+04 |
|
| 175 |
+
| time/ | |
|
| 176 |
+
| fps | 108 |
|
| 177 |
+
| iterations | 9 |
|
| 178 |
+
| time_elapsed | 170 |
|
| 179 |
+
| total_timesteps | 18432 |
|
| 180 |
+
| train/ | |
|
| 181 |
+
| approx_kl | 0.0053871158 |
|
| 182 |
+
| clip_fraction | 0.0297 |
|
| 183 |
+
| clip_range | 0.2 |
|
| 184 |
+
| entropy_loss | -5.56 |
|
| 185 |
+
| explained_variance | 0 |
|
| 186 |
+
| learning_rate | 0.0003 |
|
| 187 |
+
| loss | 2.43e+05 |
|
| 188 |
+
| n_updates | 80 |
|
| 189 |
+
| policy_gradient_loss | -0.00304 |
|
| 190 |
+
| std | 0.972 |
|
| 191 |
+
| value_loss | 5.33e+05 |
|
| 192 |
+
------------------------------------------
|
| 193 |
+
------------------------------------------
|
| 194 |
+
| rollout/ | |
|
| 195 |
+
| ep_len_mean | 1e+03 |
|
| 196 |
+
| ep_rew_mean | -2.81e+04 |
|
| 197 |
+
| time/ | |
|
| 198 |
+
| fps | 112 |
|
| 199 |
+
| iterations | 10 |
|
| 200 |
+
| time_elapsed | 181 |
|
| 201 |
+
| total_timesteps | 20480 |
|
| 202 |
+
| train/ | |
|
| 203 |
+
| approx_kl | 0.0035741455 |
|
| 204 |
+
| clip_fraction | 0.0138 |
|
| 205 |
+
| clip_range | 0.2 |
|
| 206 |
+
| entropy_loss | -5.56 |
|
| 207 |
+
| explained_variance | -1.19e-07 |
|
| 208 |
+
| learning_rate | 0.0003 |
|
| 209 |
+
| loss | 1.74e+05 |
|
| 210 |
+
| n_updates | 90 |
|
| 211 |
+
| policy_gradient_loss | -0.00138 |
|
| 212 |
+
| std | 0.971 |
|
| 213 |
+
| value_loss | 3.49e+05 |
|
| 214 |
+
------------------------------------------
|
| 215 |
+
-----------------------------------------
|
| 216 |
+
| rollout/ | |
|
| 217 |
+
| ep_len_mean | 1e+03 |
|
| 218 |
+
| ep_rew_mean | -2.79e+04 |
|
| 219 |
+
| time/ | |
|
| 220 |
+
| fps | 116 |
|
| 221 |
+
| iterations | 11 |
|
| 222 |
+
| time_elapsed | 193 |
|
| 223 |
+
| total_timesteps | 22528 |
|
| 224 |
+
| train/ | |
|
| 225 |
+
| approx_kl | 0.004108442 |
|
| 226 |
+
| clip_fraction | 0.0245 |
|
| 227 |
+
| clip_range | 0.2 |
|
| 228 |
+
| entropy_loss | -5.56 |
|
| 229 |
+
| explained_variance | 0 |
|
| 230 |
+
| learning_rate | 0.0003 |
|
| 231 |
+
| loss | 2.53e+05 |
|
| 232 |
+
| n_updates | 100 |
|
| 233 |
+
| policy_gradient_loss | -0.00274 |
|
| 234 |
+
| std | 0.971 |
|
| 235 |
+
| value_loss | 5.84e+05 |
|
| 236 |
+
-----------------------------------------
|
| 237 |
+
------------------------------------------
|
| 238 |
+
| rollout/ | |
|
| 239 |
+
| ep_len_mean | 1e+03 |
|
| 240 |
+
| ep_rew_mean | -2.84e+04 |
|
| 241 |
+
| time/ | |
|
| 242 |
+
| fps | 119 |
|
| 243 |
+
| iterations | 12 |
|
| 244 |
+
| time_elapsed | 205 |
|
| 245 |
+
| total_timesteps | 24576 |
|
| 246 |
+
| train/ | |
|
| 247 |
+
| approx_kl | 0.0057261223 |
|
| 248 |
+
| clip_fraction | 0.0375 |
|
| 249 |
+
| clip_range | 0.2 |
|
| 250 |
+
| entropy_loss | -5.53 |
|
| 251 |
+
| explained_variance | -1.19e-07 |
|
| 252 |
+
| learning_rate | 0.0003 |
|
| 253 |
+
| loss | 8.12e+04 |
|
| 254 |
+
| n_updates | 110 |
|
| 255 |
+
| policy_gradient_loss | -0.003 |
|
| 256 |
+
| std | 0.96 |
|
| 257 |
+
| value_loss | 1.72e+05 |
|
| 258 |
+
------------------------------------------
|
| 259 |
+
------------------------------------------
|
| 260 |
+
| rollout/ | |
|
| 261 |
+
| ep_len_mean | 1e+03 |
|
| 262 |
+
| ep_rew_mean | -2.82e+04 |
|
| 263 |
+
| time/ | |
|
| 264 |
+
| fps | 122 |
|
| 265 |
+
| iterations | 13 |
|
| 266 |
+
| time_elapsed | 217 |
|
| 267 |
+
| total_timesteps | 26624 |
|
| 268 |
+
| train/ | |
|
| 269 |
+
| approx_kl | 0.0051155365 |
|
| 270 |
+
| clip_fraction | 0.0225 |
|
| 271 |
+
| clip_range | 0.2 |
|
| 272 |
+
| entropy_loss | -5.5 |
|
| 273 |
+
| explained_variance | 0 |
|
| 274 |
+
| learning_rate | 0.0003 |
|
| 275 |
+
| loss | 2.4e+05 |
|
| 276 |
+
| n_updates | 120 |
|
| 277 |
+
| policy_gradient_loss | -0.00331 |
|
| 278 |
+
| std | 0.955 |
|
| 279 |
+
| value_loss | 4.7e+05 |
|
| 280 |
+
------------------------------------------
|
| 281 |
+
-----------------------------------------
|
| 282 |
+
| rollout/ | |
|
| 283 |
+
| ep_len_mean | 1e+03 |
|
| 284 |
+
| ep_rew_mean | -2.83e+04 |
|
| 285 |
+
| time/ | |
|
| 286 |
+
| fps | 125 |
|
| 287 |
+
| iterations | 14 |
|
| 288 |
+
| time_elapsed | 228 |
|
| 289 |
+
| total_timesteps | 28672 |
|
| 290 |
+
| train/ | |
|
| 291 |
+
| approx_kl | 0.005322621 |
|
| 292 |
+
| clip_fraction | 0.042 |
|
| 293 |
+
| clip_range | 0.2 |
|
| 294 |
+
| entropy_loss | -5.48 |
|
| 295 |
+
| explained_variance | 0 |
|
| 296 |
+
| learning_rate | 0.0003 |
|
| 297 |
+
| loss | 1.33e+05 |
|
| 298 |
+
| n_updates | 130 |
|
| 299 |
+
| policy_gradient_loss | -0.00404 |
|
| 300 |
+
| std | 0.952 |
|
| 301 |
+
| value_loss | 2.58e+05 |
|
| 302 |
+
-----------------------------------------
|
| 303 |
+
-----------------------------------------
|
| 304 |
+
| rollout/ | |
|
| 305 |
+
| ep_len_mean | 1e+03 |
|
| 306 |
+
| ep_rew_mean | -2.78e+04 |
|
| 307 |
+
| time/ | |
|
| 308 |
+
| fps | 127 |
|
| 309 |
+
| iterations | 15 |
|
| 310 |
+
| time_elapsed | 240 |
|
| 311 |
+
| total_timesteps | 30720 |
|
| 312 |
+
| train/ | |
|
| 313 |
+
| approx_kl | 0.006120109 |
|
| 314 |
+
| clip_fraction | 0.0588 |
|
| 315 |
+
| clip_range | 0.2 |
|
| 316 |
+
| entropy_loss | -5.46 |
|
| 317 |
+
| explained_variance | 0 |
|
| 318 |
+
| learning_rate | 0.0003 |
|
| 319 |
+
| loss | 1.32e+05 |
|
| 320 |
+
| n_updates | 140 |
|
| 321 |
+
| policy_gradient_loss | -0.00445 |
|
| 322 |
+
| std | 0.942 |
|
| 323 |
+
| value_loss | 2.73e+05 |
|
| 324 |
+
-----------------------------------------
|
| 325 |
+
-----------------------------------------
|
| 326 |
+
| rollout/ | |
|
| 327 |
+
| ep_len_mean | 1e+03 |
|
| 328 |
+
| ep_rew_mean | -2.79e+04 |
|
| 329 |
+
| time/ | |
|
| 330 |
+
| fps | 129 |
|
| 331 |
+
| iterations | 16 |
|
| 332 |
+
| time_elapsed | 253 |
|
| 333 |
+
| total_timesteps | 32768 |
|
| 334 |
+
| train/ | |
|
| 335 |
+
| approx_kl | 0.004814163 |
|
| 336 |
+
| clip_fraction | 0.0178 |
|
| 337 |
+
| clip_range | 0.2 |
|
| 338 |
+
| entropy_loss | -5.43 |
|
| 339 |
+
| explained_variance | 1.19e-07 |
|
| 340 |
+
| learning_rate | 0.0003 |
|
| 341 |
+
| loss | 5.79e+04 |
|
| 342 |
+
| n_updates | 150 |
|
| 343 |
+
| policy_gradient_loss | -0.000932 |
|
| 344 |
+
| std | 0.939 |
|
| 345 |
+
| value_loss | 1.42e+05 |
|
| 346 |
+
-----------------------------------------
|
| 347 |
+
------------------------------------------
|
| 348 |
+
| rollout/ | |
|
| 349 |
+
| ep_len_mean | 1e+03 |
|
| 350 |
+
| ep_rew_mean | -2.82e+04 |
|
| 351 |
+
| time/ | |
|
| 352 |
+
| fps | 130 |
|
| 353 |
+
| iterations | 17 |
|
| 354 |
+
| time_elapsed | 266 |
|
| 355 |
+
| total_timesteps | 34816 |
|
| 356 |
+
| train/ | |
|
| 357 |
+
| approx_kl | 0.0027875581 |
|
| 358 |
+
| clip_fraction | 0.0163 |
|
| 359 |
+
| clip_range | 0.2 |
|
| 360 |
+
| entropy_loss | -5.43 |
|
| 361 |
+
| explained_variance | 1.19e-07 |
|
| 362 |
+
| learning_rate | 0.0003 |
|
| 363 |
+
| loss | 1.62e+05 |
|
| 364 |
+
| n_updates | 160 |
|
| 365 |
+
| policy_gradient_loss | -0.00144 |
|
| 366 |
+
| std | 0.94 |
|
| 367 |
+
| value_loss | 3.25e+05 |
|
| 368 |
+
------------------------------------------
|
| 369 |
+
------------------------------------------
|
| 370 |
+
| rollout/ | |
|
| 371 |
+
| ep_len_mean | 1e+03 |
|
| 372 |
+
| ep_rew_mean | -2.84e+04 |
|
| 373 |
+
| time/ | |
|
| 374 |
+
| fps | 132 |
|
| 375 |
+
| iterations | 18 |
|
| 376 |
+
| time_elapsed | 277 |
|
| 377 |
+
| total_timesteps | 36864 |
|
| 378 |
+
| train/ | |
|
| 379 |
+
| approx_kl | 0.0035902325 |
|
| 380 |
+
| clip_fraction | 0.0154 |
|
| 381 |
+
| clip_range | 0.2 |
|
| 382 |
+
| entropy_loss | -5.43 |
|
| 383 |
+
| explained_variance | 1.19e-07 |
|
| 384 |
+
| learning_rate | 0.0003 |
|
| 385 |
+
| loss | 1.59e+05 |
|
| 386 |
+
| n_updates | 170 |
|
| 387 |
+
| policy_gradient_loss | -0.00172 |
|
| 388 |
+
| std | 0.942 |
|
| 389 |
+
| value_loss | 3.88e+05 |
|
| 390 |
+
------------------------------------------
|
| 391 |
+
------------------------------------------
|
| 392 |
+
| rollout/ | |
|
| 393 |
+
| ep_len_mean | 1e+03 |
|
| 394 |
+
| ep_rew_mean | -2.83e+04 |
|
| 395 |
+
| time/ | |
|
| 396 |
+
| fps | 134 |
|
| 397 |
+
| iterations | 19 |
|
| 398 |
+
| time_elapsed | 289 |
|
| 399 |
+
| total_timesteps | 38912 |
|
| 400 |
+
| train/ | |
|
| 401 |
+
| approx_kl | 0.0044348813 |
|
| 402 |
+
| clip_fraction | 0.025 |
|
| 403 |
+
| clip_range | 0.2 |
|
| 404 |
+
| entropy_loss | -5.44 |
|
| 405 |
+
| explained_variance | -1.19e-07 |
|
| 406 |
+
| learning_rate | 0.0003 |
|
| 407 |
+
| loss | 1.36e+05 |
|
| 408 |
+
| n_updates | 180 |
|
| 409 |
+
| policy_gradient_loss | -0.00187 |
|
| 410 |
+
| std | 0.943 |
|
| 411 |
+
| value_loss | 2.4e+05 |
|
| 412 |
+
------------------------------------------
|
| 413 |
+
-----------------------------------------
|
| 414 |
+
| rollout/ | |
|
| 415 |
+
| ep_len_mean | 1e+03 |
|
| 416 |
+
| ep_rew_mean | -2.77e+04 |
|
| 417 |
+
| time/ | |
|
| 418 |
+
| fps | 136 |
|
| 419 |
+
| iterations | 20 |
|
| 420 |
+
| time_elapsed | 300 |
|
| 421 |
+
| total_timesteps | 40960 |
|
| 422 |
+
| train/ | |
|
| 423 |
+
| approx_kl | 0.003115609 |
|
| 424 |
+
| clip_fraction | 0.0162 |
|
| 425 |
+
| clip_range | 0.2 |
|
| 426 |
+
| entropy_loss | -5.42 |
|
| 427 |
+
| explained_variance | 0 |
|
| 428 |
+
| learning_rate | 0.0003 |
|
| 429 |
+
| loss | 1.02e+05 |
|
| 430 |
+
| n_updates | 190 |
|
| 431 |
+
| policy_gradient_loss | -0.00152 |
|
| 432 |
+
| std | 0.936 |
|
| 433 |
+
| value_loss | 2.02e+05 |
|
| 434 |
+
-----------------------------------------
|
| 435 |
+
------------------------------------------
|
| 436 |
+
| rollout/ | |
|
| 437 |
+
| ep_len_mean | 1e+03 |
|
| 438 |
+
| ep_rew_mean | -2.72e+04 |
|
| 439 |
+
| time/ | |
|
| 440 |
+
| fps | 136 |
|
| 441 |
+
| iterations | 21 |
|
| 442 |
+
| time_elapsed | 314 |
|
| 443 |
+
| total_timesteps | 43008 |
|
| 444 |
+
| train/ | |
|
| 445 |
+
| approx_kl | 0.0044121114 |
|
| 446 |
+
| clip_fraction | 0.0311 |
|
| 447 |
+
| clip_range | 0.2 |
|
| 448 |
+
| entropy_loss | -5.42 |
|
| 449 |
+
| explained_variance | -1.19e-07 |
|
| 450 |
+
| learning_rate | 0.0003 |
|
(tail of the iteration-21 table: loss 4.9e+04 | n_updates 200 | policy_gradient_loss -0.00261 | std 0.941 | value_loss 1.09e+05)

train_log_500k.txt (continued): iterations 22-90, one row per Stable-Baselines3 log block. Constant in every block: ep_len_mean = 1e+03, clip_range = 0.2, learning_rate = 0.0003, explained_variance at numerical zero (|value| <= 2.38e-07), and n_updates = 10 * (iteration - 1).

| iter | timesteps | time (s) | fps | ep_rew_mean | approx_kl    | clip_fraction | entropy_loss | loss     | policy_gradient_loss | std   | value_loss |
|------|-----------|----------|-----|-------------|--------------|---------------|--------------|----------|----------------------|-------|------------|
| 22   | 45056     | 326      | 138 | -2.69e+04   | 0.0050966754 | 0.0294        | -5.41        | 5.13e+04 | -0.00221             | 0.933 | 1.11e+05   |
| 23   | 47104     | 337      | 139 | -2.71e+04   | 0.0042023044 | 0.0154        | -5.4         | 5.53e+04 | -0.000932            | 0.934 | 1.32e+05   |
| 24   | 49152     | 348      | 140 | -2.74e+04   | 0.0060270163 | 0.0548        | -5.4         | 1.27e+05 | -0.00514             | 0.932 | 2.93e+05   |
| 25   | 51200     | 361      | 141 | -2.77e+04   | 0.003641401  | 0.0161        | -5.4         | 1.7e+05  | -0.00216             | 0.937 | 3.48e+05   |
| 26   | 53248     | 372      | 142 | -2.79e+04   | 0.0040730843 | 0.0225        | -5.41        | 2.05e+05 | -0.00147             | 0.934 | 4.28e+05   |
| 27   | 55296     | 384      | 143 | -2.8e+04    | 0.003144626  | 0.00791       | -5.42        | 1.71e+05 | -0.00195             | 0.94  | 3.93e+05   |
| 28   | 57344     | 397      | 144 | -2.83e+04   | 0.0052720373 | 0.0272        | -5.42        | 1.75e+05 | -0.00242             | 0.935 | 3.07e+05   |
| 29   | 59392     | 409      | 145 | -2.79e+04   | 0.0041839215 | 0.0244        | -5.4         | 1.79e+05 | -0.00283             | 0.933 | 3.86e+05   |
| 30   | 61440     | 421      | 145 | -2.8e+04    | 0.0053371564 | 0.0308        | -5.35        | 3.19e+04 | -0.00282             | 0.915 | 6.26e+04   |
| 31   | 63488     | 433      | 146 | -2.78e+04   | 0.0045930664 | 0.0416        | -5.31        | 1.66e+05 | -0.00376             | 0.913 | 3.15e+05   |
| 32   | 65536     | 445      | 147 | -2.77e+04   | 0.006433362  | 0.0423        | -5.29        | 7.14e+04 | -0.00386             | 0.906 | 1.45e+05   |
| 33   | 67584     | 457      | 147 | -2.74e+04   | 0.0060111308 | 0.0567        | -5.27        | 8.6e+04  | -0.004               | 0.904 | 1.9e+05    |
| 34   | 69632     | 469      | 148 | -2.71e+04   | 0.002752479  | 0.0267        | -5.28        | 3.72e+04 | -0.000776            | 0.909 | 5.9e+04    |
| 35   | 71680     | 480      | 149 | -2.7e+04    | 0.004144692  | 0.0243        | -5.29        | 4.48e+04 | -0.00131             | 0.907 | 1.06e+05   |
| 36   | 73728     | 492      | 149 | -2.69e+04   | 0.0060227686 | 0.043         | -5.28        | 9.28e+04 | -0.00293             | 0.903 | 1.91e+05   |
| 37   | 75776     | 504      | 150 | -2.68e+04   | 0.003745494  | 0.0147        | -5.29        | 6.81e+04 | -0.00168             | 0.91  | 1.31e+05   |
| 38   | 77824     | 516      | 150 | -2.71e+04   | 0.0039524576 | 0.0286        | -5.3         | 1.13e+05 | -0.00305             | 0.909 | 2.44e+05   |
| 39   | 79872     | 529      | 150 | -2.67e+04   | 0.005160669  | 0.0254        | -5.29        | 1.6e+05  | -0.00292             | 0.907 | 2.55e+05   |
| 40   | 81920     | 540      | 151 | -2.66e+04   | 0.0046265204 | 0.0285        | -5.27        | 2.09e+04 | -0.00145             | 0.902 | 3.81e+04   |
| 41   | 83968     | 553      | 151 | -2.67e+04   | 0.0042863134 | 0.0239        | -5.26        | 9.31e+04 | -0.00143             | 0.9   | 1.85e+05   |
| 42   | 86016     | 566      | 151 | -2.68e+04   | 0.005065168  | 0.0253        | -5.24        | 1.1e+05  | -0.0026              | 0.894 | 2.2e+05    |
| 43   | 88064     | 577      | 152 | -2.67e+04   | 0.0030657728 | 0.0121        | -5.22        | 1.52e+05 | -0.00173             | 0.892 | 3.03e+05   |
| 44   | 90112     | 588      | 153 | -2.73e+04   | 0.0051104454 | 0.0357        | -5.22        | 8.25e+04 | -0.00201             | 0.893 | 1.81e+05   |
| 45   | 92160     | 600      | 153 | -2.8e+04    | 0.0051720007 | 0.033         | -5.23        | 3.47e+05 | -0.00428             | 0.896 | 7.59e+05   |
| 46   | 94208     | 612      | 153 | -2.9e+04    | 0.004487371  | 0.0192        | -5.23        | 6.84e+05 | -0.00245             | 0.895 | 1.3e+06    |
| 47   | 96256     | 625      | 153 | -2.98e+04   | 0.005325151  | 0.0271        | -5.22        | 9.17e+05 | -0.00341             | 0.892 | 1.81e+06   |
| 48   | 98304     | 637      | 154 | -3.08e+04   | 0.004731435  | 0.0245        | -5.21        | 8.34e+05 | -0.00319             | 0.888 | 1.53e+06   |
| 49   | 100352    | 648      | 154 | -3.21e+04   | 0.00386829   | 0.00859       | -5.2         | 1.05e+06 | -0.00151             | 0.887 | 2.04e+06   |
| 50   | 102400    | 660      | 154 | -3.34e+04   | 0.005242249  | 0.0372        | -5.2         | 1.95e+06 | -0.00506             | 0.889 | 3.17e+06   |
| 51   | 104448    | 673      | 155 | -3.5e+04    | 0.003204999  | 0.00566       | -5.21        | 1.29e+06 | -0.000996            | 0.89  | 3.12e+06   |
| 52   | 106496    | 685      | 155 | -3.71e+04   | 0.0037713286 | 0.0106        | -5.2         | 1.73e+06 | -0.00205             | 0.889 | 3.39e+06   |
| 53   | 108544    | 699      | 155 | -3.89e+04   | 0.003621605  | 0.00576       | -5.2         | 2.11e+06 | -0.00108             | 0.889 | 5.23e+06   |
| 54   | 110592    | 710      | 155 | -4.07e+04   | 0.0037987605 | 0.0108        | -5.21        | 2.02e+06 | -0.00193             | 0.891 | 4.78e+06   |
| 55   | 112640    | 722      | 155 | -4.31e+04   | 0.0041659893 | 0.00801       | -5.21        | 2.27e+06 | -0.00111             | 0.89  | 5e+06      |
| 56   | 114688    | 734      | 156 | -4.52e+04   | 0.0052787326 | 0.0296        | -5.21        | 3.7e+06  | -0.00406             | 0.889 | 6.79e+06   |
| 57   | 116736    | 746      | 156 | -4.67e+04   | 0.004433933  | 0.0184        | -5.2         | 2.93e+06 | -0.00222             | 0.888 | 6.15e+06   |
| 58   | 118784    | 758      | 156 | -4.83e+04   | 0.004643922  | 0.025         | -5.2         | 2.54e+06 | -0.00334             | 0.888 | 4.8e+06    |
| 59   | 120832    | 769      | 156 | -4.99e+04   | 0.004279623  | 0.00815       | -5.2         | 2.58e+06 | -0.00133             | 0.89  | 4.7e+06    |
| 60   | 122880    | 782      | 157 | -5.18e+04   | 0.004928913  | 0.0252        | -5.2         | 2.06e+06 | -0.00314             | 0.886 | 4.64e+06   |
| 61   | 124928    | 793      | 157 | -5.37e+04   | 0.0044577485 | 0.0167        | -5.19        | 2.75e+06 | -0.00216             | 0.884 | 5.37e+06   |
| 62   | 126976    | 805      | 157 | -5.54e+04   | 0.0033779903 | 0.00854       | -5.19        | 2.75e+06 | -0.000881            | 0.887 | 5.11e+06   |
| 63   | 129024    | 817      | 157 | -5.84e+04   | 0.0036522774 | 0.00669       | -5.2         | 2.36e+06 | -0.00121             | 0.889 | 5.49e+06   |
| 64   | 131072    | 829      | 158 | -6.02e+04   | 0.005089692  | 0.0314        | -5.2         | 2.64e+06 | -0.00405             | 0.887 | 5.26e+06   |
| 65   | 133120    | 841      | 158 | -6.17e+04   | 0.004889611  | 0.0176        | -5.2         | 1.92e+06 | -0.00247             | 0.888 | 4.65e+06   |
| 66   | 135168    | 853      | 158 | -6.28e+04   | 0.004658374  | 0.021         | -5.19        | 1.67e+06 | -0.00222             | 0.886 | 3.85e+06   |
| 67   | 137216    | 866      | 158 | -6.38e+04   | 0.00569404   | 0.0394        | -5.19        | 1.44e+06 | -0.00414             | 0.887 | 2.78e+06   |
| 68   | 139264    | 887      | 157 | -6.47e+04   | 0.004979359  | 0.0314        | -5.19        | 7.57e+05 | -0.00347             | 0.883 | 1.69e+06   |
| 69   | 141312    | 911      | 155 | -6.6e+04    | 0.003934146  | 0.0181        | -5.17        | 8.59e+05 | -0.002               | 0.882 | 1.66e+06   |
| 70   | 143360    | 929      | 154 | -6.73e+04   | 0.00488944   | 0.0386        | -5.17        | 1.07e+06 | -0.00419             | 0.879 | 2.26e+06   |
| 71   | 145408    | 956      | 151 | -6.88e+04   | 0.0039507896 | 0.026         | -5.16        | 1.23e+06 | -0.00263             | 0.879 | 2.33e+06   |
| 72   | 147456    | 974      | 151 | -7e+04      | 0.0048819017 | 0.0321        | -5.16        | 1.66e+06 | -0.00324             | 0.877 | 3.4e+06    |
| 73   | 149504    | 994      | 150 | -7.08e+04   | 0.0051534167 | 0.0276        | -5.15        | 1.15e+06 | -0.0034              | 0.877 | 2.73e+06   |
| 74   | 151552    | 1009     | 150 | -7.17e+04   | 0.0039522136 | 0.0215        | -5.17        | 8.39e+05 | -0.00278             | 0.885 | 1.82e+06   |
| 75   | 153600    | 1023     | 150 | -7.25e+04   | 0.0037896927 | 0.0131        | -5.18        | 1.18e+06 | -0.00183             | 0.883 | 2.15e+06   |
| 76   | 155648    | 1038     | 149 | -7.33e+04   | 0.005035511  | 0.0261        | -5.16        | 6.91e+05 | -0.00347             | 0.877 | 1.49e+06   |
| 77   | 157696    | 1052     | 149 | -7.45e+04   | 0.005323178  | 0.0373        | -5.15        | 8.95e+05 | -0.00419             | 0.876 | 1.9e+06    |
| 78   | 159744    | 1071     | 149 | -7.61e+04   | 0.005339088  | 0.0279        | -5.13        | 1.56e+06 | -0.00348             | 0.871 | 3.12e+06   |
| 79   | 161792    | 1088     | 148 | -7.72e+04   | 0.0021434259 | 0.00132       | -5.13        | 1.39e+06 | -0.000295            | 0.874 | 3.14e+06   |
| 80   | 163840    | 1106     | 148 | -7.88e+04   | 0.004515986  | 0.0185        | -5.13        | 1.26e+06 | -0.00181             | 0.872 | 2.6e+06    |
| 81   | 165888    | 1125     | 147 | -8.02e+04   | 0.004079287  | 0.0139        | -5.13        | 1.86e+06 | -0.00189             | 0.873 | 3.26e+06   |
| 82   | 167936    | 1147     | 146 | -8.2e+04    | 0.004386483  | 0.0127        | -5.13        | 1.87e+06 | -0.00205             | 0.873 | 3.56e+06   |
| 83   | 169984    | 1167     | 145 | -8.37e+04   | 0.0041227336 | 0.0245        | -5.14        | 1.95e+06 | -0.00324             | 0.873 | 3.58e+06   |
| 84   | 172032    | 1180     | 145 | -8.62e+04   | 0.00430945   | 0.0171        | -5.13        | 2.18e+06 | -0.0021              | 0.872 | 3.97e+06   |
| 85   | 174080    | 1198     | 145 | -8.81e+04   | 0.0027071913 | 0.0043        | -5.14        | 1.6e+06  | -0.000301            | 0.876 | 3.94e+06   |
| 86   | 176128    | 1218     | 144 | -8.97e+04   | 0.003243664  | 0.0133        | -5.15        | 2.15e+06 | -0.00204             | 0.878 | 4.5e+06    |
| 87   | 178176    | 1239     | 143 | -9.15e+04   | 0.0029459428 | 0.00308       | -5.16        | 2.44e+06 | -0.000751            | 0.88  | 4.24e+06   |
| 88   | 180224    | 1258     | 143 | -9.36e+04   | 0.0043680565 | 0.0208        | -5.17        | 2.09e+06 | -0.00237             | 0.881 | 4.39e+06   |
| 89   | 182272    | 1283     | 142 | -9.52e+04   | 0.004063189  | 0.012         | -5.16        | 2.77e+06 | -0.00207             | 0.877 | 4.73e+06   |
| 90   | 184320    | 1306     | 141 | -9.69e+04   | 0.0049707354 | 0.0279        | -5.13        | 1.65e+06 | -0.00372             | 0.871 | 3.68e+06   |
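This run never learns: ep_rew_mean drifts from about -2.66e+04 down to -9.69e+04 while value_loss climbs into the millions and explained_variance stays pinned at numerical zero, meaning the critic explains none of the return variance. Episode returns on the order of 1e+04 to 1e+05 are far too large for a freshly initialized value head, so one plausible mitigation is to normalize observations and rewards before PPO sees them. A minimal sketch using Stable-Baselines3's VecNormalize; `make_drone_env` is a hypothetical factory (with a placeholder Gymnasium env so the sketch runs as-is), not this project's actual constructor:

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Hypothetical factory: substitute the project's own drone environment here.
def make_drone_env():
    import gymnasium as gym
    return gym.make("Pendulum-v1")  # placeholder so the sketch runs as-is

venv = DummyVecEnv([make_drone_env])
# Normalize observations and rewards so the critic's targets stay O(1);
# clip_reward caps outliers that would otherwise dominate value_loss.
venv = VecNormalize(venv, norm_obs=True, norm_reward=True, clip_reward=10.0)

model = PPO("MlpPolicy", venv, verbose=1)
model.learn(total_timesteps=100_000)
# The running normalization statistics must be saved alongside the model,
# otherwise evaluation sees differently scaled observations.
venv.save("vecnormalize.pkl")

Whether normalization alone rescues this particular run is an open question; the environment's reward scale itself may also need revisiting.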
train_log_full.txt
ADDED
@@ -0,0 +1,64 @@
Setting up Training Environment...
Creating Liquid PPO Agent...
Using cpu device
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Starting Training (This may take a while)...
----------------------------------
| rollout/           |           |
|    ep_len_mean     | 1e+03     |
|    ep_rew_mean     | -2.12e+04 |
| time/              |           |
|    fps             | 464       |
|    iterations      | 1         |
|    time_elapsed    | 4         |
|    total_timesteps | 2048      |
----------------------------------
Traceback (most recent call last):
  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/train.py", line 35, in <module>
    train()
    ~~~~~^^
  File "/home/ylop/Documents/drone go brr/Drone-go-brrrrr/Drone-go-brrrrr/train.py", line 28, in train
    model.learn(total_timesteps=total_timesteps, callback=checkpoint_callback)
    ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/ppo/ppo.py", line 311, in learn
    return super().learn(
           ~~~~~~~~~~~~~^
        total_timesteps=total_timesteps,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<4 lines>...
        progress_bar=progress_bar,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 337, in learn
    self.train()
    ~~~~~~~~~~^^
  File "/home/ylop/.local/lib/python3.14/site-packages/stable_baselines3/ppo/ppo.py", line 275, in train
    loss.backward()
    ~~~~~~~~~~~~~^^
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/_tensor.py", line 625, in backward
    torch.autograd.backward(
    ~~~~~~~~~~~~~~~~~~~~~~~^
        self, gradient, retain_graph, create_graph, inputs=inputs
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/autograd/__init__.py", line 354, in backward
    _engine_run_backward(
    ~~~~~~~~~~~~~~~~~~~~^
        tensors,
        ^^^^^^^^
    ...<5 lines>...
        accumulate_grad=True,
        ^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/home/ylop/.local/lib/python3.14/site-packages/torch/autograd/graph.py", line 841, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        t_outputs, *args, **kwargs
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
    )  # Calls into the C++ engine to run the backward pass
    ^
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
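This RuntimeError is the classic failure mode of a stateful recurrent policy under an on-policy trainer: a hidden-state tensor produced during one update is reused in the next one while still attached to an autograd graph that the first loss.backward() has already freed. The usual fix is to detach any state the module keeps across calls. A minimal sketch of the pattern, using a hypothetical `StatefulCell` (a stand-in, not this repo's actual liquid cell) to show where the detach() belongs:

import torch
import torch.nn as nn
from typing import Optional


class StatefulCell(nn.Module):
    """Hypothetical stand-in for a recurrent/liquid cell that carries its
    hidden state between forward() calls, as a stateful policy might."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRUCell(in_dim, hidden_dim)
        self.h: Optional[torch.Tensor] = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (Re)initialize the state on the first call or a batch-size change.
        if self.h is None or self.h.shape[0] != x.shape[0]:
            self.h = x.new_zeros(x.shape[0], self.rnn.hidden_size)
        h = self.rnn(x, self.h)
        # Detach before storing: without this, the next PPO minibatch would
        # backprop through a graph that the previous loss.backward() already
        # freed, raising exactly the RuntimeError logged above.
        self.h = h.detach()
        return h

Note that retain_graph=True, which the error message suggests, is rarely the right fix here: it keeps every rollout's graph alive and leaks memory, whereas detaching the stored state removes the stale graph reference at its source.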