FAVL committed (verified)
Commit f8afc81 · Parent: e38453e

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. .dockerignore +160 -0
  2. .gitattributes +23 -58
  3. .github/ISSUE_TEMPLATE/bug-report.yml +94 -0
  4. .github/PULL_REQUEST_TEMPLATE.md +55 -0
  5. .github/labeler.yml +69 -0
  6. .github/workflows/documentation-upload-pr.yml +41 -0
  7. .github/workflows/documentation.yml +86 -0
  8. .github/workflows/fast_tests.yml +93 -0
  9. .github/workflows/full_tests.yml +222 -0
  10. .github/workflows/issue_labeler.yml +77 -0
  11. .github/workflows/nightly.yml +196 -0
  12. .github/workflows/pr_labeler.yml +39 -0
  13. .github/workflows/quality.yml +58 -0
  14. .github/workflows/release.yml +179 -0
  15. .github/workflows/security.yml +54 -0
  16. .github/workflows/stale.yml +71 -0
  17. .github/workflows/unbound_deps_tests.yml +196 -0
  18. .gitignore +179 -0
  19. .pre-commit-config.yaml +108 -0
  20. CODE_OF_CONDUCT.md +132 -0
  21. CONTRIBUTING.md +83 -0
  22. LICENSE +507 -0
  23. MANIFEST.in +2 -0
  24. Makefile +180 -0
  25. README.md +159 -0
  26. SECURITY.md +48 -0
  27. benchmarks/video/README.md +288 -0
  28. benchmarks/video/run_video_benchmark.py +488 -0
  29. docker/Dockerfile.internal +93 -0
  30. docker/Dockerfile.user +79 -0
  31. docs-requirements.txt +3 -0
  32. docs/README.md +139 -0
  33. docs/source/_toctree.yml +132 -0
  34. docs/source/act.mdx +92 -0
  35. docs/source/async.mdx +313 -0
  36. docs/source/backwardcomp.mdx +151 -0
  37. docs/source/bring_your_own_policies.mdx +175 -0
  38. docs/source/cameras.mdx +220 -0
  39. docs/source/contributing.md +83 -0
  40. docs/source/damiao.mdx +165 -0
  41. docs/source/dataset_subtask.mdx +278 -0
  42. docs/source/debug_processor_pipeline.mdx +299 -0
  43. docs/source/earthrover_mini_plus.mdx +231 -0
  44. docs/source/env_processor.mdx +418 -0
  45. docs/source/envhub.mdx +431 -0
  46. docs/source/envhub_isaaclab_arena.mdx +510 -0
  47. docs/source/envhub_leisaac.mdx +302 -0
  48. docs/source/feetech.mdx +71 -0
  49. docs/source/groot.mdx +131 -0
  50. docs/source/hilserl.mdx +923 -0
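The commit message indicates this snapshot was pushed with `huggingface_hub`'s folder-upload API rather than through a plain git push. For reference, a minimal sketch of the kind of call that produces such a commit; the repo id, repo type, and local path below are hypothetical placeholders, not values taken from this commit:

```python
from huggingface_hub import HfApi

api = HfApi()

# Upload every file in a local folder as a single commit on the Hub.
# All arguments here are illustrative, not recovered from this commit.
api.upload_folder(
    repo_id="FAVL/lerobot-upload",  # hypothetical target repo
    repo_type="model",              # could also be "dataset" or "space"
    folder_path="./lerobot",        # hypothetical local checkout
    commit_message="Upload folder using huggingface_hub",
)
```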
.dockerignore ADDED
@@ -0,0 +1,160 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Misc
+ .git
+ tmp
+ wandb
+ data
+ outputs
+ .vscode
+ rl
+ media
+
+
+ # Logging
+ logs
+
+ # HPC
+ nautilus/*.yaml
+ *.key
+
+ # Slurm
+ sbatch*.sh
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ !tests/artifacts
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Ignore .cache except calibration
+ .cache/*
+ !.cache/calibration/
+ !.cache/calibration/**
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
.gitattributes CHANGED
@@ -1,60 +1,25 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.avro filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mds filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ *.memmap filter=lfs diff=lfs merge=lfs -text
+ *.stl filter=lfs diff=lfs merge=lfs -text
  *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
- *.webm filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.json !text !filter !merge !diff
+ tests/artifacts/cameras/*.png filter=lfs diff=lfs merge=lfs -text
+ *.bag filter=lfs diff=lfs merge=lfs -text
+ media/readme/VLA_architecture.jpg filter=lfs diff=lfs merge=lfs -text
+ media/readme/lerobot-logo-thumbnail.png filter=lfs diff=lfs merge=lfs -text
+ media/readme/robots_control_video.webp filter=lfs diff=lfs merge=lfs -text
+ media/readme/so100_video.webp filter=lfs diff=lfs merge=lfs -text
.github/ISSUE_TEMPLATE/bug-report.yml ADDED
@@ -0,0 +1,94 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ name: "🚀 Issue / Bug / Request"
+ description: Report a bug, suggest an improvement, or ask a technical question.
+ body:
+   - type: markdown
+     attributes:
+       value: |
+         ### Thanks for contributing to LeRobot! 🙌
+         Please choose the most relevant sections below. If this is a general "how-to" question, consider our [Discord](https://discord.gg/s3KuuzsPFb) for faster community support.
+
+   - type: dropdown
+     id: issue-type
+     attributes:
+       label: Ticket Type
+       description: What kind of ticket are you opening?
+       options:
+         - "🐛 Bug Report (Something isn't working)"
+         - "💡 Feature Request / Improvement"
+         - "❓ Technical Question"
+         - "🧹 Maintenance / Documentation"
+     validations:
+       required: true
+
+   - type: textarea
+     id: system-info
+     attributes:
+       label: Environment & System Info
+       description: |
+         For bugs or technical questions, please run `lerobot-info` and paste the output.
+         (Optional for feature requests).
+       render: Shell
+       placeholder: lerobot version, OS, python version, etc.
+
+   - type: textarea
+     id: description
+     validations:
+       required: true
+     attributes:
+       label: Description
+       description: |
+         Provide a clear summary of the issue or your proposal.
+         - **Bugs:** What is happening?
+         - **Features:** What is the goal/use case?
+         - **Questions:** What are you trying to achieve?
+       placeholder: |
+         A clear and concise description of the issue or suggestion.
+
+   - type: textarea
+     id: context-repro
+     attributes:
+       label: Context & Reproduction
+       description: |
+         Provide a code snippet, steps to reproduce a bug, or technical details about your proposal.
+         Please use code blocks for scripts and CLI commands.
+       placeholder: |
+         Steps to reproduce / Usage example:
+         1.
+         2.
+         3.
+
+   - type: textarea
+     id: logs
+     attributes:
+       label: Relevant logs or stack trace
+       description: If applicable, paste relevant error logs here.
+       render: Shell
+
+   - type: checkboxes
+     id: extras
+     attributes:
+       label: Checklist
+       options:
+         - label: I have searched existing tickets to ensure this isn't a duplicate.
+         - label: I am using the latest version of the `main` branch.
+         - label: I have verified this is not an environment-specific problem.
+
+   - type: textarea
+     id: workaround
+     attributes:
+       label: Additional Info / Workarounds
+       description: Anything else we should know? If you have a workaround, please share it!
.github/PULL_REQUEST_TEMPLATE.md ADDED
@@ -0,0 +1,55 @@
+ ## Title
+
+ Short, imperative summary (e.g., "fix(robots): handle None in sensor parser"). See [CONTRIBUTING.md](../CONTRIBUTING.md) for PR conventions.
+
+ ## Type / Scope
+
+ - **Type**: (Bug | Feature | Docs | Performance | Test | CI | Chore)
+ - **Scope**: (optional — name of module or package affected)
+
+ ## Summary / Motivation
+
+ - One-paragraph description of what changes and why.
+ - Why this change is needed and any trade-offs or design notes.
+
+ ## Related issues
+
+ - Fixes / Closes: # (if any)
+ - Related: # (if any)
+
+ ## What changed
+
+ - Short, concrete bullets of the modifications (files/behaviour).
+ - Short note if this introduces breaking changes and migration steps.
+
+ ## How was this tested (or how to run locally)
+
+ - Tests added: list new tests or test files.
+ - Manual checks / dataset runs performed.
+ - Instructions for the reviewer.
+
+ Example:
+
+ - Ran the relevant tests:
+
+   ```bash
+   pytest -q tests/ -k <keyword>
+   ```
+
+ - Reproduce with a quick example or CLI (if applicable):
+
+   ```bash
+   lerobot-train --some.option=true
+   ```
+
+ ## Checklist (required before merge)
+
+ - [ ] Linting/formatting run (`pre-commit run -a`)
+ - [ ] All tests pass locally (`pytest`)
+ - [ ] Documentation updated
+ - [ ] CI is green
+
+ ## Reviewer notes
+
+ - Anything the reviewer should focus on (performance, edge-cases, specific files) or general notes.
+ - Anyone in the community is free to review the PR.
.github/labeler.yml ADDED
@@ -0,0 +1,69 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ CI:
+   - changed-files:
+       - any-glob-to-any-file:
+           - '.github/**'
+           - 'docker/**'
+
+ github_actions:
+   - changed-files:
+       - any-glob-to-any-file: '.github/**'
+
+ documentation:
+   - changed-files:
+       - any-glob-to-any-file:
+           - '**/*.md'
+           - '**/*.mdx'
+           - 'docs/**'
+
+ examples:
+   - changed-files:
+       - any-glob-to-any-file: 'examples/**'
+
+ tests:
+   - changed-files:
+       - any-glob-to-any-file: 'tests/**'
+
+ sensors:
+   - changed-files:
+       - any-glob-to-any-file: 'src/lerobot/cameras/**'
+
+ configuration:
+   - changed-files:
+       - any-glob-to-any-file: 'src/lerobot/configs/**'
+
+ dataset:
+   - changed-files:
+       - any-glob-to-any-file: 'src/lerobot/datasets/**'
+
+ evaluation:
+   - changed-files:
+       - any-glob-to-any-file: 'src/lerobot/envs/**'
+
+ robots:
+   - changed-files:
+       - any-glob-to-any-file:
+           - 'src/lerobot/teleoperators/**'
+           - 'src/lerobot/robots/**'
+           - 'src/lerobot/motors/**'
+
+ policies:
+   - changed-files:
+       - any-glob-to-any-file: 'src/lerobot/policies/**'
+
+ processor:
+   - changed-files:
+       - any-glob-to-any-file: 'src/lerobot/processor/**'
.github/workflows/documentation-upload-pr.yml ADDED
@@ -0,0 +1,41 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow uploads the documentation preview built for a PR and comments the link on the PR.
+ name: Documentation PR Upload
+ permissions:
+   contents: read
+   pull-requests: write
+
+ on:
+   # Triggered by the completion of the main 'Documentation' workflow.
+   workflow_run: # zizmor: ignore[dangerous-triggers] We follow the same pattern as in Transformers
+     workflows: ["Documentation"]
+     types:
+       - completed
+
+ jobs:
+   # This job uploads a preview of the documentation for a pull request.
+   upload_and_comment:
+     name: Upload Preview and Comment
+     if: >
+       github.event.workflow_run.event == 'pull_request' &&
+       github.event.workflow_run.conclusion == 'success' &&
+       github.repository == 'huggingface/lerobot'
+     uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
+     with:
+       package_name: lerobot
+     secrets:
+       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+       comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
.github/workflows/documentation.yml ADDED
@@ -0,0 +1,86 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles building documentation for both main branches and PRs.
+ name: Documentation
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+     inputs:
+       version:
+         description: 'Version tag (e.g. v0.1.2) - Leave empty for standard main build'
+         required: false
+         type: string
+
+   # Triggers the workflow on push events to main for the docs folder
+   push:
+     branches:
+       - main
+     paths:
+       - "docs/**"
+
+   # Triggers the workflow on pull request events targeting main for the docs folder
+   pull_request:
+     branches:
+       - main
+     paths:
+       - "docs/**"
+
+   release:
+     types: [published]
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job builds and deploys the official documentation.
+   build_main_docs:
+     name: Build Main Docs
+     if: >
+       (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || github.event_name == 'release') &&
+       github.repository == 'huggingface/lerobot'
+     permissions:
+       contents: read
+     uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
+     with:
+       commit_sha: ${{ github.sha }}
+       package: lerobot
+       additional_args: >-
+         --not_python_module
+         ${{
+           (github.event_name == 'release' && format('--version {0}', github.event.release.tag_name)) ||
+           (inputs.version != '' && format('--version {0}', inputs.version)) ||
+           ''
+         }}
+     secrets:
+       token: ${{ secrets.HUGGINGFACE_PUSH }}
+       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+
+   # This job builds a preview of the documentation for a pull request.
+   # The result of this job triggers the 'Upload PR Documentation' workflow.
+   build_pr_docs:
+     name: Build PR Docs
+     if: github.event_name == 'pull_request' && github.repository == 'huggingface/lerobot'
+     permissions:
+       contents: read
+       pull-requests: write
+     uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
+     with:
+       commit_sha: ${{ github.event.pull_request.head.sha }}
+       pr_number: ${{ github.event.number }}
+       package: lerobot
+       additional_args: --not_python_module
.github/workflows/fast_tests.yml ADDED
@@ -0,0 +1,93 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles fast testing.
+ name: Fast Tests
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   pull_request:
+     branches:
+       - main
+     paths:
+       - "src/**"
+       - "tests/**"
+       - ".github/workflows/**"
+       - "pyproject.toml"
+       - "Makefile"
+   push:
+     branches:
+       - main
+     paths:
+       - "src/**"
+       - "tests/**"
+       - ".github/workflows/**"
+       - "pyproject.toml"
+       - "Makefile"
+
+ permissions:
+   contents: read
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job runs pytest with the default dependencies.
+   # It runs every time we commit to a PR or push to main.
+   fast-pytest-tests:
+     name: Fast Pytest Tests
+     runs-on: ubuntu-latest
+     env:
+       MUJOCO_GL: egl
+       HF_HOME: /mnt/cache/.cache/huggingface
+       HF_LEROBOT_HOME: /mnt/cache/.cache/huggingface/lerobot
+     steps:
+       - uses: actions/checkout@v6
+         with:
+           persist-credentials: false
+           lfs: true
+
+       # NOTE(Steven): Mount to `/mnt` to avoid the limited storage on `/home`. Consider cleaning default SDKs or using self-hosted runners for more space.
+       # (As of 2024-06-10, the runner's `/home` has only 6.2 GB free—8% of its 72 GB total.)
+       - name: Setup /mnt storage
+         run: sudo chown -R $USER:$USER /mnt
+
+       # TODO(Steven): Evaluate the need of these dependencies
+       - name: Install apt dependencies
+         run: |
+           sudo apt-get update && sudo apt-get install -y build-essential git \
+           curl libglib2.0-0 libegl1-mesa-dev ffmpeg \
+           libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev
+
+       - name: Setup uv and Python
+         uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           enable-cache: true
+           version: ${{ env.UV_VERSION }}
+           python-version: ${{ env.PYTHON_VERSION }}
+
+       - name: Install lerobot with test extras
+         run: uv sync --extra "test"
+
+       - name: Run pytest
+         run: uv run pytest tests -vv --maxfail=10
.github/workflows/full_tests.yml ADDED
@@ -0,0 +1,222 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles full testing.
+ name: Full Tests
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   pull_request_review:
+     types: [submitted]
+   push:
+     branches:
+       - main
+     paths:
+       - "src/**"
+       - "tests/**"
+       - ".github/workflows/**"
+       - "pyproject.toml"
+       - "Makefile"
+
+ permissions:
+   contents: read
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+   DOCKER_IMAGE_NAME: huggingface/lerobot-gpu
+
+ # Ensures that only the latest action is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+
+   # This job runs the E2E tests + pytest with all extras.
+   # It runs every time a PR is approved or on a push to main.
+   full-tests:
+     name: Full Tests
+     runs-on: ubuntu-latest
+     if: |
+       (github.event_name == 'pull_request_review' && github.event.review.state == 'approved') ||
+       github.event_name == 'push' ||
+       github.event_name == 'workflow_dispatch'
+     env:
+       MUJOCO_GL: egl
+       HF_HOME: /mnt/cache/.cache/huggingface
+       HF_LEROBOT_HOME: /mnt/cache/.cache/huggingface/lerobot
+     steps:
+       - uses: actions/checkout@v6
+         with:
+           lfs: true
+           persist-credentials: false
+
+       # NOTE(Steven): Mount to `/mnt` to avoid the limited storage on `/home`. Consider cleaning default SDKs or using self-hosted runners for more space.
+       # (As of 2024-06-10, the runner's `/home` has only 6.2 GB free—8% of its 72 GB total.)
+       - name: Setup /mnt storage
+         run: sudo chown -R $USER:$USER /mnt
+
+       - name: Install apt dependencies
+         run: |
+           sudo apt-get update && sudo apt-get install -y build-essential \
+           git curl libglib2.0-0 libegl1-mesa-dev ffmpeg libusb-1.0-0-dev \
+           speech-dispatcher libgeos-dev portaudio19-dev
+
+       - name: Setup uv and Python
+         uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           enable-cache: true
+           version: ${{ env.UV_VERSION }}
+           python-version: ${{ env.PYTHON_VERSION }}
+
+       - name: Install lerobot with all extras
+         run: uv sync --extra all # TODO(Steven): Make flash-attn optional
+
+       - name: Run pytest (all extras)
+         run: uv run pytest tests -vv --maxfail=10
+
+       - name: Run end-to-end tests
+         run: uv run make test-end-to-end
+
+   # This job builds a GPU-enabled image for testing.
+   # It runs every time a PR is approved or on a push to main.
+   # TODO(Steven): For now we skip this job for community PRs
+   build-and-push-docker:
+     name: Build and Push Docker
+     runs-on:
+       group: aws-general-8-plus
+     if: |
+       github.repository == 'huggingface/lerobot' && (
+         (github.event_name == 'pull_request_review' && github.event.review.state == 'approved' && github.event.pull_request.head.repo.fork == false) ||
+         github.event_name == 'push' ||
+         github.event_name == 'workflow_dispatch'
+       )
+     outputs:
+       image_tag: ${{ steps.set_tag.outputs.image_tag }}
+     env:
+       GITHUB_EVENT_NAME: ${{ github.event_name }}
+       GITHUB_REF: ${{ github.ref }}
+       GITHUB_PR_NUMBER: ${{ github.event.pull_request.number }}
+     steps:
+       - name: Set Docker image tag
+         id: set_tag
+         run: |
+           if [[ "${GITHUB_EVENT_NAME}" == "push" ]]; then
+             TAG="${DOCKER_IMAGE_NAME}:latest"
+           elif [[ -n "${GITHUB_PR_NUMBER}" ]]; then
+             TAG="${DOCKER_IMAGE_NAME}:pr-${GITHUB_PR_NUMBER}"
+           else
+             TAG="${DOCKER_IMAGE_NAME}:pr-${GITHUB_REF##*/}"
+           fi
+           echo "image_tag=$TAG" >> $GITHUB_OUTPUT
+       - name: Install Git LFS
+         run: |
+           sudo apt-get update
+           sudo apt-get install git-lfs
+           git lfs install
+       - uses: actions/checkout@v6
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           cache-binary: false
+       - name: Login to Docker Hub
+         uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+       - name: Build and push Docker image
+         uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           context: .
+           file: ./docker/Dockerfile.internal
+           push: true
+           tags: ${{ steps.set_tag.outputs.image_tag }}
+
+   # This job runs pytest with all extras on a GPU-enabled host.
+   # It runs every time a test image is created.
+   gpu-tests:
+     name: GPU Tests
+     needs: [build-and-push-docker]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-and-push-docker.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on GPU
+         run: pytest tests -vv --maxfail=10
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job deletes the recently created test image.
+   # It runs every time after the gpu-tests have finished.
+   delete-pr-image:
+     name: Delete PR Image
+     needs: [gpu-tests, build-and-push-docker]
+     if: always() && ((github.event.review.state == 'approved') || (github.event_name == 'workflow_dispatch')) && needs.build-and-push-docker.result == 'success'
+     runs-on: ubuntu-latest
+     steps:
+       - name: Get Docker Hub Token and Delete Image
+         # zizmor: ignore[template-injection]
+         env:
+           DOCKERHUB_LEROBOT_USERNAME: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           DOCKERHUB_LEROBOT_PASSWORD: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+           IMAGE_FULL: ${{ needs.build-and-push-docker.outputs.image_tag }}
+         run: |
+           IMAGE_NAME=$(echo "$IMAGE_FULL" | cut -d':' -f1)
+           IMAGE_TAG=$(echo "$IMAGE_FULL" | cut -d':' -f2-)
+           echo "Attempting to delete image: $IMAGE_NAME:$IMAGE_TAG"
+
+           TOKEN=$(curl -s -H "Content-Type: application/json" \
+             -X POST \
+             -d "{\"username\": \"$DOCKERHUB_LEROBOT_USERNAME\", \"password\": \"$DOCKERHUB_LEROBOT_PASSWORD\"}" \
+             https://hub.docker.com/v2/users/login/ | jq -r .token)
+
+           if [ "$TOKEN" == "null" ] || [ -z "$TOKEN" ]; then
+             echo "::error::Failed to get Docker Hub token."
+             exit 1
+           fi
+
+           HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
+             -H "Authorization: JWT ${TOKEN}" \
+             -X DELETE \
+             https://hub.docker.com/v2/repositories/${IMAGE_NAME}/tags/$IMAGE_TAG)
+
+           if [ "$HTTP_RESPONSE" -eq 204 ]; then
+             echo "Successfully deleted Docker image tag: $IMAGE_NAME:$IMAGE_TAG"
+           else
+             echo "::error::Failed to delete Docker image. HTTP status: $HTTP_RESPONSE"
+             exit 1
+           fi
+
+ # TODO(Steven): Check Docker image pulls on Ubuntu
.github/workflows/issue_labeler.yml ADDED
@@ -0,0 +1,77 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow automatically labels issues based on their content.
+ name: Issue Labeler
+ on:
+   # Trigger on new issues and edits to existing issues
+   issues:
+     types: [opened, edited]
+
+ permissions:
+   contents: read
+   issues: write
+
+ jobs:
+   label-issue:
+     name: Auto Label Issue
+     runs-on: ubuntu-latest
+     if: github.repository == 'huggingface/lerobot'
+     steps:
+       - uses: actions/github-script@v8
+         with:
+           script: |
+             // Setup Input Text
+             const body = (context.payload.issue.body || '');
+             const title = (context.payload.issue.title || '');
+             const cleanBody = body.replace(/```[\s\S]*?```/g, '');
+             const text = `${title}\n${cleanBody}`.toLowerCase();
+             const labelsToAdd = new Set();
+             const matches = (re) => re.test(text);
+
+             // Keyword Heuristics
+
+             if (matches(/\b(bug|error|crash|exception)\b/i)) labelsToAdd.add('bug');
+             if (matches(/\b(new feature|enhancement|improvement|proposal|feature request)\b/i)) labelsToAdd.add('enhancement');
+             if (matches(/\b(question|how to|clarify|explain|how do i|help me|question about)\b/i)) labelsToAdd.add('question');
+             if (matches(/\b(documentation|docs?|readme|tutorial|wiki|typo|docstring)\b/i)) labelsToAdd.add('documentation');
+             if (matches(/\b(example|sample|demo|notebook)s?\b/i)) labelsToAdd.add('examples');
+             if (matches(/\b(datasets?|data loader|data augmentation|data preprocessing)\b/i)) labelsToAdd.add('dataset');
+             if (matches(/\b(mujoco|isaac|simulation|sim)\b/i)) labelsToAdd.add('simulation');
+             if (matches(/\b(train|training|optimizer|gradient|wandb|sac)\b/i)) labelsToAdd.add('training');
+             if (matches(/\b(rerun|plot|render|rendering|visualizer)/i)) labelsToAdd.add('visualization');
+             if (matches(/\b(cameras?|opencv|realsense|lidars?|sensors?|imus?|microphones?|rgbd|encoders?)\b/i)) labelsToAdd.add('sensors');
+             if (matches(/\b(urdf|actuators?|calibration|end-effector|kinematics)\b/i)) labelsToAdd.add('robots');
+             if (matches(/\b(teleop|teleoperator|controller|leader|follower|joystick|gamepad)\b/i)) labelsToAdd.add('teleoperators');
+             if (matches(/\b(policy|policies|model?)\b/i)) labelsToAdd.add('policies');
+             if (matches(/\b(processor|pipeline|preprocessor|postprocessor)s?\b/i)) labelsToAdd.add('processor');
+             if (matches(/\b(eval|evaluate|evaluation|metrics?|score|benchmarks?)\b/i)) labelsToAdd.add('evaluation');
+             if (matches(/\b(tests?|pytest|unittest|failing test)\b/i)) labelsToAdd.add('tests');
+             if (matches(/\b(ci|github actions?|github workflows?|gha|docker|pypi)\b/i)) labelsToAdd.add('CI');
+             if (matches(/\b(perf|latency|throughput|fps|speed|performance|slow|fast|slower|faster|memory usage)\b/i)) labelsToAdd.add('performance');
+             if (matches(/\b(dependency|dependencies|pip|install error|importerror|package not found|pyproject)\b/i)) labelsToAdd.add('dependencies');
+             if (matches(/\b(configuration|config|arguments?|input feature|dracuss)\b/i)) labelsToAdd.add('configuration');
+
+             // Apply Labels
+             const labels = Array.from(labelsToAdd).filter(Boolean);
+
+             if (labels.length > 0) {
+               console.log(`Adding labels: ${labels.join(', ')}`);
+               await github.rest.issues.addLabels({
+                 owner: context.repo.owner,
+                 repo: context.repo.repo,
+                 issue_number: context.issue.number,
+                 labels,
+               });
+             }
.github/workflows/nightly.yml ADDED
@@ -0,0 +1,196 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles nightly testing & Docker image publishing.
+ name: Nightly
+ permissions:
+   contents: read
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Runs at 02:00
+   schedule:
+     - cron: "0 2 * * *"
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+   DOCKER_IMAGE_NAME_CPU: huggingface/lerobot-cpu:latest
+   DOCKER_IMAGE_NAME_GPU: huggingface/lerobot-gpu:latest
+
+ # Ensures that only the latest commit is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job builds a CPU image for testing & distribution
+   build-docker-cpu-nightly:
+     name: Build CPU Docker for Nightly
+     runs-on:
+       group: aws-general-8-plus
+     if: github.repository == 'huggingface/lerobot'
+     outputs:
+       image_tag: ${{ env.DOCKER_IMAGE_NAME_CPU }}
+     steps:
+       - name: Install Git LFS
+         run: |
+           sudo apt-get update
+           sudo apt-get install git-lfs
+           git lfs install
+       - uses: actions/checkout@v6
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           cache-binary: false
+       - name: Login to Docker Hub
+         uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+       - name: Build and push Docker image CPU
+         uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           context: .
+           file: ./docker/Dockerfile.user
+           push: true
+           tags: ${{ env.DOCKER_IMAGE_NAME_CPU }}
+
+   # This job builds a GPU image for testing & distribution
+   build-docker-gpu-nightly:
+     name: Build GPU Docker for Nightly
+     runs-on:
+       group: aws-general-8-plus
+     if: github.repository == 'huggingface/lerobot'
+     outputs:
+       image_tag: ${{ env.DOCKER_IMAGE_NAME_GPU }}
+     steps:
+       - name: Install Git LFS
+         run: |
+           sudo apt-get update
+           sudo apt-get install git-lfs
+           git lfs install
+       - uses: actions/checkout@v6
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           cache-binary: false
+       - name: Login to Docker Hub
+         uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+       - name: Build and push Docker image GPU
+         uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           context: .
+           file: ./docker/Dockerfile.internal
+           push: true
+           tags: ${{ env.DOCKER_IMAGE_NAME_GPU }}
+
+   # This job runs the E2E tests + pytest with all extras in the CPU image
+   nightly-cpu-tests:
+     name: Nightly CPU Tests
+     needs: [build-docker-cpu-nightly]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-docker-cpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on CPU
+         run: pytest tests -vv --maxfail=10
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job runs the E2E tests + pytest with all extras in the GPU image
+   nightly-gpu-tests:
+     name: Nightly GPU Tests
+     needs: [build-docker-gpu-nightly]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-docker-gpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on GPU
+         run: pytest tests -vv --maxfail=10
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job runs multi-GPU training tests with 4 GPUs
+   nightly-multi-gpu-tests:
+     name: Nightly Multi-GPU Tests
+     needs: [build-docker-gpu-nightly]
+     runs-on:
+       group: aws-g4dn-12xlarge # Instance with 4 GPUs
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+       CUDA_VISIBLE_DEVICES: "0,1,2,3"
+     container:
+       image: ${{ needs.build-docker-gpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Verify GPU availability
+         run: |
+           nvidia-smi
+           python -c "import torch; print(f'PyTorch CUDA available: {torch.cuda.is_available()}'); print(f'Number of GPUs: {torch.cuda.device_count()}')"
+
+       - name: Run multi-GPU training tests
+         # TODO(Steven): Investigate why motors tests are failing in multi-GPU setup
+         run: pytest tests -vv --maxfail=10 --ignore=tests/motors/
+         timeout-minutes: 10
.github/workflows/pr_labeler.yml ADDED
@@ -0,0 +1,39 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow labels pull requests based on the files that were changed.
+ name: Pull Request Labeler
+
+ on:
+   # Allows labeling pull requests when they are opened or updated
+   # zizmor: ignore[dangerous-triggers] Needed to label PRs from forks
+   pull_request_target:
+     branches:
+       - main
+     types: [opened, synchronize, reopened, ready_for_review]
+
+ permissions:
+   contents: read
+   pull-requests: write
+
+ jobs:
+   triage:
+     name: Label PR
+     runs-on: ubuntu-latest
+     if: github.repository == 'huggingface/lerobot' && !github.event.pull_request.draft
+     steps:
+       - uses: actions/labeler@v6
+         with:
+           repo-token: ${{ secrets.GITHUB_TOKEN }}
+           sync-labels: true # Removes labels if files are removed from the PR
.github/workflows/quality.yml ADDED
@@ -0,0 +1,58 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles linting, formatting, and static analysis checks for the codebase.
+ name: Quality
+ permissions:
+   contents: read
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Triggers the workflow on push events to main
+   push:
+     branches:
+       - main
+
+   # Triggers the workflow on pull request events targeting main
+   pull_request:
+     branches:
+       - main
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job runs pre-commit hooks to check code style and formatting.
+   pre-commit-checks:
+     name: Run Pre-commit Hooks (Lint, Format & Static Analysis)
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout code
+         uses: actions/checkout@v6
+         with:
+           persist-credentials: false
+
+       - name: Set up Python
+         uses: actions/setup-python@v6
+         with:
+           python-version: '3.10'
+
+       - name: Run pre-commit hooks
+         uses: pre-commit/action@v3.0.1 # zizmor: ignore[unpinned-uses]
+         with:
+           extra_args: --all-files --show-diff-on-failure --color=always
.github/workflows/release.yml ADDED
@@ -0,0 +1,179 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ name: Create Release and Publish to PyPI
16
+
17
+ on:
18
+ push:
19
+ tags:
20
+ - 'v*.*.*' # Trigger on tags like v0.1.0, v1.0.0
21
+
22
+ # Sets up the environment variables
23
+ env:
24
+ UV_VERSION: "0.8.0"
25
+ PYTHON_VERSION: "3.10"
26
+
27
+ jobs:
28
+ # This job builds the Python package and publishes it to PyPI
29
+ build-and-publish:
30
+ name: Build and publish Python distributions
31
+ runs-on: ubuntu-latest
32
+ if: github.repository == 'huggingface/lerobot'
33
+ outputs:
34
+ version: ${{ steps.extract_info.outputs.tag_version }}
35
+ permissions:
36
+ contents: write
37
+ id-token: write
38
+
39
+ steps:
40
+ - name: Checkout code
41
+ uses: actions/checkout@v6
42
+ with:
43
+ persist-credentials: false
44
+
45
+ - name: Set up Python
46
+ uses: actions/setup-python@v6
47
+ with:
48
+ python-version: '3.10'
49
+
50
+ - name: Extract Version
51
+ id: extract_info
52
+ # Extract version from tag (e.g., v0.1.0 -> 0.1.0)
53
+ # zizmor: ignore[template-injection]
54
+ run: |
55
+ VERSION=${{ github.ref_name }}
56
+ VERSION_NUMBER=${VERSION#v}
57
+ echo "tag_version=$VERSION_NUMBER" >> $GITHUB_OUTPUT
58
+ - name: Check if version matches pyproject.toml
59
+ if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref, '-')
60
+ # zizmor: ignore[template-injection]
61
+ run: |
62
+ TAG_VERSION=${{ steps.extract_info.outputs.tag_version }}
63
+
64
+ PYPROJECT_VERSION=$(grep '^version = ' pyproject.toml | awk -F' = ' '{print $2}' | tr -d '"')
65
+
66
+ if [[ "$TAG_VERSION" != "$PYPROJECT_VERSION" ]]; then
67
+ echo "Error: Tag version ($TAG_VERSION) does not match pyproject.toml version ($PYPROJECT_VERSION)." >&2
68
+ exit 1
69
+ else
70
+ echo "Tag version matches pyproject.toml version: $TAG_VERSION. Proceeding with release."
71
+ fi
72
+
73
+ - name: Check if version exists on PyPI
74
+ # zizmor: ignore[template-injection]
75
+ run: |
76
+ NEW_VERSION=${{ steps.extract_info.outputs.tag_version }}
77
+
78
+ response=$(curl -s "https://pypi.org/pypi/lerobot/$NEW_VERSION/json")
79
+ if echo "$response" | grep -q "message"; then
80
+ echo "Version $NEW_VERSION is available on PyPI. Proceeding with release."
81
+ else
82
+ echo "Error: Version $NEW_VERSION already exists on PyPI. Aborting."
83
+ exit 1
84
+ fi
85
+
86
+ - name: Remove Tags with Git dependencies
87
+ # TODO(Steven): Temporary patch to remove pi from PyPi 0.4.0 release due to its reliance on git dependencies.
88
+ run: |
89
+ echo "::info:: Checking for Git dependencies to remove from pyproject.toml..."
90
+ grep -E '@ git\+https|lerobot\[pi\]' pyproject.toml | sed 's/^/::warning:: Removing line: /' || true
91
+ sed -E -i '/@ git\+https|lerobot\[pi\]/d' pyproject.toml
92
+ echo "::info:: Git dependencies removed. Proceeding with build."
93
+
94
+ - name: Install build dependencies
95
+ run: python -m pip install build
96
+
97
+ - name: Build package
98
+ run: python -m build
99
+
100
+ - name: Create GitHub Release
101
+ env:
102
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
103
+ # zizmor: ignore[template-injection]
104
+ run: |
105
+ gh release create ${{ github.ref_name }} \
106
+ --title "Release ${{ github.ref_name }}" \
107
+ --generate-notes \
108
+ --draft=$([[ "${{ github.ref_name }}" == *-* ]] && echo true || echo false) \
109
+ --prerelease=$([[ "${{ github.ref_name }}" == *-* ]] && echo true || echo false) \
110
+ ./dist/*
111
+
112
+ - name: Publish to TestPyPI for pre-releases
113
+ # True for tags like 'v0.2.0-rc1'
114
+ if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '-')
115
+ uses: pypa/gh-action-pypi-publish@v1.13.0 # zizmor: ignore[unpinned-uses, use-trusted-publishing]
116
+ with:
117
+ repository-url: https://test.pypi.org/legacy/
118
+ verbose: true
119
+ print-hash: true
120
+
121
+ - name: Publish to PyPI
122
+ if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref, '-')
123
+ uses: pypa/gh-action-pypi-publish@v1.13.0 # zizmor: ignore[unpinned-uses, use-trusted-publishing]
124
+ with:
125
+ verbose: true
126
+ print-hash: true
127
+
128
+ # This job runs end-to-end tests on the release
129
+ test-release:
130
+ name: Test Release
131
+ needs: [build-and-publish]
132
+ runs-on: ubuntu-latest
133
+ permissions:
134
+ contents: read
135
+ env:
136
+ MUJOCO_GL: egl
137
+ steps:
138
+ - uses: actions/checkout@v6
139
+ with:
140
+ lfs: true
141
+ persist-credentials: false
142
+ - name: Install apt dependencies
143
+ run: |
144
+ sudo apt-get update && sudo apt-get install -y build-essential \
145
+ git curl libglib2.0-0 libegl1-mesa-dev ffmpeg libusb-1.0-0-dev \
146
+ speech-dispatcher libgeos-dev portaudio19-dev
147
+ - name: Setup uv and Python
148
+ uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
149
+ with:
150
+ enable-cache: true # zizmor: ignore[cache-poisoning]
151
+ version: ${{ env.UV_VERSION }}
152
+ python-version: ${{ env.PYTHON_VERSION }}
153
+ - name: Create uv virtual environment
154
+ run: uv venv
155
+ - name: Install lerobot release
156
+ # zizmor: ignore[template-injection]
157
+ run: |
158
+ VERSION="${{ needs.build-and-publish.outputs.version }}"
159
+ if [[ "$VERSION" == *-* ]]; then
160
+ BASE_VERSION="${VERSION%%-*}"
161
+ echo "Installing pre-release version $BASE_VERSION from TestPyPI..."
162
+ uv pip install \
163
+ --index-url https://test.pypi.org/simple/ \
164
+ --extra-index-url https://pypi.org/simple \
165
+ --index-strategy unsafe-best-match \
166
+ "lerobot[all]==$BASE_VERSION"
167
+ else
168
+ echo "Installing release version $VERSION from PyPI..."
169
+ uv pip install "lerobot[all]==$VERSION"
170
+ fi
171
+ - name: Check lerobot version
172
+ run: uv run python -c "import lerobot; print(lerobot.__version__)"
173
+
174
+ - name: Run end-to-end tests
175
+ run: uv run make test-end-to-end
176
+
177
+
178
+ # TODO(Steven): Publish a draft/pre-release to TestPyPI weekly
179
+ # TODO(Steven): Separate build and publish job
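The two release gates in this workflow are worth seeing end to end: the version check queries PyPI before anything is built, and a `-` anywhere in the tag routes the publish to TestPyPI as a draft pre-release. Below is a minimal local sketch of both gates, assuming the `lerobot` package name and PyPI's public JSON API; the tag value is hypothetical.

```bash
#!/usr/bin/env bash
set -euo pipefail

TAG="v0.4.0-rc1"        # hypothetical tag; substitute the tag you plan to push
NEW_VERSION="${TAG#v}"  # strip the leading 'v' -> 0.4.0-rc1

# Gate 1: a '-' in the tag marks a pre-release (TestPyPI, draft GitHub release).
if [[ "$TAG" == *-* ]]; then
  echo "Pre-release: would publish to TestPyPI as a draft"
else
  echo "Stable release: would publish to PyPI"
fi

# Gate 2: abort if the version already exists on PyPI (public JSON API + jq).
if curl -s https://pypi.org/pypi/lerobot/json \
    | jq -e --arg v "$NEW_VERSION" '.releases | has($v)' > /dev/null; then
  echo "Error: Version $NEW_VERSION already exists on PyPI. Aborting."
  exit 1
fi
```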
.github/workflows/security.yml ADDED
@@ -0,0 +1,54 @@
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This workflow handles secret scanning using TruffleHog to detect sensitive information in the codebase.
16
+ name: Security
17
+ permissions:
18
+ contents: read
19
+
20
+ on:
21
+ # Allows running this workflow manually from the Actions tab
22
+ workflow_dispatch:
23
+
24
+ # Triggers the workflow on push events to main
25
+ push:
26
+ branches:
27
+ - main
28
+
29
+ # Triggers the workflow on pull request events targeting main
30
+ pull_request:
31
+ branches:
32
+ - main
33
+
34
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
35
+ concurrency:
36
+ group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
37
+ cancel-in-progress: true
38
+
39
+ jobs:
40
+ # This job runs TruffleHog to scan the full history of the repository for secrets.
41
+ trufflehog:
42
+ name: Secret Leaks Scan
43
+ runs-on: ubuntu-latest
44
+ steps:
45
+ - name: Checkout code
46
+ uses: actions/checkout@v6 # zizmor: ignore[unpinned-uses]
47
+ with:
48
+ fetch-depth: 0
49
+ persist-credentials: false
50
+
51
+ - name: Secret Scanning
52
+ uses: trufflesecurity/trufflehog@v3.90.0 # zizmor: ignore[unpinned-uses]
53
+ with:
54
+ extra_args: --only-verified
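To reproduce this scan before pushing, TruffleHog can be run locally over the repository's full Git history with the same `--only-verified` flag. A sketch using the official Docker image, assuming TruffleHog v3's `git` subcommand accepts `file://` URIs:

```bash
# Scan the full local history for verified secrets, mirroring the CI step above.
docker run --rm -v "$PWD:/repo" trufflesecurity/trufflehog:latest \
  git file:///repo --only-verified
```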
.github/workflows/stale.yml ADDED
@@ -0,0 +1,71 @@
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This workflow handles closing stale issues and PRs.
16
+ name: Stale
17
+ on:
18
+ # Allows running this workflow manually from the Actions tab
19
+ workflow_dispatch:
20
+
21
+ # Runs daily at 02:00 UTC
22
+ schedule:
23
+ - cron: "0 2 * * *"
24
+
25
+ env:
26
+ CLOSE_ISSUE_MESSAGE: >
27
+ This issue was closed because it has been stalled for 14 days with no activity.
28
+ Feel free to reopen if it is still relevant, or to ping a collaborator if you have any questions.
29
+ CLOSE_PR_MESSAGE: >
30
+ This PR was closed because it has been stalled for 21 days with no activity.
31
+ Feel free to reopen if it is still relevant, or to ping a collaborator if you have any questions.
32
+ WARN_ISSUE_MESSAGE: >
33
+ This issue has been automatically marked as stale because it has not had
34
+ recent activity (6 months). It will be closed if no further activity occurs.
35
+ Any change, comment or update to this issue will reset this count.
36
+ Thank you for your contributions.
37
+ WARN_PR_MESSAGE: >
38
+ This PR has been automatically marked as stale because it has not had
39
+ recent activity (1 year). It will be closed if no further activity occurs.
40
+ Any change, comment or update to this PR will reset this count.
41
+ Thank you for your contributions.
42
+
43
+ jobs:
44
+ # This job runs the actions/stale action to close stale issues and PRs.
45
+ stale:
46
+ name: Close Stale Issues and PRs
47
+ runs-on: ubuntu-latest
48
+ if: github.repository == 'huggingface/lerobot'
49
+ permissions:
50
+ actions: write
51
+ contents: write # only for delete-branch option
52
+ issues: write
53
+ pull-requests: write
54
+ steps:
55
+ - uses: actions/stale@v10
56
+ with:
57
+ repo-token: ${{ secrets.GITHUB_TOKEN }}
58
+ stale-issue-label: stale
59
+ stale-pr-label: stale
60
+ exempt-issue-labels: never-stale
61
+ exempt-pr-labels: never-stale
62
+ days-before-issue-stale: 180
63
+ days-before-issue-close: 14
64
+ days-before-pr-stale: 365
65
+ days-before-pr-close: 21
66
+ delete-branch: true
67
+ close-issue-message: ${{ env.CLOSE_ISSUE_MESSAGE }}
68
+ close-pr-message: ${{ env.CLOSE_PR_MESSAGE }}
69
+ stale-issue-message: ${{ env.WARN_ISSUE_MESSAGE }}
70
+ stale-pr-message: ${{ env.WARN_PR_MESSAGE }}
71
+ operations-per-run: 500
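With these settings an issue can sit for roughly 194 days before closing (180 days to be marked stale, then 14 more to close), and a PR for roughly 386. A small sketch for auditing what the bot has currently flagged, assuming an authenticated GitHub CLI:

```bash
# List everything currently carrying the 'stale' label set by this workflow.
gh issue list --repo huggingface/lerobot --label stale --state open
gh pr list --repo huggingface/lerobot --label stale --state open
```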
.github/workflows/unbound_deps_tests.yml ADDED
@@ -0,0 +1,196 @@
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This workflow runs the full test suite with unbound dependency versions.
16
+ name: Unbound Dependency Tests
17
+
18
+ on:
19
+ # Allows running this workflow manually from the Actions tab
20
+ workflow_dispatch:
21
+
22
+ # Run on the 1st and 15th of every month at 02:00 UTC
23
+ # schedule:
24
+ # - cron: '0 2 1,15 * *'
25
+
26
+ permissions:
27
+ contents: read
28
+
29
+ # Sets up the environment variables
30
+ env:
31
+ UV_VERSION: "0.8.0"
32
+ PYTHON_VERSION: "3.10"
33
+ DOCKER_IMAGE_NAME: huggingface/lerobot-gpu:unbound
34
+
35
+ # Ensures that only the latest run per branch proceeds, canceling older runs.
36
+ concurrency:
37
+ group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
38
+ cancel-in-progress: true
39
+
40
+ jobs:
41
+
42
+ # This job runs the E2E tests + pytest with all unbound extras
43
+ full-tests:
44
+ name: Full Unbound Tests
45
+ runs-on: ubuntu-latest
46
+ if: github.repository == 'huggingface/lerobot'
47
+ env:
48
+ MUJOCO_GL: egl
49
+ HF_HOME: /mnt/cache/.cache/huggingface
50
+ HF_LEROBOT_HOME: /mnt/cache/.cache/huggingface/lerobot
51
+ steps:
52
+ - uses: actions/checkout@v6
53
+ with:
54
+ lfs: true
55
+ persist-credentials: false
56
+
57
+ # NOTE(Steven): Mount to `/mnt` to avoid the limited storage on `/home`. Consider cleaning default SDKs or using self-hosted runners for more space.
58
+ # (As of 2024-06-10, the runner's `/home` has only 6.2 GB free—8% of its 72 GB total.)
59
+ - name: Setup /mnt storage
60
+ run: sudo chown -R $USER:$USER /mnt
61
+
62
+ - name: Install apt dependencies
63
+ run: |
64
+ sudo apt-get update && sudo apt-get install -y build-essential \
65
+ git curl libglib2.0-0 libegl1-mesa-dev ffmpeg libusb-1.0-0-dev \
66
+ speech-dispatcher libgeos-dev portaudio19-dev
67
+
68
+ - name: Setup uv and Python
69
+ uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
70
+ with:
71
+ enable-cache: true
72
+ version: ${{ env.UV_VERSION }}
73
+ python-version: ${{ env.PYTHON_VERSION }}
74
+
75
+ - name: Unbound dependencies
76
+ run: |
77
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml
78
+ echo "Dependencies unbound:" && cat pyproject.toml
79
+
80
+ - name: Install lerobot with all extras
81
+ run: uv sync --extra all # TODO(Steven): Make flash-attn optional
82
+
83
+ - name: Run pytest (all extras)
84
+ run: uv run pytest tests -vv
85
+
86
+ - name: Run end-to-end tests
87
+ run: uv run make test-end-to-end
88
+
89
+ # This job builds a GPU-enabled image for testing
90
+ build-and-push-docker:
91
+ name: Build and Push Docker
92
+ runs-on:
93
+ group: aws-general-8-plus
94
+ if: github.repository == 'huggingface/lerobot'
95
+ outputs:
96
+ image_tag: ${{ env.DOCKER_IMAGE_NAME }}
97
+ env:
98
+ GITHUB_REF: ${{ github.ref }}
99
+ steps:
100
+ - name: Install Git LFS
101
+ run: |
102
+ sudo apt-get update
103
+ sudo apt-get install git-lfs
104
+ git lfs install
105
+ - uses: actions/checkout@v6
106
+ with:
107
+ lfs: true
108
+ persist-credentials: false
109
+ - name: Set up Docker Buildx
110
+ uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
111
+ with:
112
+ cache-binary: false
113
+ - name: Login to Docker Hub
114
+ uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
115
+ with:
116
+ username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
117
+ password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
118
+ - name: Build and push Docker image
119
+ uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
120
+ with:
121
+ context: .
122
+ file: ./docker/Dockerfile.internal
123
+ push: true
124
+ tags: ${{ env.DOCKER_IMAGE_NAME }}
125
+ build-args: |
126
+ UNBOUND_DEPS=true
127
+
128
+ # This job runs pytest with all unbound extras on a GPU-enabled host
129
+ # It runs every time a test image is created
130
+ gpu-tests:
131
+ name: GPU Unbound Tests
132
+ needs: [build-and-push-docker]
133
+ runs-on:
134
+ group: aws-g6-4xlarge-plus
135
+ env:
136
+ HF_HOME: /home/user_lerobot/.cache/huggingface
137
+ HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
138
+ TORCH_HOME: /home/user_lerobot/.cache/torch
139
+ TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
140
+ container:
141
+ image: ${{ needs.build-and-push-docker.outputs.image_tag }} # zizmor: ignore[unpinned-images]
142
+ options: --gpus all --shm-size "16gb"
143
+ credentials:
144
+ username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
145
+ password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
146
+ defaults:
147
+ run:
148
+ shell: bash
149
+ working-directory: /lerobot
150
+ steps:
151
+ - name: Run pytest on GPU
152
+ run: pytest tests -vv
153
+ - name: Run end-to-end tests
154
+ run: make test-end-to-end
155
+
156
+ # This job deletes the recently created test image
157
+ # It runs every time after the gpu-tests have finished
158
+ delete-unbound-image:
159
+ name: Delete Unbound Image
160
+ needs: [gpu-tests, build-and-push-docker]
161
+ if: always() && needs.build-and-push-docker.result == 'success'
162
+ runs-on: ubuntu-latest
163
+ steps:
164
+ - name: Get Docker Hub Token and Delete Image
165
+ # zizmor: ignore[template-injection]
166
+ env:
167
+ DOCKERHUB_LEROBOT_USERNAME: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
168
+ DOCKERHUB_LEROBOT_PASSWORD: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
169
+ IMAGE_FULL: ${{ needs.build-and-push-docker.outputs.image_tag }}
170
+ run: |
171
+ IMAGE_NAME=$(echo "$IMAGE_FULL" | cut -d':' -f1)
172
+ IMAGE_TAG=$(echo "$IMAGE_FULL" | cut -d':' -f2)
173
+
174
+ echo "Attempting to delete image: $IMAGE_NAME:$IMAGE_TAG"
175
+
176
+ TOKEN=$(curl -s -H "Content-Type: application/json" \
177
+ -X POST \
178
+ -d "{\"username\": \"$DOCKERHUB_LEROBOT_USERNAME\", \"password\": \"$DOCKERHUB_LEROBOT_PASSWORD\"}" \
179
+ https://hub.docker.com/v2/users/login/ | jq -r .token)
180
+
181
+ if [ "$TOKEN" == "null" ] || [ -z "$TOKEN" ]; then
182
+ echo "::error::Failed to get Docker Hub token."
183
+ exit 1
184
+ fi
185
+
186
+ HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
187
+ -H "Authorization: JWT ${TOKEN}" \
188
+ -X DELETE \
189
+ https://hub.docker.com/v2/repositories/${IMAGE_NAME}/tags/$IMAGE_TAG)
190
+
191
+ if [ "$HTTP_RESPONSE" -eq 204 ]; then
192
+ echo "Successfully deleted Docker image tag: $IMAGE_NAME:$IMAGE_TAG"
193
+ else
194
+ echo "::error::Failed to delete Docker image. HTTP status: $HTTP_RESPONSE"
195
+ exit 1
196
+ fi
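The "Unbound dependencies" step's `sed` strips every upper bound of the form `,<X.Y.Z` from `pyproject.toml`, leaving only the version floors. A quick sketch of the substitution on a hypothetical specifier:

```bash
# Same expression as the workflow step, applied to a sample dependency line.
echo '"torch>=2.2.1,<2.8.0",' | sed 's/,[[:space:]]*<[0-9\.]*//g'
# prints: "torch>=2.2.1",   (the ',<2.8.0' ceiling is removed)
```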
.gitignore ADDED
@@ -0,0 +1,179 @@
1
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ ### Environments & Dependencies ###
16
+ .env
17
+ .venv
18
+ env/
19
+ venv/
20
+ env.bak/
21
+ venv.bak/
22
+ .python-version
23
+ __pypackages__/
24
+ node_modules/
25
+
26
+ # Lock files
27
+ poetry.lock
28
+ uv.lock
29
+ Pipfile.lock
30
+
31
+ ### Build & Distribution ###
32
+ build/
33
+ dist/
34
+ sdist/
35
+ wheels/
36
+ downloads/
37
+ eggs/
38
+ .eggs/
39
+ parts/
40
+ var/
41
+ pip-wheel-metadata/
42
+ share/python-wheels/
43
+ develop-eggs/
44
+ *.egg-info/
45
+ .installed.cfg
46
+ *.egg
47
+ MANIFEST
48
+ lib/
49
+ lib64/
50
+
51
+ # PyInstaller
52
+ *.manifest
53
+ *.spec
54
+
55
+ ### Compiled & Cached Files ###
56
+ __pycache__/
57
+ *.py[cod]
58
+ *$py.class
59
+ *.so
60
+ *.sage.py
61
+ .cache/
62
+ .ruff_cache/
63
+ .mypy_cache/
64
+ .pyre/
65
+ .pytype/
66
+ cython_debug/
67
+
68
+ ### Testing & Coverage ###
69
+ htmlcov/
70
+ .tox/
71
+ .nox/
72
+ .coverage
73
+ .coverage.*
74
+ .pytest_cache/
75
+ .hypothesis/
76
+ nosetests.xml
77
+ coverage.xml
78
+ *.cover
79
+ *.py,cover
80
+ !tests/artifacts
81
+
82
+ ### Logs & Temporary Files ###
83
+ logs/
84
+ tmp/
85
+ *.log
86
+ pip-log.txt
87
+ pip-delete-this-directory.txt
88
+ celerybeat-schedule
89
+ celerybeat.pid
90
+
91
+ ### IDE & Editor Config ###
92
+ # VS Code
93
+ .vscode/
94
+ .devcontainer/
95
+
96
+ # JetBrains / PyCharm
97
+ .idea/
98
+
99
+ # Spyder
100
+ .spyderproject
101
+ .spyproject
102
+
103
+ # Rope
104
+ .ropeproject
105
+
106
+ # Vim
107
+ *.swp
108
+
109
+ # Other
110
+ *~
111
+
112
+ ### OS Specific ###
113
+ # macOS
114
+ .DS_Store
115
+
116
+ # Windows
117
+ Thumbs.db
118
+
119
+ ### Framework & Tool Specific ###
120
+
121
+ .Python
122
+
123
+ # Django
124
+ local_settings.py
125
+ db.sqlite3
126
+ db.sqlite3-journal
127
+
128
+ # Flask
129
+ instance/
130
+ .webassets-cache
131
+
132
+ # Scrapy
133
+ .scrapy
134
+
135
+ # Jupyter
136
+ .ipynb_checkpoints/
137
+ profile_default/
138
+ ipython_config.py
139
+
140
+ # Sphinx
141
+ docs/_build/
142
+
143
+ # MkDocs
144
+ /site
145
+
146
+ # PyBuilder
147
+ .pybuilder/
148
+ target/
149
+
150
+ # mypy
151
+ .dmypy.json
152
+ dmypy.json
153
+
154
+ ### HPC & Slurm ###
155
+ nautilus/*.yaml
156
+ *.key
157
+ sbatch*.sh
158
+
159
+ ### Miscellaneous ###
160
+ # W&B
161
+ wandb/
162
+
163
+ # Dev scripts
164
+ .dev/
165
+
166
+ # Data folders
167
+ data/
168
+ outputs/
169
+
170
+ # Translations
171
+ *.mo
172
+ *.pot
173
+
174
+ # Dev folders
175
+ .cache/*
176
+ *.stl
177
+ *.urdf
178
+ *.xml
179
+ *.part
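When a file unexpectedly stays untracked, `git check-ignore` reports which of these patterns matched. A quick sketch with a hypothetical path:

```bash
# -v prints the .gitignore rule responsible for ignoring the path.
git check-ignore -v model.part
# prints (roughly): .gitignore:179:*.part	model.part
```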
.pre-commit-config.yaml ADDED
@@ -0,0 +1,108 @@
1
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ default_language_version:
16
+ python: python3.10
17
+
18
+ exclude: "tests/artifacts/.*\\.safetensors$"
19
+
20
+ repos:
21
+ ##### Meta #####
22
+ - repo: meta
23
+ hooks:
24
+ - id: check-useless-excludes
25
+ - id: check-hooks-apply
26
+
27
+ ##### General Code Quality & Formatting #####
28
+ - repo: https://github.com/pre-commit/pre-commit-hooks
29
+ rev: v6.0.0
30
+ hooks:
31
+ - id: check-added-large-files
32
+ args: ['--maxkb=1024']
33
+ - id: debug-statements
34
+ - id: check-merge-conflict
35
+ - id: check-case-conflict
36
+ - id: check-yaml
37
+ - id: check-toml
38
+ - id: end-of-file-fixer
39
+ - id: trailing-whitespace
40
+
41
+ - repo: https://github.com/astral-sh/ruff-pre-commit
42
+ rev: v0.14.1
43
+ hooks:
44
+ - id: ruff-format
45
+ - id: ruff
46
+ args: [--fix, --exit-non-zero-on-fix]
47
+
48
+ - repo: https://github.com/adhtruong/mirrors-typos
49
+ rev: v1.38.1
50
+ hooks:
51
+ - id: typos
52
+ args: [--force-exclude]
53
+
54
+ - repo: https://github.com/asottile/pyupgrade
55
+ rev: v3.21.0
56
+ hooks:
57
+ - id: pyupgrade
58
+ args: [--py310-plus]
59
+
60
+ ##### Markdown Quality #####
61
+ - repo: https://github.com/rbubley/mirrors-prettier
62
+ rev: v3.6.2
63
+ hooks:
64
+ - id: prettier
65
+ name: Format Markdown with Prettier
66
+ types_or: [markdown, mdx]
67
+ args: [--prose-wrap=preserve]
68
+
69
+ ##### Security #####
70
+ - repo: https://github.com/gitleaks/gitleaks
71
+ rev: v8.28.0
72
+ hooks:
73
+ - id: gitleaks
74
+
75
+ - repo: https://github.com/woodruffw/zizmor-pre-commit
76
+ rev: v1.15.2
77
+ hooks:
78
+ - id: zizmor
79
+
80
+ - repo: https://github.com/PyCQA/bandit
81
+ rev: 1.8.6
82
+ hooks:
83
+ - id: bandit
84
+ args: ["-c", "pyproject.toml"]
85
+ additional_dependencies: ["bandit[toml]"]
86
+
87
+ # TODO(Steven): Uncomment when ready to use
88
+ ##### Static Analysis & Typing #####
89
+ - repo: https://github.com/pre-commit/mirrors-mypy
90
+ rev: v1.19.1
91
+ hooks:
92
+ - id: mypy
93
+ args: [--config-file=pyproject.toml]
94
+ exclude: ^(examples|benchmarks|tests)/
95
+
96
+ ##### Docstring Checks #####
97
+ # - repo: https://github.com/akaihola/darglint2
98
+ # rev: v1.8.2
99
+ # hooks:
100
+ # - id: darglint2
101
+ # args: ["--docstring-style", "google", "-v", "2"]
102
+ # exclude: ^tests/.*$
103
+
104
+ # - repo: https://github.com/econchick/interrogate
105
+ # rev: 1.7.0
106
+ # hooks:
107
+ # - id: interrogate
108
+ # args: ["-vv", "--config=pyproject.toml"]
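Beyond the standard `pre-commit install` flow, individual hooks from this config can be exercised or updated on demand; a short sketch, assuming `pre-commit` is installed:

```bash
pre-commit run ruff --all-files       # run one hook by its id
pre-commit run gitleaks --all-files
pre-commit autoupdate                 # bump the pinned `rev` fields above
```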
CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,132 @@
1
+ # Contributor Covenant Code of Conduct
2
+
3
+ ## Our Pledge
4
+
5
+ We as members, contributors, and leaders pledge to make participation in our
6
+ community a harassment-free experience for everyone, regardless of age, body
7
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
8
+ identity and expression, level of experience, education, socio-economic status,
9
+ nationality, personal appearance, race, caste, color, religion, or sexual
10
+ identity and orientation.
11
+
12
+ We pledge to act and interact in ways that contribute to an open, welcoming,
13
+ diverse, inclusive, and healthy community.
14
+
15
+ ## Our Standards
16
+
17
+ Examples of behavior that contributes to a positive environment for our
18
+ community include:
19
+
20
+ - Demonstrating empathy and kindness toward other people
21
+ - Being respectful of differing opinions, viewpoints, and experiences
22
+ - Giving and gracefully accepting constructive feedback
23
+ - Accepting responsibility and apologizing to those affected by our mistakes,
24
+ and learning from the experience
25
+ - Focusing on what is best not just for us as individuals, but for the overall
26
+ community
27
+
28
+ Examples of unacceptable behavior include:
29
+
30
+ - The use of sexualized language or imagery, and sexual attention or advances of
31
+ any kind
32
+ - Trolling, insulting or derogatory comments, and personal or political attacks
33
+ - Public or private harassment
34
+ - Publishing others' private information, such as a physical or email address,
35
+ without their explicit permission
36
+ - Other conduct which could reasonably be considered inappropriate in a
37
+ professional setting
38
+
39
+ ## Enforcement Responsibilities
40
+
41
+ Community leaders are responsible for clarifying and enforcing our standards of
42
+ acceptable behavior and will take appropriate and fair corrective action in
43
+ response to any behavior that they deem inappropriate, threatening, offensive,
44
+ or harmful.
45
+
46
+ Community leaders have the right and responsibility to remove, edit, or reject
47
+ comments, commits, code, wiki edits, issues, and other contributions that are
48
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
49
+ decisions when appropriate.
50
+
51
+ ## Scope
52
+
53
+ This Code of Conduct applies within all community spaces, and also applies when
54
+ an individual is officially representing the community in public spaces.
55
+ Examples of representing our community include using an official e-mail address,
56
+ posting via an official social media account, or acting as an appointed
57
+ representative at an online or offline event.
58
+
59
+ ## Enforcement
60
+
61
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
62
+ reported to the community leaders responsible for enforcement at
63
+ feedback@huggingface.co.
64
+ All complaints will be reviewed and investigated promptly and fairly.
65
+
66
+ All community leaders are obligated to respect the privacy and security of the
67
+ reporter of any incident.
68
+
69
+ ## Enforcement Guidelines
70
+
71
+ Community leaders will follow these Community Impact Guidelines in determining
72
+ the consequences for any action they deem in violation of this Code of Conduct:
73
+
74
+ ### 1. Correction
75
+
76
+ **Community Impact**: Use of inappropriate language or other behavior deemed
77
+ unprofessional or unwelcome in the community.
78
+
79
+ **Consequence**: A private, written warning from community leaders, providing
80
+ clarity around the nature of the violation and an explanation of why the
81
+ behavior was inappropriate. A public apology may be requested.
82
+
83
+ ### 2. Warning
84
+
85
+ **Community Impact**: A violation through a single incident or series of
86
+ actions.
87
+
88
+ **Consequence**: A warning with consequences for continued behavior. No
89
+ interaction with the people involved, including unsolicited interaction with
90
+ those enforcing the Code of Conduct, for a specified period of time. This
91
+ includes avoiding interactions in community spaces as well as external channels
92
+ like social media. Violating these terms may lead to a temporary or permanent
93
+ ban.
94
+
95
+ ### 3. Temporary Ban
96
+
97
+ **Community Impact**: A serious violation of community standards, including
98
+ sustained inappropriate behavior.
99
+
100
+ **Consequence**: A temporary ban from any sort of interaction or public
101
+ communication with the community for a specified period of time. No public or
102
+ private interaction with the people involved, including unsolicited interaction
103
+ with those enforcing the Code of Conduct, is allowed during this period.
104
+ Violating these terms may lead to a permanent ban.
105
+
106
+ ### 4. Permanent Ban
107
+
108
+ **Community Impact**: Demonstrating a pattern of violation of community
109
+ standards, including sustained inappropriate behavior, harassment of an
110
+ individual, or aggression toward or disparagement of classes of individuals.
111
+
112
+ **Consequence**: A permanent ban from any sort of public interaction within the
113
+ community.
114
+
115
+ ## Attribution
116
+
117
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118
+ version 2.1, available at
119
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
120
+
121
+ Community Impact Guidelines were inspired by
122
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
123
+
124
+ For answers to common questions about this code of conduct, see the FAQ at
125
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
126
+ [https://www.contributor-covenant.org/translations][translations].
127
+
128
+ [homepage]: https://www.contributor-covenant.org
129
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
130
+ [Mozilla CoC]: https://github.com/mozilla/diversity
131
+ [FAQ]: https://www.contributor-covenant.org/faq
132
+ [translations]: https://www.contributor-covenant.org/translations
CONTRIBUTING.md ADDED
@@ -0,0 +1,83 @@
1
+ # How to contribute to 🤗 LeRobot
2
+
3
+ Everyone is welcome to contribute, and we value everybody's contribution. Code is not the only way to help the community. Answering questions, helping others, reaching out, and improving the documentation are immensely valuable.
4
+
5
+ Whichever way you choose to contribute, please be mindful to respect our [code of conduct](./CODE_OF_CONDUCT.md).
6
+
7
+ ## Ways to Contribute
8
+
9
+ You can contribute in many ways:
10
+
11
+ - **Fixing issues:** Resolve bugs or improve existing code.
12
+ - **New features:** Develop new features.
13
+ - **Extend:** Implement new models/policies, robots, or simulation environments and upload datasets to the Hugging Face Hub.
14
+ - **Documentation:** Improve examples, guides, and docstrings.
15
+ - **Feedback:** Submit tickets related to bugs or desired new features.
16
+
17
+ If you are unsure where to start, join our [Discord Channel](https://discord.gg/q8Dzzpym3f).
18
+
19
+ ## Development Setup
20
+
21
+ To contribute code, you need to set up a development environment.
22
+
23
+ ### 1. Fork and Clone
24
+
25
+ Fork the repository on GitHub, then clone your fork:
26
+
27
+ ```bash
28
+ git clone https://github.com/<your-handle>/lerobot.git
29
+ cd lerobot
30
+ git remote add upstream https://github.com/huggingface/lerobot.git
31
+ ```
32
+
33
+ ### 2. Environment Installation
34
+
35
+ Please follow our [Installation Guide](./docs/source/installation.mdx) for the environment setup & installation from source.
36
+
37
+ ## Running Tests & Quality Checks
38
+
39
+ ### Code Style (Pre-commit)
40
+
41
+ Install `pre-commit` hooks to run checks automatically before you commit:
42
+
43
+ ```bash
44
+ pre-commit install
45
+ ```
46
+
47
+ To run checks manually on all files:
48
+
49
+ ```bash
50
+ pre-commit run --all-files
51
+ ```
52
+
53
+ ### Running Tests
54
+
55
+ We use `pytest`. First, ensure you have test artifacts by installing **git-lfs**:
56
+
57
+ ```bash
58
+ git lfs install
59
+ git lfs pull
60
+ ```
61
+
62
+ Run the full suite (this may require installing extras):
63
+
64
+ ```bash
65
+ pytest -sv ./tests
66
+ ```
67
+
68
+ Or run a specific test file during development:
69
+
70
+ ```bash
71
+ pytest -sv tests/test_specific_feature.py
72
+ ```
73
+
74
+ ## Submitting Issues & Pull Requests
75
+
76
+ Use the templates for required fields and examples.
77
+
78
+ - **Issues:** Follow the [ticket template](./.github/ISSUE_TEMPLATE/bug-report.yml).
79
+ - **Pull requests:** Rebase on `upstream/main`, use a descriptive branch (don't work on `main`), run `pre-commit` and tests locally, and follow the [PR template](./.github/PULL_REQUEST_TEMPLATE.md). See the branch-workflow sketch at the end of this guide.
80
+
81
+ One member of the LeRobot team will then review your contribution.
82
+
83
+ Thank you for contributing to LeRobot!
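A minimal sketch of the branch workflow described in the pull-request checklist above; the branch name is hypothetical:

```bash
git checkout -b fix/dataset-loading       # descriptive branch, never work on main
git fetch upstream
git rebase upstream/main                  # stay current with upstream
pre-commit run --all-files                # style and lint checks
pytest -sv tests/test_specific_feature.py # run the tests you touched
git push -u origin fix/dataset-loading    # then open a PR from your fork
```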
LICENSE ADDED
@@ -0,0 +1,507 @@
1
+ Copyright 2024 The Hugging Face team. All rights reserved.
2
+
3
+ Apache License
4
+ Version 2.0, January 2004
5
+ http://www.apache.org/licenses/
6
+
7
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
8
+
9
+ 1. Definitions.
10
+
11
+ "License" shall mean the terms and conditions for use, reproduction,
12
+ and distribution as defined by Sections 1 through 9 of this document.
13
+
14
+ "Licensor" shall mean the copyright owner or entity authorized by
15
+ the copyright owner that is granting the License.
16
+
17
+ "Legal Entity" shall mean the union of the acting entity and all
18
+ other entities that control, are controlled by, or are under common
19
+ control with that entity. For the purposes of this definition,
20
+ "control" means (i) the power, direct or indirect, to cause the
21
+ direction or management of such entity, whether by contract or
22
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
23
+ outstanding shares, or (iii) beneficial ownership of such entity.
24
+
25
+ "You" (or "Your") shall mean an individual or Legal Entity
26
+ exercising permissions granted by this License.
27
+
28
+ "Source" form shall mean the preferred form for making modifications,
29
+ including but not limited to software source code, documentation
30
+ source, and configuration files.
31
+
32
+ "Object" form shall mean any form resulting from mechanical
33
+ transformation or translation of a Source form, including but
34
+ not limited to compiled object code, generated documentation,
35
+ and conversions to other media types.
36
+
37
+ "Work" shall mean the work of authorship, whether in Source or
38
+ Object form, made available under the License, as indicated by a
39
+ copyright notice that is included in or attached to the work
40
+ (an example is provided in the Appendix below).
41
+
42
+ "Derivative Works" shall mean any work, whether in Source or Object
43
+ form, that is based on (or derived from) the Work and for which the
44
+ editorial revisions, annotations, elaborations, or other modifications
45
+ represent, as a whole, an original work of authorship. For the purposes
46
+ of this License, Derivative Works shall not include works that remain
47
+ separable from, or merely link (or bind by name) to the interfaces of,
48
+ the Work and Derivative Works thereof.
49
+
50
+ "Contribution" shall mean any work of authorship, including
51
+ the original version of the Work and any modifications or additions
52
+ to that Work or Derivative Works thereof, that is intentionally
53
+ submitted to Licensor for inclusion in the Work by the copyright owner
54
+ or by an individual or Legal Entity authorized to submit on behalf of
55
+ the copyright owner. For the purposes of this definition, "submitted"
56
+ means any form of electronic, verbal, or written communication sent
57
+ to the Licensor or its representatives, including but not limited to
58
+ communication on electronic mailing lists, source code control systems,
59
+ and issue tracking systems that are managed by, or on behalf of, the
60
+ Licensor for the purpose of discussing and improving the Work, but
61
+ excluding communication that is conspicuously marked or otherwise
62
+ designated in writing by the copyright owner as "Not a Contribution."
63
+
64
+ "Contributor" shall mean Licensor and any individual or Legal Entity
65
+ on behalf of whom a Contribution has been received by Licensor and
66
+ subsequently incorporated within the Work.
67
+
68
+ 2. Grant of Copyright License. Subject to the terms and conditions of
69
+ this License, each Contributor hereby grants to You a perpetual,
70
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
71
+ copyright license to reproduce, prepare Derivative Works of,
72
+ publicly display, publicly perform, sublicense, and distribute the
73
+ Work and such Derivative Works in Source or Object form.
74
+
75
+ 3. Grant of Patent License. Subject to the terms and conditions of
76
+ this License, each Contributor hereby grants to You a perpetual,
77
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
78
+ (except as stated in this section) patent license to make, have made,
79
+ use, offer to sell, sell, import, and otherwise transfer the Work,
80
+ where such license applies only to those patent claims licensable
81
+ by such Contributor that are necessarily infringed by their
82
+ Contribution(s) alone or by combination of their Contribution(s)
83
+ with the Work to which such Contribution(s) was submitted. If You
84
+ institute patent litigation against any entity (including a
85
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
86
+ or a Contribution incorporated within the Work constitutes direct
87
+ or contributory patent infringement, then any patent licenses
88
+ granted to You under this License for that Work shall terminate
89
+ as of the date such litigation is filed.
90
+
91
+ 4. Redistribution. You may reproduce and distribute copies of the
92
+ Work or Derivative Works thereof in any medium, with or without
93
+ modifications, and in Source or Object form, provided that You
94
+ meet the following conditions:
95
+
96
+ (a) You must give any other recipients of the Work or
97
+ Derivative Works a copy of this License; and
98
+
99
+ (b) You must cause any modified files to carry prominent notices
100
+ stating that You changed the files; and
101
+
102
+ (c) You must retain, in the Source form of any Derivative Works
103
+ that You distribute, all copyright, patent, trademark, and
104
+ attribution notices from the Source form of the Work,
105
+ excluding those notices that do not pertain to any part of
106
+ the Derivative Works; and
107
+
108
+ (d) If the Work includes a "NOTICE" text file as part of its
109
+ distribution, then any Derivative Works that You distribute must
110
+ include a readable copy of the attribution notices contained
111
+ within such NOTICE file, excluding those notices that do not
112
+ pertain to any part of the Derivative Works, in at least one
113
+ of the following places: within a NOTICE text file distributed
114
+ as part of the Derivative Works; within the Source form or
115
+ documentation, if provided along with the Derivative Works; or,
116
+ within a display generated by the Derivative Works, if and
117
+ wherever such third-party notices normally appear. The contents
118
+ of the NOTICE file are for informational purposes only and
119
+ do not modify the License. You may add Your own attribution
120
+ notices within Derivative Works that You distribute, alongside
121
+ or as an addendum to the NOTICE text from the Work, provided
122
+ that such additional attribution notices cannot be construed
123
+ as modifying the License.
124
+
125
+ You may add Your own copyright statement to Your modifications and
126
+ may provide additional or different license terms and conditions
127
+ for use, reproduction, or distribution of Your modifications, or
128
+ for any such Derivative Works as a whole, provided Your use,
129
+ reproduction, and distribution of the Work otherwise complies with
130
+ the conditions stated in this License.
131
+
132
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
133
+ any Contribution intentionally submitted for inclusion in the Work
134
+ by You to the Licensor shall be under the terms and conditions of
135
+ this License, without any additional terms or conditions.
136
+ Notwithstanding the above, nothing herein shall supersede or modify
137
+ the terms of any separate license agreement you may have executed
138
+ with Licensor regarding such Contributions.
139
+
140
+ 6. Trademarks. This License does not grant permission to use the trade
141
+ names, trademarks, service marks, or product names of the Licensor,
142
+ except as required for reasonable and customary use in describing the
143
+ origin of the Work and reproducing the content of the NOTICE file.
144
+
145
+ 7. Disclaimer of Warranty. Unless required by applicable law or
146
+ agreed to in writing, Licensor provides the Work (and each
147
+ Contributor provides its Contributions) on an "AS IS" BASIS,
148
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
149
+ implied, including, without limitation, any warranties or conditions
150
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
151
+ PARTICULAR PURPOSE. You are solely responsible for determining the
152
+ appropriateness of using or redistributing the Work and assume any
153
+ risks associated with Your exercise of permissions under this License.
154
+
155
+ 8. Limitation of Liability. In no event and under no legal theory,
156
+ whether in tort (including negligence), contract, or otherwise,
157
+ unless required by applicable law (such as deliberate and grossly
158
+ negligent acts) or agreed to in writing, shall any Contributor be
159
+ liable to You for damages, including any direct, indirect, special,
160
+ incidental, or consequential damages of any character arising as a
161
+ result of this License or out of the use or inability to use the
162
+ Work (including but not limited to damages for loss of goodwill,
163
+ work stoppage, computer failure or malfunction, or any and all
164
+ other commercial damages or losses), even if such Contributor
165
+ has been advised of the possibility of such damages.
166
+
167
+ 9. Accepting Warranty or Additional Liability. While redistributing
168
+ the Work or Derivative Works thereof, You may choose to offer,
169
+ and charge a fee for, acceptance of support, warranty, indemnity,
170
+ or other liability obligations and/or rights consistent with this
171
+ License. However, in accepting such obligations, You may act only
172
+ on Your own behalf and on Your sole responsibility, not on behalf
173
+ of any other Contributor, and only if You agree to indemnify,
174
+ defend, and hold each Contributor harmless for any liability
175
+ incurred by, or claims asserted against, such Contributor by reason
176
+ of your accepting any such warranty or additional liability.
177
+
178
+ END OF TERMS AND CONDITIONS
179
+
180
+ APPENDIX: How to apply the Apache License to your work.
181
+
182
+ To apply the Apache License to your work, attach the following
183
+ boilerplate notice, with the fields enclosed by brackets "[]"
184
+ replaced with your own identifying information. (Don't include
185
+ the brackets!) The text should be enclosed in the appropriate
186
+ comment syntax for the file format. We also recommend that a
187
+ file or class name and description of purpose be included on the
188
+ same "printed page" as the copyright notice for easier
189
+ identification within third-party archives.
190
+
191
+ Copyright [yyyy] [name of copyright owner]
192
+
193
+ Licensed under the Apache License, Version 2.0 (the "License");
194
+ you may not use this file except in compliance with the License.
195
+ You may obtain a copy of the License at
196
+
197
+ http://www.apache.org/licenses/LICENSE-2.0
198
+
199
+ Unless required by applicable law or agreed to in writing, software
200
+ distributed under the License is distributed on an "AS IS" BASIS,
201
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
202
+ See the License for the specific language governing permissions and
203
+ limitations under the License.
204
+
205
+
206
+ ## Some of lerobot's code is derived from Diffusion Policy, which is subject to the following copyright notice:
207
+
208
+ MIT License
209
+
210
+ Copyright (c) 2023 Columbia Artificial Intelligence and Robotics Lab
211
+
212
+ Permission is hereby granted, free of charge, to any person obtaining a copy
213
+ of this software and associated documentation files (the "Software"), to deal
214
+ in the Software without restriction, including without limitation the rights
215
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
216
+ copies of the Software, and to permit persons to whom the Software is
217
+ furnished to do so, subject to the following conditions:
218
+
219
+ The above copyright notice and this permission notice shall be included in all
220
+ copies or substantial portions of the Software.
221
+
222
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
223
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
224
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
225
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
226
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
227
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
228
+ SOFTWARE.
229
+
230
+
231
+ ## Some of lerobot's code is derived from FOWM, which is subject to the following copyright notice:
232
+
233
+ MIT License
234
+
235
+ Copyright (c) 2023 Yunhai Feng
236
+
237
+ Permission is hereby granted, free of charge, to any person obtaining a copy
238
+ of this software and associated documentation files (the "Software"), to deal
239
+ in the Software without restriction, including without limitation the rights
240
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
241
+ copies of the Software, and to permit persons to whom the Software is
242
+ furnished to do so, subject to the following conditions:
243
+
244
+ The above copyright notice and this permission notice shall be included in all
245
+ copies or substantial portions of the Software.
246
+
247
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
248
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
249
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
250
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
251
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
252
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
253
+ SOFTWARE.
254
+
255
+
256
+ ## Some of lerobot's code is derived from simxarm, which is subject to the following copyright notice:
257
+
258
+ MIT License
259
+
260
+ Copyright (c) 2023 Nicklas Hansen & Yanjie Ze
261
+
262
+ Permission is hereby granted, free of charge, to any person obtaining a copy
263
+ of this software and associated documentation files (the "Software"), to deal
264
+ in the Software without restriction, including without limitation the rights
265
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
266
+ copies of the Software, and to permit persons to whom the Software is
267
+ furnished to do so, subject to the following conditions:
268
+
269
+ The above copyright notice and this permission notice shall be included in all
270
+ copies or substantial portions of the Software.
271
+
272
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
273
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
274
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
275
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
276
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
277
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
278
+ SOFTWARE.
279
+
280
+
281
+ ## Some of lerobot's code is derived from ALOHA, which is subject to the following copyright notice:
282
+
283
+ MIT License
284
+
285
+ Copyright (c) 2023 Tony Z. Zhao
286
+
287
+ Permission is hereby granted, free of charge, to any person obtaining a copy
288
+ of this software and associated documentation files (the "Software"), to deal
289
+ in the Software without restriction, including without limitation the rights
290
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
291
+ copies of the Software, and to permit persons to whom the Software is
292
+ furnished to do so, subject to the following conditions:
293
+
294
+ The above copyright notice and this permission notice shall be included in all
295
+ copies or substantial portions of the Software.
296
+
297
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
298
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
299
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
300
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
301
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
302
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
303
+ SOFTWARE.
304
+
305
+ ## Some of lerobot's code is derived from DETR, which is subject to the following copyright notice:
306
+
307
+ Apache License
308
+ Version 2.0, January 2004
309
+ http://www.apache.org/licenses/
310
+
311
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
312
+
313
+ 1. Definitions.
314
+
315
+ "License" shall mean the terms and conditions for use, reproduction,
316
+ and distribution as defined by Sections 1 through 9 of this document.
317
+
318
+ "Licensor" shall mean the copyright owner or entity authorized by
319
+ the copyright owner that is granting the License.
320
+
321
+ "Legal Entity" shall mean the union of the acting entity and all
322
+ other entities that control, are controlled by, or are under common
323
+ control with that entity. For the purposes of this definition,
324
+ "control" means (i) the power, direct or indirect, to cause the
325
+ direction or management of such entity, whether by contract or
326
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
327
+ outstanding shares, or (iii) beneficial ownership of such entity.
328
+
329
+ "You" (or "Your") shall mean an individual or Legal Entity
330
+ exercising permissions granted by this License.
331
+
332
+ "Source" form shall mean the preferred form for making modifications,
333
+ including but not limited to software source code, documentation
334
+ source, and configuration files.
335
+
336
+ "Object" form shall mean any form resulting from mechanical
337
+ transformation or translation of a Source form, including but
338
+ not limited to compiled object code, generated documentation,
339
+ and conversions to other media types.
340
+
341
+ "Work" shall mean the work of authorship, whether in Source or
342
+ Object form, made available under the License, as indicated by a
343
+ copyright notice that is included in or attached to the work
344
+ (an example is provided in the Appendix below).
345
+
346
+ "Derivative Works" shall mean any work, whether in Source or Object
347
+ form, that is based on (or derived from) the Work and for which the
348
+ editorial revisions, annotations, elaborations, or other modifications
349
+ represent, as a whole, an original work of authorship. For the purposes
350
+ of this License, Derivative Works shall not include works that remain
351
+ separable from, or merely link (or bind by name) to the interfaces of,
352
+ the Work and Derivative Works thereof.
353
+
354
+ "Contribution" shall mean any work of authorship, including
355
+ the original version of the Work and any modifications or additions
356
+ to that Work or Derivative Works thereof, that is intentionally
357
+ submitted to Licensor for inclusion in the Work by the copyright owner
358
+ or by an individual or Legal Entity authorized to submit on behalf of
359
+ the copyright owner. For the purposes of this definition, "submitted"
360
+ means any form of electronic, verbal, or written communication sent
361
+ to the Licensor or its representatives, including but not limited to
362
+ communication on electronic mailing lists, source code control systems,
363
+ and issue tracking systems that are managed by, or on behalf of, the
364
+ Licensor for the purpose of discussing and improving the Work, but
365
+ excluding communication that is conspicuously marked or otherwise
366
+ designated in writing by the copyright owner as "Not a Contribution."
367
+
368
+ "Contributor" shall mean Licensor and any individual or Legal Entity
369
+ on behalf of whom a Contribution has been received by Licensor and
370
+ subsequently incorporated within the Work.
371
+
372
+ 2. Grant of Copyright License. Subject to the terms and conditions of
373
+ this License, each Contributor hereby grants to You a perpetual,
374
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
375
+ copyright license to reproduce, prepare Derivative Works of,
376
+ publicly display, publicly perform, sublicense, and distribute the
377
+ Work and such Derivative Works in Source or Object form.
378
+
379
+ 3. Grant of Patent License. Subject to the terms and conditions of
380
+ this License, each Contributor hereby grants to You a perpetual,
381
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
382
+ (except as stated in this section) patent license to make, have made,
383
+ use, offer to sell, sell, import, and otherwise transfer the Work,
384
+ where such license applies only to those patent claims licensable
385
+ by such Contributor that are necessarily infringed by their
386
+ Contribution(s) alone or by combination of their Contribution(s)
387
+ with the Work to which such Contribution(s) was submitted. If You
388
+ institute patent litigation against any entity (including a
389
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
390
+ or a Contribution incorporated within the Work constitutes direct
391
+ or contributory patent infringement, then any patent licenses
392
+ granted to You under this License for that Work shall terminate
393
+ as of the date such litigation is filed.
394
+
395
+ 4. Redistribution. You may reproduce and distribute copies of the
396
+ Work or Derivative Works thereof in any medium, with or without
397
+ modifications, and in Source or Object form, provided that You
398
+ meet the following conditions:
399
+
400
+ (a) You must give any other recipients of the Work or
401
+ Derivative Works a copy of this License; and
402
+
403
+ (b) You must cause any modified files to carry prominent notices
404
+ stating that You changed the files; and
405
+
406
+ (c) You must retain, in the Source form of any Derivative Works
407
+ that You distribute, all copyright, patent, trademark, and
408
+ attribution notices from the Source form of the Work,
409
+ excluding those notices that do not pertain to any part of
410
+ the Derivative Works; and
411
+
412
+ (d) If the Work includes a "NOTICE" text file as part of its
413
+ distribution, then any Derivative Works that You distribute must
414
+ include a readable copy of the attribution notices contained
415
+ within such NOTICE file, excluding those notices that do not
416
+ pertain to any part of the Derivative Works, in at least one
417
+ of the following places: within a NOTICE text file distributed
418
+ as part of the Derivative Works; within the Source form or
419
+ documentation, if provided along with the Derivative Works; or,
420
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2020 - present, Facebook, Inc

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
MANIFEST.in ADDED
@@ -0,0 +1,2 @@
include src/lerobot/templates/lerobot_modelcard_template.md
include src/lerobot/datasets/card_template.md
Makefile ADDED
@@ -0,0 +1,180 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

.PHONY: tests

PYTHON_PATH := $(shell which python)

# If uv is installed and a virtual environment exists, use it
UV_CHECK := $(shell command -v uv)
ifneq ($(UV_CHECK),)
	PYTHON_PATH := .venv/bin/python
endif

export PATH := $(dir $(PYTHON_PATH)):$(PATH)

DEVICE ?= cpu
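
# Usage: DEVICE can be overridden from the command line, e.g.
# `make test-end-to-end DEVICE=cuda`; it is forwarded to each test target below.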

build-user:
	docker build -f docker/Dockerfile.user -t lerobot-user .

build-internal:
	docker build -f docker/Dockerfile.internal -t lerobot-internal .

test-end-to-end:
	${MAKE} DEVICE=$(DEVICE) test-act-ete-train
	${MAKE} DEVICE=$(DEVICE) test-act-ete-train-resume
	${MAKE} DEVICE=$(DEVICE) test-act-ete-eval
	${MAKE} DEVICE=$(DEVICE) test-diffusion-ete-train
	${MAKE} DEVICE=$(DEVICE) test-diffusion-ete-eval
	${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-train
	${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-eval
	${MAKE} DEVICE=$(DEVICE) test-smolvla-ete-train
	${MAKE} DEVICE=$(DEVICE) test-smolvla-ete-eval

test-act-ete-train:
	lerobot-train \
		--policy.type=act \
		--policy.dim_model=64 \
		--policy.n_action_steps=20 \
		--policy.chunk_size=20 \
		--policy.device=$(DEVICE) \
		--policy.push_to_hub=false \
		--env.type=aloha \
		--env.episode_length=5 \
		--dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
		--dataset.image_transforms.enable=true \
		--dataset.episodes="[0]" \
		--batch_size=2 \
		--steps=4 \
		--eval_freq=2 \
		--eval.n_episodes=1 \
		--eval.batch_size=1 \
		--save_freq=2 \
		--save_checkpoint=true \
		--log_freq=1 \
		--wandb.enable=false \
		--output_dir=tests/outputs/act/

test-act-ete-train-resume:
	lerobot-train \
		--config_path=tests/outputs/act/checkpoints/000002/pretrained_model/train_config.json \
		--resume=true

test-act-ete-eval:
	lerobot-eval \
		--policy.path=tests/outputs/act/checkpoints/000004/pretrained_model \
		--policy.device=$(DEVICE) \
		--env.type=aloha \
		--env.episode_length=5 \
		--eval.n_episodes=1 \
		--eval.batch_size=1

test-diffusion-ete-train:
	lerobot-train \
		--policy.type=diffusion \
		--policy.down_dims='[64,128,256]' \
		--policy.diffusion_step_embed_dim=32 \
		--policy.num_inference_steps=10 \
		--policy.device=$(DEVICE) \
		--policy.push_to_hub=false \
		--env.type=pusht \
		--env.episode_length=5 \
		--dataset.repo_id=lerobot/pusht \
		--dataset.image_transforms.enable=true \
		--dataset.episodes="[0]" \
		--batch_size=2 \
		--steps=2 \
		--eval_freq=2 \
		--eval.n_episodes=1 \
		--eval.batch_size=1 \
		--save_checkpoint=true \
		--save_freq=2 \
		--log_freq=1 \
		--wandb.enable=false \
		--output_dir=tests/outputs/diffusion/

test-diffusion-ete-eval:
	lerobot-eval \
		--policy.path=tests/outputs/diffusion/checkpoints/000002/pretrained_model \
		--policy.device=$(DEVICE) \
		--env.type=pusht \
		--env.episode_length=5 \
		--eval.n_episodes=1 \
		--eval.batch_size=1

test-tdmpc-ete-train:
	lerobot-train \
		--policy.type=tdmpc \
		--policy.device=$(DEVICE) \
		--policy.push_to_hub=false \
		--env.type=pusht \
		--env.episode_length=5 \
		--dataset.repo_id=lerobot/pusht_image \
		--dataset.image_transforms.enable=true \
		--dataset.episodes="[0]" \
		--batch_size=2 \
		--steps=2 \
		--eval_freq=2 \
		--eval.n_episodes=1 \
		--eval.batch_size=1 \
		--save_checkpoint=true \
		--save_freq=2 \
		--log_freq=1 \
		--wandb.enable=false \
		--output_dir=tests/outputs/tdmpc/

test-tdmpc-ete-eval:
	lerobot-eval \
		--policy.path=tests/outputs/tdmpc/checkpoints/000002/pretrained_model \
		--policy.device=$(DEVICE) \
		--env.type=pusht \
		--env.episode_length=5 \
		--env.observation_height=96 \
		--env.observation_width=96 \
		--eval.n_episodes=1 \
		--eval.batch_size=1

test-smolvla-ete-train:
	lerobot-train \
		--policy.type=smolvla \
		--policy.n_action_steps=20 \
		--policy.chunk_size=20 \
		--policy.device=$(DEVICE) \
		--policy.push_to_hub=false \
		--env.type=aloha \
		--env.episode_length=5 \
		--dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
		--dataset.image_transforms.enable=true \
		--dataset.episodes="[0]" \
		--batch_size=2 \
		--steps=4 \
		--eval_freq=2 \
		--eval.n_episodes=1 \
		--eval.batch_size=1 \
		--save_freq=2 \
		--save_checkpoint=true \
		--log_freq=1 \
		--wandb.enable=false \
		--output_dir=tests/outputs/smolvla/

test-smolvla-ete-eval:
	lerobot-eval \
		--policy.path=tests/outputs/smolvla/checkpoints/000004/pretrained_model \
		--policy.device=$(DEVICE) \
		--env.type=aloha \
		--env.episode_length=5 \
		--eval.n_episodes=1 \
		--eval.batch_size=1
README.md ADDED
@@ -0,0 +1,159 @@
<p align="center">
  <img alt="LeRobot, Hugging Face Robotics Library" src="./media/readme/lerobot-logo-thumbnail.png" width="100%">
</p>

<div align="center">

[![Tests](https://github.com/huggingface/lerobot/actions/workflows/nightly.yml/badge.svg?branch=main)](https://github.com/huggingface/lerobot/actions/workflows/nightly.yml?query=branch%3Amain)
[![Python versions](https://img.shields.io/pypi/pyversions/lerobot)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/huggingface/lerobot/blob/main/LICENSE)
[![Status](https://img.shields.io/pypi/status/lerobot)](https://pypi.org/project/lerobot/)
[![Version](https://img.shields.io/pypi/v/lerobot)](https://pypi.org/project/lerobot/)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.1-ff69b4.svg)](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
[![Discord](https://img.shields.io/badge/Discord-Join_Us-5865F2?style=flat&logo=discord&logoColor=white)](https://discord.gg/q8Dzzpym3f)

</div>

**LeRobot** aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry so that everyone can contribute to and benefit from shared datasets and pretrained models.

🤗 A hardware-agnostic, Python-native interface that standardizes control across diverse platforms, from low-cost arms (SO-100) to humanoids.

🤗 A standardized, scalable LeRobotDataset format (Parquet + MP4 or images) hosted on the Hugging Face Hub, enabling efficient storage, streaming, and visualization of massive robotics datasets.

🤗 State-of-the-art policies that have been shown to transfer to the real world, ready for training and deployment.

🤗 Comprehensive support for the open-source ecosystem to democratize physical AI.

## Quick Start

LeRobot can be installed directly from PyPI.

```bash
pip install lerobot
lerobot-info
```

> [!IMPORTANT]
> For a detailed installation guide, please see the [Installation Documentation](https://huggingface.co/docs/lerobot/installation).

## Robots & Control

<div align="center">
  <img src="./media/readme/robots_control_video.webp" width="640px" alt="Reachy 2 Demo">
</div>

LeRobot provides a unified `Robot` class interface that decouples control logic from hardware specifics. It supports a wide range of robots and teleoperation devices.

```python
from lerobot.robots.myrobot import MyRobot

# Connect to a robot
robot = MyRobot(config=...)
robot.connect()

# Read an observation and send an action (`model` is a pretrained policy)
obs = robot.get_observation()
action = model.select_action(obs)
robot.send_action(action)
```

**Supported Hardware:** SO100, LeKiwi, Koch, HopeJR, OMX, EarthRover, Reachy2, Gamepads, Keyboards, Phones, OpenARM, Unitree G1.

While these devices are natively integrated into the LeRobot codebase, the library is designed to be extensible. You can easily implement the `Robot` interface to use LeRobot's data collection, training, and visualization tools with your own custom robot.

For detailed hardware setup guides, see the [Hardware Documentation](https://huggingface.co/docs/lerobot/integrate_hardware).

## LeRobot Dataset

To solve the data fragmentation problem in robotics, we use the **LeRobotDataset** format.

- **Structure:** Synchronized MP4 videos (or images) for vision and Parquet files for state/action data.
- **HF Hub Integration:** Explore thousands of robotics datasets on the [Hugging Face Hub](https://huggingface.co/lerobot).
- **Tools:** Seamlessly delete episodes, split by indices/fractions, add/remove features, and merge multiple datasets.

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load a dataset from the Hub
dataset = LeRobotDataset("lerobot/aloha_mobile_cabinet")

# Access data (video decoding is handled automatically)
episode_index = 0
print(f"{dataset[episode_index]['action'].shape=}\n")
```

Learn more about it in the [LeRobotDataset Documentation](https://huggingface.co/docs/lerobot/lerobot-dataset-v3).

## SoTA Models

LeRobot implements state-of-the-art policies in pure PyTorch, covering Imitation Learning, Reinforcement Learning, and Vision-Language-Action (VLA) models, with more coming soon. It also provides you with the tools to instrument and inspect your training process.

<p align="center">
  <img alt="Gr00t Architecture" src="./media/readme/VLA_architecture.jpg" width="640px">
</p>

Training a policy is as simple as running the training script with a configuration:

```bash
lerobot-train \
  --policy.type=act \
  --dataset.repo_id=lerobot/aloha_mobile_cabinet
```

| Category                   | Models                                                                                                                                                                                                       |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Imitation Learning**     | [ACT](./docs/source/policy_act_README.md), [Diffusion](./docs/source/policy_diffusion_README.md), [VQ-BeT](./docs/source/policy_vqbet_README.md)                                                             |
| **Reinforcement Learning** | [HIL-SERL](./docs/source/hilserl.mdx), [TDMPC](./docs/source/policy_tdmpc_README.md) & QC-FQL (coming soon)                                                                                                   |
| **VLA Models**             | [Pi0Fast](./docs/source/pi0fast.mdx), [Pi0.5](./docs/source/pi05.mdx), [GR00T N1.5](./docs/source/policy_groot_README.md), [SmolVLA](./docs/source/policy_smolvla_README.md), [XVLA](./docs/source/xvla.mdx) |

Similarly to the hardware, you can easily implement your own policy, leverage LeRobot's data collection, training, and visualization tools, and share your model on the HF Hub.

For detailed policy setup guides, see the [Policy Documentation](https://huggingface.co/docs/lerobot/bring_your_own_policies).

## Inference & Evaluation

Evaluate your policies in simulation or on real hardware using the unified evaluation script. LeRobot supports standard benchmarks like **LIBERO** and **MetaWorld**, with more to come.

```bash
# Evaluate a policy on the LIBERO benchmark
lerobot-eval \
  --policy.path=lerobot/pi0_libero_finetuned \
  --env.type=libero \
  --env.task=libero_object \
  --eval.n_episodes=10
```

Learn how to implement your own simulation environment or benchmark and distribute it via the HF Hub by following the [EnvHub Documentation](https://huggingface.co/docs/lerobot/envhub).

## Resources

- **[Documentation](https://huggingface.co/docs/lerobot/index):** The complete guide to tutorials & the API.
- **[Chinese Tutorials: LeRobot+SO-ARM101中文教程-同济子豪兄](https://zihao-ai.feishu.cn/wiki/space/7589642043471924447):** A detailed guide (in Chinese) covering assembly, teleoperation, dataset creation, training, and deployment. Verified by Seeed Studio and 5 global hackathon participants.
- **[Discord](https://discord.gg/q8Dzzpym3f):** Join the `LeRobot` server to discuss with the community.
- **[X](https://x.com/LeRobotHF):** Follow us on X to stay up-to-date with the latest developments.
- **[Robot Learning Tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial):** A free, hands-on course on robot learning with LeRobot.

## Citation

If you use LeRobot in your research, please cite:

```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascal, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```

## Contribute

We welcome contributions from everyone in the community! To get started, please read our [CONTRIBUTING.md](./CONTRIBUTING.md) guide. Whether you're adding a new feature, improving documentation, or fixing a bug, your help and feedback are invaluable. We're incredibly excited about the future of open-source robotics and can't wait to work with you on what's next. Thank you for your support!

<p align="center">
  <img alt="SO101 Video" src="./media/readme/so100_video.webp" width="640px">
</p>

<div align="center">
  <sub>Built by the <a href="https://huggingface.co/lerobot">LeRobot</a> team at <a href="https://huggingface.co">Hugging Face</a> with ❤️</sub>
</div>
SECURITY.md ADDED
@@ -0,0 +1,48 @@
# Security Policy

## Project Status & Philosophy

`lerobot` has so far been primarily a research and prototyping tool, which is why deployment security hasn't been a strong focus until now. As `lerobot` continues to be adopted and deployed in production, we are paying much closer attention to these kinds of issues.

Fortunately, being an open-source project, the community can also help by reporting and fixing vulnerabilities. We appreciate your efforts to responsibly disclose your findings and will make every effort to acknowledge your contributions.

## Reporting a Vulnerability

To report a security issue, please use the GitHub Security Advisory ["Report a Vulnerability"](https://github.com/huggingface/lerobot/security/advisories/new) tab.

The `lerobot` team will send a response indicating the next steps in handling your report. After the initial reply, the security team will keep you informed of progress towards a fix and full announcement, and may ask for additional information or guidance.

#### Hugging Face Security Team

Since this project is part of the Hugging Face ecosystem, feel free to submit vulnerability reports directly to **[security@huggingface.co](mailto:security@huggingface.co)**. Someone from the HF security team will review the report and recommend next steps.

#### Open Source Disclosures

If reporting a vulnerability specific to the open-source codebase (and not the underlying Hub infrastructure), you may also use [Huntr](https://huntr.com), a vulnerability disclosure program for open source software.

## Supported Versions

Currently, we treat `lerobot` as a rolling release. We prioritize security updates for the latest available version (`main` branch).

| Version  | Supported |
| -------- | --------- |
| Latest   | ✅        |
| < Latest | ❌        |

## Secure Usage Guidelines

`lerobot` is tightly coupled to the Hugging Face Hub for sharing data and pretrained policies. When downloading artifacts uploaded by others, you expose yourself to risks. Please read the recommendations below to keep your runtime and robot environment safe.

### Remote Artefacts (Weights & Policies)

Models and policies uploaded to the Hugging Face Hub come in different formats. We strongly recommend uploading and downloading models in the [`safetensors`](https://github.com/huggingface/safetensors) format.

`safetensors` was developed specifically to prevent arbitrary code execution on your system, which is critical when running software on physical hardware/robots.

To avoid loading models from unsafe formats (e.g., `pickle`), make sure you prioritize `safetensors` files.
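
For instance, loading weights with the `safetensors` library is pure tensor deserialization (a minimal sketch; the filename is illustrative):

```python
from safetensors.torch import load_file

# Unlike pickle, this cannot execute arbitrary code: it only reads tensors.
state_dict = load_file("model.safetensors")
```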

### Remote Code

Some models or environments on the Hub may require `trust_remote_code=True` to run custom architecture code.

Please **always** verify the content of the modeling files when using this argument. We recommend setting a specific `revision` (commit hash) when loading remote code to ensure you protect yourself from unverified updates to the repository.
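
For example, with `huggingface_hub` you can pin an exact commit (a minimal sketch; the repo id and hash below are placeholders):

```python
from huggingface_hub import snapshot_download

# Pin an exact commit so later, unreviewed changes to the repository
# cannot silently alter the code you download and execute.
local_dir = snapshot_download("some-user/some-policy", revision="abc1234")
```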
benchmarks/video/README.md ADDED
@@ -0,0 +1,288 @@
# Video benchmark

## Questions

What is the optimal trade-off between:

- minimizing loading time with random access,
- minimizing memory space on disk,
- maximizing the success rate of policies,
- compatibility across devices/platforms for decoding videos (e.g. video players, web browsers).

How to encode videos?

- Which video codec (`-vcodec`) to use? h264, h265, or AV1?
- Which pixel format (`-pix_fmt`) to use? `yuv444p` or `yuv420p`?
- How much compression (`-crf`)? No compression with `0`, intermediate compression with `25`, or extreme with `50+`?
- Which frequency to choose for key frames (`-g`)? A key frame every `10` frames?

How to decode videos?

- Which `decoder`? `torchvision`, `torchaudio`, `ffmpegio`, `decord`, or `nvc`?
- Which scenarios to use for requesting timestamps during the benchmark (`timestamps_mode`)?

## Variables

**Image content & size**
We don't expect the same optimal settings for a dataset of simulation images as for real-world images captured in an apartment, in a factory, outdoors, or in scenes with lots of moving objects. Similarly, loading times might not vary linearly with the image size (resolution).
For these reasons, we run this benchmark on four representative datasets:

- `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
- `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
- `aliberts/paris_street`: (720 x 1280 pixels) real-world outdoor, moving camera.
- `aliberts/kitchen`: (1080 x 1920 pixels) real-world indoor, fixed camera.

Note: The datasets used for this benchmark need to be image datasets, not video datasets.

**Data augmentations**
We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).

### Encoding parameters

| parameter   | values                                                       |
| ----------- | ------------------------------------------------------------ |
| **vcodec**  | `libx264`, `libx265`, `libsvtav1`                            |
| **pix_fmt** | `yuv444p`, `yuv420p`                                         |
| **g**       | `1`, `2`, `3`, `4`, `5`, `6`, `10`, `15`, `20`, `40`, `None` |
| **crf**     | `0`, `5`, `10`, `15`, `20`, `25`, `30`, `40`, `50`, `None`   |

Note that the `crf` value might be interpreted differently by different video codecs. In other words, the same value used with one codec doesn't necessarily translate into the same compression level with another codec. In fact, the default value (`None`) isn't the same amongst the different video codecs. Importantly, this is also the case for many other ffmpeg arguments, such as `g`, which specifies the frequency of the key frames.

For a comprehensive list and documentation of these parameters, see the ffmpeg documentation for the video codec used:

- h264: https://trac.ffmpeg.org/wiki/Encode/H.264
- h265: https://trac.ffmpeg.org/wiki/Encode/H.265
- AV1: https://trac.ffmpeg.org/wiki/Encode/AV1

### Decoding parameters

**Decoder**
We tested two video decoding backends from torchvision:

- `pyav`
- `video_reader` (requires building torchvision from source)

**Requested timestamps**
Given the way video decoding works, once a keyframe has been loaded, the decoding of subsequent frames is fast.
This is of course affected by the `-g` parameter during encoding, which specifies the frequency of the keyframes. Given our typical use cases in robotics policies, which might request a few timestamps at different random places, we want to replicate these use cases with the following scenarios:

- `1_frame`: 1 frame,
- `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
- `6_frames`: 6 consecutive frames (e.g. `[t + i / fps for i in range(6)]`)

Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have large values for `-g`.

Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:

- `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g. `[t, t + 5 / fps]`),

However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek, so in practice this scenario is essentially the same as `6_frames`, since all 6 frames between `t` and `t + 5 / fps` will be decoded.

## Metrics

**Data compression ratio (lower is better)**
`video_images_size_ratio` is the ratio of the memory space on disk taken by the encoded video over the memory space taken by the original images. For instance, `video_images_size_ratio=25%` means that the video takes 4 times less memory space on disk than the original images.

**Loading time ratio (lower is better)**
`video_images_load_time_ratio` is the ratio of the time it takes to decode frames from the video at given timestamps over the time it takes to load the exact same original images. For instance, `video_images_load_time_ratio=200%` means that decoding from video is 2 times slower than loading the original images.

**Average Mean Square Error (lower is better)**
`avg_mse` is the average mean square error between each decoded frame and its corresponding original image over all requested timestamps, normalized by the number of pixels in the image so that results are comparable across image sizes.

**Average Peak Signal to Noise Ratio (higher is better)**
`avg_psnr` measures the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Higher PSNR indicates better quality.

**Average Structural Similarity Index Measure (higher is better)**
`avg_ssim` evaluates the perceived quality of images by comparing luminance, contrast, and structure. SSIM values range from -1 to 1, where 1 indicates perfect similarity.
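
For reference, PSNR is derived from the per-frame MSE; frames here are normalized to $[0, 1]$, so the peak value is $\mathrm{MAX} = 1$:

$$
\mathrm{PSNR} = 10 \, \log_{10}\!\left( \frac{\mathrm{MAX}^2}{\mathrm{MSE}} \right)
$$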

One aspect that can't be measured here with these metrics is the compatibility of the encoding across platforms, in particular in web browsers, for visualization purposes.
h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:

- `yuv420p` is more widely supported across various platforms, including web browsers.
- `yuv444p` offers higher color fidelity but might not be supported as broadly.

<!-- **Loss of a pretrained policy (higher is better)** (not available)
`loss_pretrained` is the result of evaluating with the selected encoding/decoding settings a policy pretrained on original images. It is easier to understand than `avg_l2_error`.

**Success rate after retraining (higher is better)** (not available)
`success_rate` is the result of training and evaluating a policy with the selected encoding/decoding settings. It is the most difficult metric to get but also the very best. -->

## How the benchmark works

The benchmark evaluates both encoding and decoding of video frames on the first episode of each dataset.

**Encoding:** for each `vcodec` and `pix_fmt` pair, we start from default values for `g` and `crf` and change a single one (either `g` or `crf`) to each of the specified values (we don't test every combination of those, as this would be computationally too heavy).
This gives a unique set of encoding parameters, which is used to encode the episode.

**Decoding:** Then, for each of those unique encodings, we iterate through every combination of the decoding parameters `backend` and `timestamps_mode`. For each of them, we record the metrics of a number of samples (given by `--num-samples`). This is parallelized for efficiency, and the number of processes can be controlled with `--num-workers`. Ideally, it's best to have a `--num-samples` that is divisible by `--num-workers`.

Intermediate results are saved for each `vcodec` and `pix_fmt` combination in CSV tables.
These are then all concatenated into a single table ready for analysis.
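
As a minimal sketch of the one-parameter-at-a-time sweep described above (illustrative only; the actual loop lives in `run_video_benchmark.py` below):

```python
# Start from a base encoding and vary one parameter at a time.
base = {"vcodec": "libx264", "pix_fmt": "yuv444p", "g": 2, "crf": None}
sweeps = {"g": [1, 2, 20, None], "crf": [0, 30, None]}

unique_encodings = []
for param, values in sweeps.items():
    for value in values:
        cfg = {**base, param: value}  # one unique set of encoding parameters
        unique_encodings.append(cfg)
```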

## Caveats

We tried to measure the most impactful parameters for both encoding and decoding. However, for computational reasons we can't test every combination.

Additional encoding parameters exist that are not included in this benchmark. In particular:

- `-preset`, which selects an encoding preset: a collection of options that trades encoding speed against compression ratio. When left unspecified, it is considered to be `medium` for libx264 and libx265 and `8` for libsvtav1.
- `-tune`, which optimizes the encoding for certain aspects (e.g. film quality, fast decoding, etc.).

See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.

Similarly, on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:

- `torchaudio`
- `ffmpegio`
- `decord`
- `nvc`

Note as well that since we are mostly interested in performance at decoding time (encoding is done only once, before uploading a dataset), we did not measure encoding times, nor do we have any metrics regarding encoding.
However, besides the necessity of building ffmpeg from source, encoding did not pose any issue and didn't take a significant amount of time during this benchmark.

## Install

Building ffmpeg from source is required to include the libx265 and libaom/libsvtav1 (AV1) video codecs ([compilation guide](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu)).

**Note:** While you still need to build torchvision with a conda-installed `ffmpeg<4.3` to use the `video_reader` decoder (as described in [#220](https://github.com/huggingface/lerobot/pull/220)), you also need another version that is custom-built with all the video codecs for encoding. For the script to use that version, you can prepend the benchmark command with `PATH="$HOME/bin:$PATH"`, which is where ffmpeg should be built.

## Adding a video decoder

Right now, we're only benchmarking the two video decoders available with torchvision: `pyav` and `video_reader`.
You can easily add a new decoder to benchmark by adding it to this function in the script:

```diff
def decode_video_frames(
    video_path: str,
    timestamps: list[float],
    tolerance_s: float,
    backend: str,
) -> torch.Tensor:
    if backend in ["pyav", "video_reader"]:
        return decode_video_frames_torchvision(
            video_path, timestamps, tolerance_s, backend
        )
+   elif backend == "your_decoder":
+       return your_decoder_function(
+           video_path, timestamps, tolerance_s, backend
+       )
    else:
        raise NotImplementedError(backend)
```

## Example

For a quick run, you can try these parameters:

```bash
python benchmarks/video/run_video_benchmark.py \
    --output-dir outputs/video_benchmark \
    --repo-ids \
        lerobot/pusht_image \
        aliberts/aloha_mobile_shrimp_image \
    --vcodec libx264 libx265 \
    --pix-fmt yuv444p yuv420p \
    --g 2 20 None \
    --crf 10 40 None \
    --timestamps-modes 1_frame 2_frames \
    --backends pyav video_reader \
    --num-samples 5 \
    --num-workers 5 \
    --save-frames 0
```

## Results

### Reproduce

We ran the benchmark with the following parameters:

```bash
# h264 and h265 encodings
python benchmarks/video/run_video_benchmark.py \
    --output-dir outputs/video_benchmark \
    --repo-ids \
        lerobot/pusht_image \
        aliberts/aloha_mobile_shrimp_image \
        aliberts/paris_street \
        aliberts/kitchen \
    --vcodec libx264 libx265 \
    --pix-fmt yuv444p yuv420p \
    --g 1 2 3 4 5 6 10 15 20 40 None \
    --crf 0 5 10 15 20 25 30 40 50 None \
    --timestamps-modes 1_frame 2_frames 6_frames \
    --backends pyav video_reader \
    --num-samples 50 \
    --num-workers 5 \
    --save-frames 1

# AV1 encoding (only compatible with yuv420p and the pyav decoder)
python benchmarks/video/run_video_benchmark.py \
    --output-dir outputs/video_benchmark \
    --repo-ids \
        lerobot/pusht_image \
        aliberts/aloha_mobile_shrimp_image \
        aliberts/paris_street \
        aliberts/kitchen \
    --vcodec libsvtav1 \
    --pix-fmt yuv420p \
    --g 1 2 3 4 5 6 10 15 20 40 None \
    --crf 0 5 10 15 20 25 30 40 50 None \
    --timestamps-modes 1_frame 2_frames 6_frames \
    --backends pyav \
    --num-samples 50 \
    --num-workers 5 \
    --save-frames 1
```

The full results are available [here](https://docs.google.com/spreadsheets/d/1OYJB43Qu8fC26k_OyoMFgGBBKfQRCi4BIuYitQnq3sw/edit?usp=sharing).

### Parameters selected for LeRobotDataset

Considering these results, we chose what we think is the best set of encoding parameters:

- vcodec: `libsvtav1`
- pix-fmt: `yuv420p`
- g: `2`
- crf: `30`

Since we're using AV1 encoding, we choose the `pyav` decoder, as `video_reader` does not support it (and `pyav` doesn't require a custom build of `torchvision`).
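
These defaults map directly onto the `encode_video_frames` helper used by the benchmark script below (a sketch; `imgs_dir`, `video_path`, and `fps` come from your dataset):

```python
from lerobot.datasets.video_utils import encode_video_frames

encode_video_frames(
    imgs_dir=imgs_dir,      # directory of frame-XXXXXX.png images
    video_path=video_path,  # output .mp4 path
    fps=fps,
    vcodec="libsvtav1",
    pix_fmt="yuv420p",
    g=2,
    crf=30,
    overwrite=True,
)
```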

### Summary

These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_frames` and `backend=pyav`.

| video_images_size_ratio            | vcodec     | pix_fmt |           |           |           |
| ---------------------------------- | ---------- | ------- | --------- | --------- | --------- |
|                                    | libx264    |         | libx265   |           | libsvtav1 |
| repo_id                            | yuv420p    | yuv444p | yuv420p   | yuv444p   | yuv420p   |
| lerobot/pusht_image                | **16.97%** | 17.58%  | 18.57%    | 18.86%    | 22.06%    |
| aliberts/aloha_mobile_shrimp_image | 2.14%      | 2.11%   | 1.38%     | **1.37%** | 5.59%     |
| aliberts/paris_street              | 2.12%      | 2.13%   | **1.54%** | **1.54%** | 4.43%     |
| aliberts/kitchen                   | 1.40%      | 1.39%   | **1.00%** | **1.00%** | 2.52%     |

| video_images_load_time_ratio       | vcodec  | pix_fmt |          |         |           |
| ---------------------------------- | ------- | ------- | -------- | ------- | --------- |
|                                    | libx264 |         | libx265  |         | libsvtav1 |
| repo_id                            | yuv420p | yuv444p | yuv420p  | yuv444p | yuv420p   |
| lerobot/pusht_image                | 6.45    | 5.19    | **1.90** | 2.12    | 2.47      |
| aliberts/aloha_mobile_shrimp_image | 11.80   | 7.92    | 0.71     | 0.85    | **0.48**  |
| aliberts/paris_street              | 2.21    | 2.05    | 0.36     | 0.49    | **0.30**  |
| aliberts/kitchen                   | 1.46    | 1.46    | 0.28     | 0.51    | **0.26**  |

|                                    |          | vcodec   | pix_fmt      |          |           |              |
| ---------------------------------- | -------- | -------- | ------------ | -------- | --------- | ------------ |
|                                    |          | libx264  |              | libx265  |           | libsvtav1    |
| repo_id                            | metric   | yuv420p  | yuv444p      | yuv420p  | yuv444p   | yuv420p      |
| lerobot/pusht_image                | avg_mse  | 2.90E-04 | **2.03E-04** | 3.13E-04 | 2.29E-04  | 2.19E-04     |
|                                    | avg_psnr | 35.44    | 37.07        | 35.49    | **37.30** | 37.20        |
|                                    | avg_ssim | 98.28%   | **98.85%**   | 98.31%   | 98.84%    | 98.72%       |
| aliberts/aloha_mobile_shrimp_image | avg_mse  | 2.76E-04 | 2.59E-04     | 3.17E-04 | 3.06E-04  | **1.30E-04** |
|                                    | avg_psnr | 35.91    | 36.21        | 35.88    | 36.09     | **40.17**    |
|                                    | avg_ssim | 95.19%   | 95.18%       | 95.00%   | 95.05%    | **97.73%**   |
| aliberts/paris_street              | avg_mse  | 6.89E-04 | 6.70E-04     | 4.03E-03 | 4.02E-03  | **3.09E-04** |
|                                    | avg_psnr | 33.48    | 33.68        | 32.05    | 32.15     | **35.40**    |
|                                    | avg_ssim | 93.76%   | 93.75%       | 89.46%   | 89.46%    | **95.46%**   |
| aliberts/kitchen                   | avg_mse  | 2.50E-04 | 2.24E-04     | 4.28E-04 | 4.18E-04  | **1.53E-04** |
|                                    | avg_psnr | 36.73    | 37.33        | 36.56    | 36.75     | **39.12**    |
|                                    | avg_ssim | 95.47%   | 95.58%       | 95.52%   | 95.53%    | **96.82%**   |
benchmarks/video/run_video_benchmark.py ADDED
@@ -0,0 +1,488 @@
#!/usr/bin/env python

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Assess the performance of video decoding in various configurations.

This script will benchmark different video encoding and decoding parameters.
See the provided README.md or run `python benchmarks/video/run_video_benchmark.py --help` for usage info.
"""

import argparse
import datetime as dt
import itertools
import random
import shutil
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
from threading import Lock

import einops
import numpy as np
import pandas as pd
import PIL
import torch
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity
from tqdm import tqdm

from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.datasets.video_utils import (
    decode_video_frames,
    encode_video_frames,
)
from lerobot.utils.constants import OBS_IMAGE
from lerobot.utils.utils import TimerManager

BASE_ENCODING = OrderedDict(
    [
        ("vcodec", "libx264"),
        ("pix_fmt", "yuv444p"),
        ("g", 2),
        ("crf", None),
        # TODO(aliberts): Add fastdecode
        # ("fastdecode", 0),
    ]
)


# TODO(rcadene, aliberts): move to `utils.py` folder when we want to refactor
def parse_int_or_none(value: str) -> int | None:
    if value.lower() == "none":
        return None
    try:
        return int(value)
    except ValueError as e:
        raise argparse.ArgumentTypeError(f"Invalid int or None: {value}") from e


def check_datasets_formats(repo_ids: list) -> None:
    for repo_id in repo_ids:
        dataset = LeRobotDataset(repo_id)
        if len(dataset.meta.video_keys) > 0:
            raise ValueError(
                f"Use only image dataset for running this benchmark. Video dataset provided: {repo_id}"
            )


def get_directory_size(directory: Path) -> int:
    total_size = 0
    for item in directory.rglob("*"):
        if item.is_file():
            total_size += item.stat().st_size
    return total_size


def load_original_frames(imgs_dir: Path, timestamps: list[float], fps: int) -> torch.Tensor:
    frames = []
    for ts in timestamps:
        idx = int(ts * fps)
        frame = PIL.Image.open(imgs_dir / f"frame-{idx:06d}.png")
        frame = torch.from_numpy(np.array(frame))
        frame = frame.type(torch.float32) / 255
        frame = einops.rearrange(frame, "h w c -> c h w")
        frames.append(frame)
    return torch.stack(frames)


def save_decoded_frames(
    imgs_dir: Path, save_dir: Path, frames: torch.Tensor, timestamps: list[float], fps: int
) -> None:
    if save_dir.exists() and len(list(save_dir.glob("frame-*.png"))) == len(timestamps):
        return

    save_dir.mkdir(parents=True, exist_ok=True)
    for i, ts in enumerate(timestamps):
        idx = int(ts * fps)
        frame_hwc = (frames[i].permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy()
        PIL.Image.fromarray(frame_hwc).save(save_dir / f"frame-{idx:06d}_decoded.png")
        shutil.copyfile(imgs_dir / f"frame-{idx:06d}.png", save_dir / f"frame-{idx:06d}_original.png")


def save_first_episode(imgs_dir: Path, dataset: LeRobotDataset) -> None:
    episode_index = 0
    ep_num_images = dataset.meta.episodes["length"][episode_index]
    if imgs_dir.exists() and len(list(imgs_dir.glob("frame-*.png"))) == ep_num_images:
        return

    imgs_dir.mkdir(parents=True, exist_ok=True)
    hf_dataset = dataset.hf_dataset.with_format(None)

    # We only save images from the first camera
    img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
    imgs_dataset = hf_dataset.select_columns(img_keys[0])

    for i, item in enumerate(
        tqdm(imgs_dataset, desc=f"saving {dataset.repo_id} first episode images", leave=False)
    ):
        img = item[img_keys[0]]
        img.save(str(imgs_dir / f"frame-{i:06d}.png"), quality=100)

        if i >= ep_num_images - 1:
            break


def sample_timestamps(timestamps_mode: str, ep_num_images: int, fps: int) -> list[float]:
    # Start at 5 to allow for 2_frames_4_space and 6_frames
    idx = random.randint(5, ep_num_images - 1)
    match timestamps_mode:
        case "1_frame":
            frame_indexes = [idx]
        case "2_frames":
            frame_indexes = [idx - 1, idx]
        case "2_frames_4_space":
            frame_indexes = [idx - 5, idx]
        case "6_frames":
            frame_indexes = [idx - i for i in range(6)][::-1]
        case _:
            raise ValueError(timestamps_mode)

    return [idx / fps for idx in frame_indexes]


def benchmark_decoding(
    imgs_dir: Path,
    video_path: Path,
    timestamps_mode: str,
    backend: str,
    ep_num_images: int,
    fps: int,
    num_samples: int = 50,
    num_workers: int = 4,
    save_frames: bool = False,
) -> dict:
    def process_sample(sample: int, lock: Lock):
        time_benchmark = TimerManager(log=False)
        timestamps = sample_timestamps(timestamps_mode, ep_num_images, fps)
        num_frames = len(timestamps)
        result = {
            "psnr_values": [],
            "ssim_values": [],
            "mse_values": [],
        }

        # Video decoding is serialized behind a shared lock so that per-sample
        # timings stay comparable (decoder backends are not assumed thread-safe).
        with time_benchmark, lock:
            frames = decode_video_frames(video_path, timestamps=timestamps, tolerance_s=5e-1, backend=backend)
        result["load_time_video_ms"] = (time_benchmark.last * 1000) / num_frames

        with time_benchmark:
            original_frames = load_original_frames(imgs_dir, timestamps, fps)
        result["load_time_images_ms"] = (time_benchmark.last * 1000) / num_frames

        frames_np, original_frames_np = frames.numpy(), original_frames.numpy()
        for i in range(num_frames):
            result["mse_values"].append(mean_squared_error(original_frames_np[i], frames_np[i]))
            result["psnr_values"].append(
                peak_signal_noise_ratio(original_frames_np[i], frames_np[i], data_range=1.0)
            )
            result["ssim_values"].append(
                structural_similarity(original_frames_np[i], frames_np[i], data_range=1.0, channel_axis=0)
            )

        if save_frames and sample == 0:
            save_dir = video_path.with_suffix("") / f"{timestamps_mode}_{backend}"
            save_decoded_frames(imgs_dir, save_dir, frames, timestamps, fps)

        return result

    load_times_video_ms = []
    load_times_images_ms = []
    mse_values = []
    psnr_values = []
    ssim_values = []

    # A sample is a single set of decoded frames specified by timestamps_mode (e.g. a single frame, 2 frames, etc.).
    # For each sample, we record metrics (loading time and quality metrics) which are then averaged over all samples.
    # As these samples are independent, we run them in parallel threads to speed up the benchmark.
    # Use a single shared lock for all worker threads
    shared_lock = Lock()
    with ThreadPoolExecutor(max_workers=num_workers) as executor:
        futures = [executor.submit(process_sample, i, shared_lock) for i in range(num_samples)]
        for future in tqdm(as_completed(futures), total=num_samples, desc="samples", leave=False):
            result = future.result()
            load_times_video_ms.append(result["load_time_video_ms"])
            load_times_images_ms.append(result["load_time_images_ms"])
            psnr_values.extend(result["psnr_values"])
            ssim_values.extend(result["ssim_values"])
            mse_values.extend(result["mse_values"])

    avg_load_time_video_ms = float(np.array(load_times_video_ms).mean())
    avg_load_time_images_ms = float(np.array(load_times_images_ms).mean())
    video_images_load_time_ratio = avg_load_time_video_ms / avg_load_time_images_ms

    return {
        "avg_load_time_video_ms": avg_load_time_video_ms,
        "avg_load_time_images_ms": avg_load_time_images_ms,
        "video_images_load_time_ratio": video_images_load_time_ratio,
        "avg_mse": float(np.mean(mse_values)),
        "avg_psnr": float(np.mean(psnr_values)),
        "avg_ssim": float(np.mean(ssim_values)),
    }


def benchmark_encoding_decoding(
    dataset: LeRobotDataset,
    video_path: Path,
    imgs_dir: Path,
    encoding_cfg: dict,
    decoding_cfg: dict,
    num_samples: int,
    num_workers: int,
    save_frames: bool,
    overwrite: bool = False,
    seed: int = 1337,
) -> list[dict]:
    fps = dataset.fps

    if overwrite or not video_path.is_file():
        tqdm.write(f"encoding {video_path}")
        encode_video_frames(
            imgs_dir=imgs_dir,
            video_path=video_path,
            fps=fps,
            vcodec=encoding_cfg["vcodec"],
            pix_fmt=encoding_cfg["pix_fmt"],
            g=encoding_cfg.get("g"),
            crf=encoding_cfg.get("crf"),
            # fast_decode=encoding_cfg.get("fastdecode"),
            overwrite=True,
        )

    episode_index = 0
    ep_num_images = dataset.meta.episodes["length"][episode_index]
    # Frames are (c, h, w) tensors, so the last two dims are height and width.
    height, width = tuple(dataset[0][dataset.meta.camera_keys[0]].shape[-2:])
    num_pixels = height * width
    video_size_bytes = video_path.stat().st_size
    images_size_bytes = get_directory_size(imgs_dir)
    video_images_size_ratio = video_size_bytes / images_size_bytes

    random.seed(seed)
    benchmark_table = []
    for timestamps_mode in tqdm(
        decoding_cfg["timestamps_modes"], desc="decodings (timestamps_modes)", leave=False
    ):
        for backend in tqdm(decoding_cfg["backends"], desc="decodings (backends)", leave=False):
            benchmark_row = benchmark_decoding(
                imgs_dir,
                video_path,
                timestamps_mode,
                backend,
                ep_num_images,
                fps,
                num_samples,
                num_workers,
                save_frames,
            )
            benchmark_row.update(
                **{
                    "repo_id": dataset.repo_id,
                    "resolution": f"{height} x {width}",
                    "num_pixels": num_pixels,
                    "video_size_bytes": video_size_bytes,
                    "images_size_bytes": images_size_bytes,
                    "video_images_size_ratio": video_images_size_ratio,
                    "timestamps_mode": timestamps_mode,
                    "backend": backend,
                },
                **encoding_cfg,
            )
            benchmark_table.append(benchmark_row)

    return benchmark_table


def main(
    output_dir: Path,
    repo_ids: list[str],
    vcodec: list[str],
    pix_fmt: list[str],
    g: list[int],
    crf: list[int],
    # fastdecode: list[int],
    timestamps_modes: list[str],
    backends: list[str],
    num_samples: int,
    num_workers: int,
    save_frames: bool,
):
    check_datasets_formats(repo_ids)
    encoding_benchmarks = {
        "g": g,
        "crf": crf,
        # "fastdecode": fastdecode,
    }
    decoding_benchmarks = {
        "timestamps_modes": timestamps_modes,
        "backends": backends,
    }
    headers = ["repo_id", "resolution", "num_pixels"]
    headers += list(BASE_ENCODING.keys())
    headers += [
        "timestamps_mode",
        "backend",
        "video_size_bytes",
        "images_size_bytes",
        "video_images_size_ratio",
        "avg_load_time_video_ms",
        "avg_load_time_images_ms",
        "video_images_load_time_ratio",
        "avg_mse",
        "avg_psnr",
        "avg_ssim",
    ]
    file_paths = []
    for video_codec in tqdm(vcodec, desc="encodings (vcodec)"):
        for pixel_format in tqdm(pix_fmt, desc="encodings (pix_fmt)", leave=False):
            benchmark_table = []
            for repo_id in tqdm(repo_ids, desc="encodings (datasets)", leave=False):
                dataset = LeRobotDataset(repo_id)
                imgs_dir = output_dir / "images" / dataset.repo_id.replace("/", "_")
                # We only use the first episode
                save_first_episode(imgs_dir, dataset)
                for duet in [
                    dict(zip(encoding_benchmarks.keys(), unique_combination, strict=False))
                    for unique_combination in itertools.product(*encoding_benchmarks.values())
                ]:
                    encoding_cfg = BASE_ENCODING.copy()
                    encoding_cfg["vcodec"] = video_codec
                    encoding_cfg["pix_fmt"] = pixel_format
                    for key, value in duet.items():
                        encoding_cfg[key] = value
                    args_path = Path("_".join(str(value) for value in encoding_cfg.values()))
                    video_path = output_dir / "videos" / args_path / f"{repo_id.replace('/', '_')}.mp4"
                    benchmark_table += benchmark_encoding_decoding(
                        dataset,
                        video_path,
                        imgs_dir,
                        encoding_cfg,
                        decoding_benchmarks,
                        num_samples,
                        num_workers,
                        save_frames,
                    )

            # Save intermediate results
            benchmark_df = pd.DataFrame(benchmark_table, columns=headers)
            now = dt.datetime.now()
            csv_path = (
                output_dir
                / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_{video_codec}_{pixel_format}_{num_samples}-samples.csv"
            )
            benchmark_df.to_csv(csv_path, header=True, index=False)
            file_paths.append(csv_path)
            del benchmark_df

    # Concatenate all results
    df_list = [pd.read_csv(csv_path) for csv_path in file_paths]
    concatenated_df = pd.concat(df_list, ignore_index=True)
    concatenated_path = output_dir / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_all_{num_samples}-samples.csv"
    concatenated_df.to_csv(concatenated_path, header=True, index=False)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--output-dir",
        type=Path,
        default=Path("outputs/video_benchmark"),
        help="Directory where the video benchmark outputs are written.",
    )
    parser.add_argument(
        "--repo-ids",
        type=str,
        nargs="*",
        default=[
            "lerobot/pusht_image",
            "lerobot/aloha_mobile_shrimp_image",
            "lerobot/paris_street",
            "lerobot/kitchen",
        ],
        help="Datasets repo-ids to test against. First episodes only are used. Must be images.",
    )
    parser.add_argument(
        "--vcodec",
        type=str,
        nargs="*",
        default=["h264", "hevc", "libsvtav1"],
        help="Video codecs to be tested",
    )
    parser.add_argument(
        "--pix-fmt",
        type=str,
        nargs="*",
        default=["yuv444p", "yuv420p"],
        help="Pixel formats (chroma subsampling) to be tested",
    )
    parser.add_argument(
        "--g",
        type=parse_int_or_none,
        nargs="*",
        default=[1, 2, 3, 4, 5, 6, 10, 15, 20, 40, 100, None],
        help="Group of pictures sizes to be tested.",
    )
    parser.add_argument(
        "--crf",
        type=parse_int_or_none,
        nargs="*",
        default=[0, 5, 10, 15, 20, 25, 30, 40, 50, None],
        help="Constant rate factors to be tested.",
    )
    # parser.add_argument(
    #     "--fastdecode",
    #     type=int,
    #     nargs="*",
    #     default=[0, 1],
    #     help="Use the fastdecode tuning option. 0 disables it. "
    #     "For libx264 and libx265/hevc, only 1 is possible. "
    #     "For libsvtav1, 1, 2 or 3 are possible values with a higher number meaning a faster decoding optimization",
    # )
    parser.add_argument(
        "--timestamps-modes",
        type=str,
        nargs="*",
        default=[
            "1_frame",
            "2_frames",
            "2_frames_4_space",
            "6_frames",
        ],
        help="Timestamps scenarios to be tested.",
    )
    parser.add_argument(
        "--backends",
        type=str,
        nargs="*",
        default=["torchcodec", "pyav"],
        help="Decoding backends to be tested.",
    )
    parser.add_argument(
        "--num-samples",
        type=int,
        default=50,
        help="Number of samples for each encoding x decoding config.",
    )
    parser.add_argument(
        "--num-workers",
        type=int,
        default=10,
        help="Number of processes for parallelized sample processing.",
    )
    parser.add_argument(
        "--save-frames",
        type=int,
        default=0,
        help="Whether to save decoded frames or not. Enter a non-zero number for true.",
    )
    args = parser.parse_args()
    main(**vars(args))
docker/Dockerfile.internal ADDED
@@ -0,0 +1,93 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This Dockerfile is designed for HuggingFace internal CI environments
16
+ # that require GPU access. It starts from an NVIDIA CUDA base image.
17
+
18
+ # docker build -f docker/Dockerfile.internal -t lerobot-internal .
19
+
20
+ # Configure the base image for CI with GPU access
21
+ # TODO(Steven): Bump these versions
22
+ ARG CUDA_VERSION=12.4.1
23
+ ARG OS_VERSION=22.04
24
+ FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}
25
+
26
+ # Define Python version argument
27
+ ARG PYTHON_VERSION=3.10
28
+
29
+ # Configure environment variables
30
+ ENV DEBIAN_FRONTEND=noninteractive \
31
+ MUJOCO_GL=egl \
32
+ PATH=/lerobot/.venv/bin:$PATH \
33
+ CUDA_VISIBLE_DEVICES=0 \
34
+ TEST_TYPE=single_gpu \
35
+ DEVICE=cuda
36
+
37
+ # Install Python, system dependencies, and uv (as root)
38
+ RUN apt-get update && apt-get install -y --no-install-recommends \
39
+ software-properties-common build-essential git curl \
40
+ libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
41
+ libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
42
+ cmake pkg-config ninja-build \
43
+ && add-apt-repository -y ppa:deadsnakes/ppa \
44
+ && apt-get update \
45
+ && apt-get install -y --no-install-recommends \
46
+ python${PYTHON_VERSION} \
47
+ python${PYTHON_VERSION}-venv \
48
+ python${PYTHON_VERSION}-dev \
49
+ && curl -LsSf https://astral.sh/uv/install.sh | sh \
50
+ && mv /root/.local/bin/uv /usr/local/bin/uv \
51
+ && useradd --create-home --shell /bin/bash user_lerobot \
52
+ && usermod -aG sudo user_lerobot \
53
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
54
+
55
+ # Create application directory and set permissions
56
+ WORKDIR /lerobot
57
+ RUN chown -R user_lerobot:user_lerobot /lerobot
58
+
59
+ # Switch to the non-root user
60
+ USER user_lerobot
61
+
62
+ # Environment variables for testing
63
+ ENV HOME=/home/user_lerobot \
64
+ HF_HOME=/home/user_lerobot/.cache/huggingface \
65
+ HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
66
+ TORCH_HOME=/home/user_lerobot/.cache/torch \
67
+ TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
68
+
69
+ # Create the virtual environment
70
+ # We use a virtual environment inside the container, even though the container itself
+ # provides isolation, to ensure compatibility with the cluster and to prevent
+ # issues with MuJoCo and OpenGL drivers.
73
+ RUN uv venv --python python${PYTHON_VERSION}
74
+
75
+ # Install Python dependencies for caching
76
+ COPY --chown=user_lerobot:user_lerobot setup.py pyproject.toml README.md MANIFEST.in ./
77
+ COPY --chown=user_lerobot:user_lerobot src/ src/
78
+
79
+ ARG UNBOUND_DEPS=false
80
+
81
+ RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
82
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
83
+ echo "Dependencies unbound:" && cat pyproject.toml; \
84
+ fi
85
+
86
+ RUN uv pip install --no-cache ".[all]"
87
+
88
+ # Copy the rest of the application source code
89
+ # Make sure to have the git-LFS files for testing
90
+ COPY --chown=user_lerobot:user_lerobot . .
91
+
92
+ # Set the default command
93
+ CMD ["/bin/bash"]
docker/Dockerfile.user ADDED
@@ -0,0 +1,79 @@
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This Dockerfile is designed for a lerobot user who wants to
16
+ # experiment with the project. It starts from a Python Slim base image.
17
+
18
+ # docker build -f docker/Dockerfile.user -t lerobot-user .
19
+ # docker run -it --rm lerobot-user
20
+
21
+ # Configure the base image
22
+ ARG PYTHON_VERSION=3.10
23
+ FROM python:${PYTHON_VERSION}-slim
24
+
25
+ # Configure environment variables
26
+ ENV DEBIAN_FRONTEND=noninteractive \
27
+ MUJOCO_GL=egl \
28
+ PATH=/lerobot/.venv/bin:$PATH
29
+
30
+ # Install system dependencies and uv (as root)
31
+ RUN apt-get update && apt-get install -y --no-install-recommends \
32
+ build-essential git curl libglib2.0-0 libegl1-mesa-dev ffmpeg \
33
+ libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
34
+ cmake pkg-config ninja-build \
35
+ && curl -LsSf https://astral.sh/uv/install.sh | sh \
36
+ && mv /root/.local/bin/uv /usr/local/bin/uv \
37
+ && useradd --create-home --shell /bin/bash user_lerobot \
38
+ && usermod -aG sudo user_lerobot \
39
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
40
+
41
+ # Create application directory and set permissions
42
+ WORKDIR /lerobot
43
+ RUN chown -R user_lerobot:user_lerobot /lerobot
44
+
45
+ # Switch to the non-root user
46
+ USER user_lerobot
47
+
48
+ # Environment variables for the user home and caches
49
+ ENV HOME=/home/user_lerobot \
50
+ HF_HOME=/home/user_lerobot/.cache/huggingface \
51
+ HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
52
+ TORCH_HOME=/home/user_lerobot/.cache/torch \
53
+ TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
54
+
55
+ # Create the virtual environment
56
+ # We use a virtual environment inside the container, even though the container itself
+ # provides isolation, to closely resemble local development and allow users to
+ # run other Python projects in the same container without dependency conflicts.
59
+ RUN uv venv
60
+
61
+ # Install Python dependencies for caching
62
+ COPY --chown=user_lerobot:user_lerobot setup.py pyproject.toml README.md MANIFEST.in ./
63
+ COPY --chown=user_lerobot:user_lerobot src/ src/
64
+
65
+ ARG UNBOUND_DEPS=false
66
+
67
+ RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
68
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
69
+ echo "Dependencies unbound:" && cat pyproject.toml; \
70
+ fi
71
+
72
+ RUN uv pip install --no-cache ".[all]"
73
+
74
+ # Copy the rest of the application code
75
+ # Make sure to have the git-LFS files for testing
76
+ COPY --chown=user_lerobot:user_lerobot . .
77
+
78
+ # Set the default command
79
+ CMD ["/bin/bash"]
docs-requirements.txt ADDED
@@ -0,0 +1,3 @@
1
+ # docs-requirements.txt
2
+ hf-doc-builder @ git+https://github.com/huggingface/doc-builder.git@main
3
+ watchdog>=6.0.0
docs/README.md ADDED
@@ -0,0 +1,139 @@
1
+ <!---
2
+ Copyright 2020 The HuggingFace Team. All rights reserved.
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ -->
16
+
17
+ # Generating the documentation
18
+
19
+ To generate the documentation, you first have to build it. Several packages are necessary to build the doc,
20
+ you can install them with the following command, at the root of the code repository:
21
+
22
+ ```bash
23
+ pip install -e . -r docs-requirements.txt
24
+ ```
25
+
26
+ You will also need `nodejs`. Please refer to their [installation page](https://nodejs.org/en/download)
27
+
28
+ ---
29
+
30
+ **NOTE**
31
+
32
+ You only need to generate the documentation to inspect it locally (if you're planning changes and want to
33
+ check how they look before committing for instance). You don't have to `git commit` the built documentation.
34
+
35
+ ---
36
+
37
+ ## Building the documentation
38
+
39
+ Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
40
+ typing the following command:
41
+
42
+ ```bash
43
+ doc-builder build lerobot docs/source/ --build_dir ~/tmp/test-build
44
+ ```
45
+
46
+ You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
47
+ the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
48
+ Markdown editor.
49
+
50
+ ## Previewing the documentation
51
+
52
+ To preview the docs, first install the `watchdog` module with:
53
+
54
+ ```bash
55
+ pip install watchdog
56
+ ```
57
+
58
+ Then run the following command:
59
+
60
+ ```bash
61
+ doc-builder preview lerobot docs/source/
62
+ ```
63
+
64
+ The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to the documentation built with your changes.
65
+
66
+ ---
67
+
68
+ **NOTE**
69
+
70
+ The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then call `doc-builder preview ...` again).
71
+
72
+ ---
73
+
74
+ ## Adding a new element to the navigation bar
75
+
76
+ Accepted files are Markdown (.md).
77
+
78
+ Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
79
+ the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/lerobot/blob/main/docs/source/_toctree.yml) file.
80
+
81
+ ## Renaming section headers and moving sections
82
+
83
+ It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
84
+
85
+ Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
86
+
87
+ So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
88
+
89
+ ```
90
+ Sections that were moved:
91
+
92
+ [ <a href="#section-b">Section A</a><a id="section-a"></a> ]
93
+ ```
94
+
95
+ and of course, if you moved it to another file, then:
96
+
97
+ ```
98
+ Sections that were moved:
99
+
100
+ [ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
101
+ ```
102
+
103
+ Use the relative style to link to the new file so that the versioned docs continue to work.
104
+
105
+ For an example of a rich moved sections set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).
106
+
107
+ ### Adding a new tutorial
108
+
109
+ Adding a new tutorial or section is done in two steps:
110
+
111
+ - Add a new file under `./source`. This file should be written in Markdown (.md).
112
+ - Link that file in `./source/_toctree.yml` on the correct toc-tree.
113
+
114
+ Make sure to put your new file under the proper section. If in doubt, feel free to ask in a GitHub Issue or PR.
115
+
116
+ ### Writing source documentation
117
+
118
+ Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
+ and objects like `True`, `None`, or any strings should usually be put in `code`.
120
+
121
+ #### Writing a multi-line code block
122
+
123
+ Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
124
+
125
+ ````
126
+ ```
127
+ # first line of code
128
+ # second line
129
+ # etc
130
+ ```
131
+ ````
132
+
133
+ #### Adding an image
134
+
135
+ Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
136
+ the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
137
+ them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
138
+ If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate them
139
+ to this dataset.
docs/source/_toctree.yml ADDED
@@ -0,0 +1,132 @@
1
+ - sections:
2
+ - local: index
3
+ title: LeRobot
4
+ - local: installation
5
+ title: Installation
6
+ title: Get started
7
+ - sections:
8
+ - local: il_robots
9
+ title: Imitation Learning for Robots
10
+ - local: bring_your_own_policies
11
+ title: Bring Your Own Policies
12
+ - local: integrate_hardware
13
+ title: Bring Your Own Hardware
14
+ - local: hilserl
15
+ title: Train a Robot with RL
16
+ - local: hilserl_sim
17
+ title: Train RL in Simulation
18
+ - local: multi_gpu_training
19
+ title: Multi GPU training
20
+ - local: peft_training
21
+ title: Training with PEFT (e.g., LoRA)
22
+ title: "Tutorials"
23
+ - sections:
24
+ - local: lerobot-dataset-v3
25
+ title: Using LeRobotDataset
26
+ - local: porting_datasets_v3
27
+ title: Porting Large Datasets
28
+ - local: using_dataset_tools
29
+ title: Using the Dataset Tools
30
+ - local: dataset_subtask
31
+ title: Using Subtasks in the Dataset
32
+ title: "Datasets"
33
+ - sections:
34
+ - local: act
35
+ title: ACT
36
+ - local: smolvla
37
+ title: SmolVLA
38
+ - local: pi0
39
+ title: π₀ (Pi0)
40
+ - local: pi0fast
41
+ title: π₀-FAST (Pi0Fast)
42
+ - local: pi05
43
+ title: π₀.₅ (Pi05)
44
+ - local: groot
45
+ title: NVIDIA GR00T N1.5
46
+ - local: xvla
47
+ title: X-VLA
48
+ - local: walloss
49
+ title: WALL-OSS
50
+ title: "Policies"
51
+ - sections:
52
+ - local: sarm
53
+ title: SARM
54
+ title: "Reward Models"
55
+ - sections:
56
+ - local: async
57
+ title: Use Async Inference
58
+ - local: rtc
59
+ title: Real-Time Chunking (RTC)
60
+ title: "Inference"
61
+ - sections:
62
+ - local: envhub
63
+ title: Environments from the Hub
64
+ - local: envhub_leisaac
65
+ title: Control & Train Robots in Sim (LeIsaac)
66
+ - local: envhub_isaaclab_arena
67
+ title: NVIDIA IsaacLab Arena Environments
68
+ - local: libero
69
+ title: Using Libero
70
+ - local: metaworld
71
+ title: Using MetaWorld
72
+ title: "Simulation"
73
+ - sections:
74
+ - local: introduction_processors
75
+ title: Introduction to Robot Processors
76
+ - local: debug_processor_pipeline
77
+ title: Debug your processor pipeline
78
+ - local: implement_your_own_processor
79
+ title: Implement your own processor
80
+ - local: processors_robots_teleop
81
+ title: Processors for Robots and Teleoperators
82
+ - local: env_processor
83
+ title: Environment Processors
84
+ title: "Robot Processors"
85
+ - sections:
86
+ - local: so101
87
+ title: SO-101
88
+ - local: so100
89
+ title: SO-100
90
+ - local: koch
91
+ title: Koch v1.1
92
+ - local: lekiwi
93
+ title: LeKiwi
94
+ - local: hope_jr
95
+ title: Hope Jr
96
+ - local: reachy2
97
+ title: Reachy 2
98
+ - local: unitree_g1
99
+ title: Unitree G1
100
+ - local: earthrover_mini_plus
101
+ title: Earth Rover Mini
102
+ - local: omx
103
+ title: OMX
104
+ - local: openarm
105
+ title: OpenArm
106
+ title: "Robots"
107
+ - sections:
108
+ - local: phone_teleop
109
+ title: Phone
110
+ title: "Teleoperators"
111
+ - sections:
112
+ - local: cameras
113
+ title: Cameras
114
+ title: "Sensors"
115
+ - sections:
116
+ - local: torch_accelerators
117
+ title: PyTorch accelerators
118
+ title: "Supported Hardware"
119
+ - sections:
120
+ - local: notebooks
121
+ title: Notebooks
122
+ - local: feetech
123
+ title: Updating Feetech Firmware
124
+ - local: damiao
125
+ title: Damiao Motors and CAN Bus
126
+ title: "Resources"
127
+ - sections:
128
+ - local: contributing
129
+ title: Contribute to LeRobot
130
+ - local: backwardcomp
131
+ title: Backward compatibility
132
+ title: "About"
docs/source/act.mdx ADDED
@@ -0,0 +1,92 @@
1
+ # ACT (Action Chunking with Transformers)
2
+
3
+ ACT is a **lightweight and efficient policy for imitation learning**, especially well-suited for fine-grained manipulation tasks. It's the **first model we recommend when you're starting out** with LeRobot due to its fast training time, low computational requirements, and strong performance.
4
+
5
+ <div class="video-container">
6
+ <iframe
7
+ width="100%"
8
+ height="415"
9
+ src="https://www.youtube.com/embed/ft73x0LfGpM"
10
+ title="LeRobot ACT Tutorial"
11
+ frameborder="0"
12
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
13
+ allowfullscreen
14
+ ></iframe>
15
+ </div>
16
+
17
+ _Watch this tutorial from the LeRobot team to learn how ACT works: [LeRobot ACT Tutorial](https://www.youtube.com/watch?v=ft73x0LfGpM)_
18
+
19
+ ## Model Overview
20
+
21
+ Action Chunking with Transformers (ACT) was introduced in the paper [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://arxiv.org/abs/2304.13705) by Zhao et al. The policy was designed to enable precise, contact-rich manipulation tasks using affordable hardware and minimal demonstration data.
22
+
23
+ ### Why ACT is Great for Beginners
24
+
25
+ ACT stands out as an excellent starting point for several reasons:
26
+
27
+ - **Fast Training**: Trains in a few hours on a single GPU
28
+ - **Lightweight**: Only ~80M parameters, making it efficient and easy to work with
29
+ - **Data Efficient**: Often achieves high success rates with just 50 demonstrations
30
+
31
+ ### Architecture
32
+
33
+ ACT uses a transformer-based architecture with three main components:
34
+
35
+ 1. **Vision Backbone**: ResNet-18 processes images from multiple camera viewpoints
36
+ 2. **Transformer Encoder**: Synthesizes information from camera features, joint positions, and a learned latent variable
37
+ 3. **Transformer Decoder**: Generates coherent action sequences using cross-attention
38
+
39
+ The policy takes as input:
40
+
41
+ - Multiple RGB images (e.g., from wrist cameras, front/top cameras)
42
+ - Current robot joint positions
43
+ - A latent style variable `z` (learned during training, set to zero during inference)
44
+
45
+ And outputs a chunk of `k` future action sequences.
46
+
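+ To make this concrete, here is a minimal inference sketch. It assumes the `lerobot.policies.act` module path and the `lerobot/act_aloha_sim_transfer_cube_human` checkpoint; the exact observation keys and state dimension come from the checkpoint's `config.json`, so treat the ones below as placeholders:
+
+ ```python
+ import torch
+
+ from lerobot.policies.act.modeling_act import ACTPolicy
+
+ # Load a pretrained ACT checkpoint from the Hub
+ policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_transfer_cube_human")
+ policy.eval()
+
+ # Dummy batch with one camera frame and the current joint positions
+ batch = {
+     "observation.images.top": torch.rand(1, 3, 480, 640),  # (B, C, H, W), values in [0, 1]
+     "observation.state": torch.rand(1, 14),  # current joint positions
+ }
+
+ with torch.no_grad():
+     # select_action returns a single action, popped from the internally cached chunk
+     action = policy.select_action(batch)
+ print(action.shape)  # (1, action_dim)
+ ```
+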
47
+ ## Installation Requirements
48
+
49
+ 1. Install LeRobot by following our [Installation Guide](./installation).
50
+ 2. ACT is included in the base LeRobot installation, so no additional dependencies are needed!
51
+
52
+ ## Training ACT
53
+
54
+ ACT works seamlessly with the standard LeRobot training pipeline. Here's a complete example for training ACT on your dataset:
55
+
56
+ ```bash
57
+ lerobot-train \
58
+ --dataset.repo_id=${HF_USER}/your_dataset \
59
+ --policy.type=act \
60
+ --output_dir=outputs/train/act_your_dataset \
61
+ --job_name=act_your_dataset \
62
+ --policy.device=cuda \
63
+ --wandb.enable=true \
64
+ --policy.repo_id=${HF_USER}/act_policy
65
+ ```
66
+
67
+ ### Training Tips
68
+
69
+ 1. **Start with defaults**: ACT's default hyperparameters work well for most tasks
70
+ 2. **Training duration**: Expect a few hours for 100k training steps on a single GPU
71
+ 3. **Batch size**: Start with batch size 8 and adjust based on your GPU memory
72
+
73
+ ### Train using Google Colab
74
+
75
+ If your local computer doesn't have a powerful GPU, you can utilize Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
76
+
77
+ ## Evaluating ACT
78
+
79
+ Once training is complete, you can evaluate your ACT policy using the `lerobot-record` command with your trained policy. This will run inference and record evaluation episodes:
80
+
81
+ ```bash
82
+ lerobot-record \
83
+ --robot.type=so100_follower \
84
+ --robot.port=/dev/ttyACM0 \
85
+ --robot.id=my_robot \
86
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
87
+ --display_data=true \
88
+ --dataset.repo_id=${HF_USER}/eval_act_your_dataset \
89
+ --dataset.num_episodes=10 \
90
+ --dataset.single_task="Your task description" \
91
+ --policy.path=${HF_USER}/act_policy
92
+ ```
docs/source/async.mdx ADDED
@@ -0,0 +1,313 @@
1
+ # Asynchronous Inference
2
+
3
+ With our [SmolVLA](https://huggingface.co/papers/2506.01844) policy, we introduced a new way to run inference on real-world robots, **decoupling action prediction from action execution**.
+ In this tutorial, we'll show how to use asynchronous inference (_async inference_) with a finetuned version of SmolVLA.
+ **Try async inference with all the policies** supported by LeRobot!
6
+
7
+ **What you'll learn:**
8
+
9
+ 1. Why asynchronous inference matters and how it compares to the more traditional, sequential inference.
10
+ 2. How to spin up a `PolicyServer` and connect a `RobotClient` from the same machine, or even over the network.
11
+ 3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy.
12
+
13
+ If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)!
14
+
15
+ In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions---eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
16
+ This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.
17
+
18
+ ---
19
+
20
+ ## Getting started with async inference
21
+
22
+ You can read more information on asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment.
23
+
24
+ First, install `lerobot` with the `async` tag, to install the extra dependencies required to run async inference.
25
+
26
+ ```shell
27
+ pip install -e ".[async]"
28
+ ```
29
+
30
+ Then, spin up a policy server (in one terminal, or in a separate machine) specifying the host address and port for the client to connect to.
31
+ You can spin up a policy server running:
32
+
33
+ ```shell
34
+ python -m lerobot.async_inference.policy_server \
35
+ --host=127.0.0.1 \
36
+ --port=8080
37
+ ```
38
+
39
+ This will start a policy server listening on `127.0.0.1:8080` (`localhost`, port 8080). At this stage, the policy server is empty: all the information about which policy to run and with which parameters is specified during the first handshake with the client. Spin up a client with:
40
+
41
+ ```shell
42
+ python -m lerobot.async_inference.robot_client \
43
+ --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
44
+ --robot.type=so100_follower \ # ROBOT: your robot type
45
+ --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
46
+ --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
47
+ --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
48
+ --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
49
+ --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
50
+ --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
51
+ --policy_device=mps \ # POLICY: the device to run the policy on, on the server
52
+ --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
53
+ --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
54
+ --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
55
+ --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
56
+ ```
57
+
58
+ In summary, you need to specify instructions for:
59
+
60
+ - `SERVER`: the address and port of the policy server
61
+ - `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot
62
+ - `POLICY`: the type of policy to run, and the model name/path on the server to the checkpoint to run. You also need to specify which device the server should use, and how many actions to output at once (capped at the policy's max actions value).
63
+ - `CLIENT`: the threshold for the chunk size before sending a new observation to the server, and the function to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters.
64
+
65
+ Importantly,
66
+
67
+ - `actions_per_chunk` and `chunk_size_threshold` are key parameters to tune for your setup.
68
+ - `aggregate_fn_name` is the function used to aggregate actions on overlapping portions of consecutive chunks. You can either pick one from the registry of functions or add your own in `robot_client.py` (a minimal sketch of such a function is shown below).
69
+ - `debug_visualize_queue_size` is a useful tool to tune the `CLIENT` parameters.
70
+
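+ As a reference, here is a minimal sketch of a custom aggregation function. The exact interface expected by the registry in `robot_client.py` may differ, so treat the signature below as an assumption for illustration:
+
+ ```python
+ import torch
+
+
+ def latest_biased_average(old_chunk: torch.Tensor, new_chunk: torch.Tensor, alpha: float = 0.3) -> torch.Tensor:
+     """Blend the overlapping portion of two action chunks, weighting the fresh chunk more heavily.
+
+     Hypothetical helper, not part of LeRobot: `old_chunk` and `new_chunk` are the
+     overlapping slices of the current and newly received action chunks.
+     """
+     return alpha * old_chunk + (1 - alpha) * new_chunk
+ ```
+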
71
+ ## Done! You should see your robot moving around by now 😉
72
+
73
+ ## Async vs. synchronous inference
74
+
75
+ Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_: frames where the robot sits idle, waiting for the policy's output, a new action chunk.
76
+ In turn, inference is plagued by evident real-time lags, where the robot simply stops acting due to the lack of available actions.
77
+ With robotics models increasing in size, this problem risks becoming only more severe.
78
+
79
+ <p align="center">
80
+ <img
81
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/sync.png"
82
+ width="80%"
83
+ ></img>
84
+ </p>
85
+ <p align="center">
86
+ <i>Synchronous inference</i> makes the robot idle while the policy is
87
+ computing the next chunk of actions.
88
+ </p>
89
+
90
+ To overcome this, we design async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames.
91
+ Crucially, with async inference, the next action chunk is computed _before_ the current one is exhausted, resulting in no idleness.
92
+ Higher adaptability is ensured by aggregating the different action chunks on overlapping portions, obtaining an up-to-date plan and a tighter control loop.
93
+
94
+ <p align="center">
95
+ <img
96
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/async.png"
97
+ width="80%"
98
+ ></img>
99
+ </p>
100
+ <p align="center">
101
+ <i>Asynchronous inference</i> results in no idleness because the next chunk is
102
+ computed before the current chunk is exhausted.
103
+ </p>
104
+
105
+ ---
106
+
107
+ ## Start the Policy Server
108
+
109
+ Policy servers are wrappers around a `PreTrainedPolicy`, interfacing it with observations coming from a robot client.
110
+ Policy servers are initialized as empty containers which are populated with the requested policy specified in the initial handshake between the robot client and the policy server.
111
+ As such, spinning up a policy server is as easy as specifying the host address and port. If you're running the policy server on the same machine as the robot client, you can use `localhost` as the host address.
112
+
113
+ <hfoptions id="start_policy_server">
114
+ <hfoption id="Command">
115
+ ```bash
116
+ python -m lerobot.async_inference.policy_server \
117
+ --host=127.0.0.1 \
118
+ --port=8080
119
+ ```
120
+ </hfoption>
121
+ <hfoption id="API example">
122
+
123
+ <!-- prettier-ignore-start -->
124
+ ```python
125
+ from lerobot.async_inference.configs import PolicyServerConfig
126
+ from lerobot.async_inference.policy_server import serve
127
+
128
+ config = PolicyServerConfig(
129
+ host="localhost",
130
+ port=8080,
131
+ )
132
+ serve(config)
133
+ ```
134
+ <!-- prettier-ignore-end -->
135
+
136
+ </hfoption>
137
+ </hfoptions>
138
+
139
+ This listens on `localhost:8080` for an incoming connection from the associated `RobotClient`, which will communicate which policy to run during the first client-server handshake.
140
+
141
+ ---
142
+
143
+ ## Launch the Robot Client
144
+
145
+ `RobotClient` is a wrapper around a `Robot` instance that connects to the (possibly remote) `PolicyServer`.
+ The `RobotClient` streams observations to the `PolicyServer`, and receives action chunks obtained by running inference on the server (which we assume has better computational resources than the robot controller).
147
+
148
+ <hfoptions id="start_robot_client">
149
+ <hfoption id="Command">
150
+ ```bash
151
+ python -m lerobot.async_inference.robot_client \
152
+ --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
153
+ --robot.type=so100_follower \ # ROBOT: your robot type
154
+ --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
155
+ --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
156
+ --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
157
+ --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
158
+ --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
159
+ --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
160
+ --policy_device=mps \ # POLICY: the device to run the policy on, on the server
161
+ --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
162
+ --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
163
+ --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
164
+ --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
165
+ ```
166
+ </hfoption>
167
+ <hfoption id="API example">
168
+
169
+ <!-- prettier-ignore-start -->
170
+ ```python
171
+ import threading
172
+ from lerobot.robots.so_follower import SO100FollowerConfig
173
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
174
+ from lerobot.async_inference.configs import RobotClientConfig
175
+ from lerobot.async_inference.robot_client import RobotClient
176
+ from lerobot.async_inference.helpers import visualize_action_queue_size
177
+
178
+ # 1. Create the robot instance
179
+ # Check out the cameras available in your setup by running `lerobot-find-cameras opencv`
180
+ # these cameras must match the ones expected by the policy
181
+ # check the config.json on the Hub for the policy you are using
182
+ camera_cfg = {
183
+ "top": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
184
+ "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30)
185
+ }
186
+
187
+ robot_cfg = SO100FollowerConfig(
188
+ port="/dev/tty.usbmodem585A0076841",
189
+ id="follower_so100",
190
+ cameras=camera_cfg
191
+ )
192
+
193
+ # 2. Create client configuration
194
+ client_cfg = RobotClientConfig(
195
+ robot=robot_cfg,
196
+ server_address="localhost:8080",
197
+ policy_device="mps",
198
+ client_device="cpu",
199
+ policy_type="smolvla",
200
+ pretrained_name_or_path="<user>/smolvla_async",
201
+ chunk_size_threshold=0.5,
202
+ actions_per_chunk=50, # make sure this is less than the max actions of the policy
203
+ )
204
+
205
+ # 3. Create and start client
206
+ client = RobotClient(client_cfg)
207
+
208
+ # 4. Specify the task
209
+ task = "Don't do anything, stay still"
210
+
211
+ if client.start():
212
+ # Start action receiver thread
213
+ action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True)
214
+ action_receiver_thread.start()
215
+
216
+ try:
217
+ # Run the control loop
218
+ client.control_loop(task)
219
+ except KeyboardInterrupt:
220
+ client.stop()
221
+ action_receiver_thread.join()
222
+ # (Optionally) plot the action queue size
223
+ visualize_action_queue_size(client.action_queue_size)
224
+ ```
225
+ <!-- prettier-ignore-end -->
226
+
227
+ </hfoption>
228
+ </hfoptions>
229
+
230
+ The following two parameters are key in every setup:
231
+
232
+ <table>
233
+ <thead>
234
+ <tr>
235
+ <th>Hyperparameter</th>
236
+ <th>Default</th>
237
+ <th>What it does</th>
238
+ </tr>
239
+ </thead>
240
+ <tbody>
241
+ <tr>
242
+ <td>
243
+ <code>actions_per_chunk</code>
244
+ </td>
245
+ <td>50</td>
246
+ <td>
247
+ How many actions the policy outputs at once. Typical values: 10-50.
248
+ </td>
249
+ </tr>
250
+ <tr>
251
+ <td>
252
+ <code>chunk_size_threshold</code>
253
+ </td>
254
+ <td>0.7</td>
255
+ <td>
256
+ When the queue is ≤ this fraction of its capacity, the client sends a fresh observation.
257
+ Value in [0, 1].
258
+ </td>
259
+ </tr>
260
+ </tbody>
261
+ </table>
262
+
263
+ <Tip>
264
+ Different values of `actions_per_chunk` and `chunk_size_threshold` do result
265
+ in different behaviours.
266
+ </Tip>
267
+
268
+ On the one hand, increasing the value of `actions_per_chunk` will result in reducing the likelihood of ending up with no actions to execute, as more actions will be available when the new chunk is computed.
269
+ However, larger values of `actions_per_chunk` might also result in less precise actions, due to the compounding errors that come with predicting actions over longer timespans.
270
+
271
+ On the other hand, increasing the value of `chunk_size_threshold` will result in sending observations to the `PolicyServer` for inference more often, resulting in a larger number of updated action chunks that overlap on significant portions. This results in high adaptability, in the limit predicting one action chunk for each observation, each of which is only marginally consumed before a new one is produced.
272
+ This option also puts more pressure on the inference pipeline, as a consequence of the many requests. Conversely, values of `chunk_size_threshold` close to 0.0 collapse to the synchronous edge case, whereby new observations are only sent out whenever the current chunk is exhausted.
273
+
274
+ We found the default values of `actions_per_chunk` and `chunk_size_threshold` to work well in the experiments we developed for the [SmolVLA paper](https://huggingface.co/papers/2506.01844), but recommend experimenting with different values to find the best fit for your setup.
275
+
276
+ ### Tuning async inference for your setup
277
+
278
+ 1. **Choose your computational resources carefully.** [PI0](https://huggingface.co/lerobot/pi0) occupies 14GB of memory at inference time, while [SmolVLA](https://huggingface.co/lerobot/smolvla_base) requires only ~2GB. You should identify the best computational resource for your use case, keeping in mind that smaller policies require fewer computational resources. The combination of policy and device used (CPU-intensive, using MPS, or the number of CUDA cores on a given NVIDIA GPU) directly impacts the average inference latency you should expect.
279
+ 2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle and is stepping through its current action queue. If the two processes happen at fundamentally different speeds, the client might end up with an empty queue. As such, you should reduce your fps if you consistently run out of actions in queue.
280
+ 3. **Adjust `chunk_size_threshold`**.
281
+ - Values closer to `0.0` result in almost sequential behavior, while values closer to `1.0` send an observation at every step (more bandwidth, and a reliance on a good world model).
282
+ - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` setting the `--debug_visualize_queue_size` to `True`. This will plot the action queue size evolution at runtime, and you can use it to find the value of `chunk_size_threshold` that works best for your setup.
283
+
284
+ <p align="center">
285
+ <img
286
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/queues.png"
287
+ width="80%"
288
+ ></img>
289
+ </p>
290
+ <p align="center">
291
+ <i>
292
+ The action queue size is plotted at runtime when the
293
+ `--debug_visualize_queue_size` flag is passed, for various levels of
294
+ `chunk_size_threshold` (`g` in the SmolVLA paper).
295
+ </i>
296
+ </p>
297
+
298
+ ---
299
+
300
+ ## Conclusion
301
+
302
+ Asynchronous inference represents a significant advancement in real-time robotics control, addressing the fundamental challenge of inference latency that has long plagued robotics applications. Through this tutorial, you've learned how to implement a complete async inference pipeline that eliminates idle frames and enables smoother, more reactive robot behaviors.
303
+
304
+ **Key Takeaways:**
305
+
306
+ - **Paradigm Shift**: Async inference decouples action prediction from execution, allowing robots to continue acting while new action chunks are computed in parallel
307
+ - **Performance Benefits**: Eliminates "wait-for-inference" lags that are inherent in synchronous approaches, becoming increasingly important as policy models grow larger
308
+ - **Flexible Architecture**: The server-client design enables distributed computing, where inference can run on powerful remote hardware while maintaining real-time robot control
309
+ - **Tunable Parameters**: Success depends on properly configuring `actions_per_chunk` and `chunk_size_threshold` for your specific hardware, policy, and task requirements
310
+ - **Universal Compatibility**: Works with all LeRobot-supported policies, from lightweight ACT models to vision-language models like SmolVLA
311
+
312
+ Start experimenting with the default parameters, monitor your action queue sizes, and iteratively refine your setup to achieve optimal performance for your specific use case.
313
+ If you want to discuss this further, hop into our [Discord community](https://discord.gg/s3KuuzsPFb), or open an issue on our [GitHub repository](https://github.com/lerobot/lerobot/issues).
docs/source/backwardcomp.mdx ADDED
@@ -0,0 +1,151 @@
1
+ # Backward compatibility
2
+
3
+ ## Policy Normalization Migration (PR #1452)
4
+
5
+ **Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components.
6
+
7
+ ### What changed?
8
+
9
+ | | Before PR #1452 | After PR #1452 |
10
+ | -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
11
+ | **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components |
12
+ | **Model State Dict** | Contains normalization statistics | **Clean weights only** - no normalization parameters |
13
+ | **Usage** | `policy(batch)` handles everything | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` |
14
+
15
+ ### Impact on existing models
16
+
17
+ - Models trained **before** PR #1452 have normalization embedded in their weights
18
+ - These models need migration to work with the new `PolicyProcessorPipeline` system
19
+ - The migration extracts normalization statistics and creates separate processor pipelines
20
+
21
+ ### Migrating old models
22
+
23
+ Use the migration script to convert models with embedded normalization:
24
+
25
+ ```shell
26
+ python src/lerobot/processor/migrate_policy_normalization.py \
27
+ --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
28
+ --push-to-hub \
29
+ --branch migrated
30
+ ```
31
+
32
+ The script:
33
+
34
+ 1. **Extracts** normalization statistics from model weights
35
+ 2. **Creates** external preprocessor and postprocessor pipelines
36
+ 3. **Removes** normalization layers from model weights
37
+ 4. **Saves** clean model + processor pipelines
38
+ 5. **Pushes** to Hub with automatic PR creation
39
+
40
+ ### Using migrated models
41
+
42
+ ```python
43
+ # New usage pattern (after migration)
44
+ from lerobot.policies.factory import make_policy, make_pre_post_processors
45
+
46
+ # Load model and processors separately
47
+ policy = make_policy(config, ds_meta=dataset.meta)
48
+ preprocessor, postprocessor = make_pre_post_processors(
49
+ policy_cfg=config,
50
+ dataset_stats=dataset.meta.stats
51
+ )
52
+
53
+ # Process data through pipeline
54
+ processed_batch = preprocessor(raw_batch)
55
+ action = policy.select_action(processed_batch)
56
+ final_action = postprocessor(action)
57
+ ```
58
+
59
+ ## Hardware API redesign
60
+
61
+ PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot calibration but is **not backward-compatible**. Below is an overview of what changed and how you can continue to work with datasets created before this pull request.
62
+
63
+ ### What changed?
64
+
65
+ | | Before PR #777 | After PR #777 |
66
+ | --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
67
+ | **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `-100...100` Gripper: `0...100` |
68
+ | **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In middle of the range for each joint** |
69
+ | **Boundary handling** | Software safeguards to detect ±180° wrap-arounds | No wrap-around logic needed due to mid-range zero |
70
+
71
+ ---
72
+
73
+ ### Impact on existing datasets
74
+
75
+ - Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
76
+ - Joint angles are offset and incorrectly normalized.
77
+ - Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
78
+
79
+ ### Using datasets made with the previous calibration system
80
+
81
+ We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`.
82
+ Below we take you through the modifications that are done in the example script to make the previous calibration datasets work.
83
+
84
+ ```diff
85
+ + key = f"{name.removeprefix('main_')}.pos"
86
+ action[key] = action_array[i].item()
87
+ + action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
88
+ + action["elbow_flex.pos"] -= 90
89
+ ```
90
+
91
+ Let's break this down.
92
+ The new codebase uses a `.pos` suffix for the position observations, and the `main_` prefix has been removed:
93
+
94
+ <!-- prettier-ignore-start -->
95
+ ```python
96
+ key = f"{name.removeprefix('main_')}.pos"
97
+ ```
98
+ <!-- prettier-ignore-end -->
99
+
100
+ For `"shoulder_lift"` (id = 2), the 0 position is changed by -90 degrees and the direction is reversed compared to old calibration/code.
101
+
102
+ <!-- prettier-ignore-start -->
103
+ ```python
104
+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
105
+ ```
106
+ <!-- prettier-ignore-end -->
107
+
108
+ For `"elbow_flex"` (id = 3), the 0 position is changed by -90 degrees compared to old calibration/code.
109
+
110
+ <!-- prettier-ignore-start -->
111
+ ```python
112
+ action["elbow_flex.pos"] -= 90
113
+ ```
114
+ <!-- prettier-ignore-end -->
115
+
116
+ To use degrees normalization we then set the `--robot.use_degrees` option to `true`.
117
+
118
+ ```diff
119
+ python examples/backward_compatibility/replay.py \
120
+ --robot.type=so101_follower \
121
+ --robot.port=/dev/tty.usbmodem5A460814411 \
122
+ --robot.id=blue \
123
+ + --robot.use_degrees=true \
124
+ --dataset.repo_id=my_dataset_id \
125
+ --dataset.episode=0
126
+ ```
127
+
128
+ ### Using policies trained with the previous calibration system
129
+
130
+ Policies output actions in the same format as the datasets (`torch.Tensors`). Therefore, the same transformations should be applied.
131
+
132
+ To find these transformations, we recommend first trying to replay an episode of the dataset your policy was trained on, following the section above.
+ Then, add these same transformations in your inference script (shown here in the `record.py` script):
134
+
135
+ ```diff
136
+ action_values = predict_action(
137
+ observation_frame,
138
+ policy,
139
+ get_safe_torch_device(policy.config.device),
140
+ policy.config.use_amp,
141
+ task=single_task,
142
+ robot_type=robot.robot_type,
143
+ )
144
+ action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)}
145
+
146
+ + action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
147
+ + action["elbow_flex.pos"] -= 90
148
+ robot.send_action(action)
149
+ ```
150
+
151
+ If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb)
docs/source/bring_your_own_policies.mdx ADDED
@@ -0,0 +1,175 @@
1
+ # Bring Your Own Policies
2
+
3
+ This tutorial explains how to integrate your own custom policy implementations into the LeRobot ecosystem, allowing you to leverage all LeRobot tools for training, evaluation, and deployment while using your own algorithms.
4
+
5
+ ## Step 1: Create a Policy Package
6
+
7
+ Your custom policy should be organized as an installable Python package following LeRobot's plugin conventions.
8
+
9
+ ### Package Structure
10
+
11
+ Create a package with the prefix `lerobot_policy_` (IMPORTANT!) followed by your policy name:
12
+
13
+ ```bash
14
+ lerobot_policy_my_custom_policy/
15
+ ├── pyproject.toml
16
+ └── src/
17
+ └── lerobot_policy_my_custom_policy/
18
+ ├── __init__.py
19
+ ├── configuration_my_custom_policy.py
20
+ ├── modeling_my_custom_policy.py
21
+ └── processor_my_custom_policy.py
22
+ ```
23
+
24
+ ### Package Configuration
25
+
26
+ Set up your `pyproject.toml`:
27
+
28
+ ```toml
29
+ [project]
30
+ name = "lerobot_policy_my_custom_policy"
31
+ version = "0.1.0"
32
+ dependencies = [
33
+ # your policy-specific dependencies
34
+ ]
35
+ requires-python = ">= 3.11"
36
+
37
+ [build-system]
38
+ build-backend = # your-build-backend
39
+ requires = # your-build-system
40
+ ```
41
+
42
+ ## Step 2: Define the Policy Configuration
43
+
44
+ Create a configuration class that inherits from `PreTrainedConfig` and registers your policy type:
45
+
46
+ ```python
47
+ # configuration_my_custom_policy.py
48
+ from dataclasses import dataclass, field
49
+ from lerobot.configs.policies import PreTrainedConfig
50
+ from lerobot.configs.types import NormalizationMode
51
+
52
+ @PreTrainedConfig.register_subclass("my_custom_policy")
53
+ @dataclass
54
+ class MyCustomPolicyConfig(PreTrainedConfig):
55
+ """Configuration class for MyCustomPolicy.
56
+
57
+ Args:
58
+ n_obs_steps: Number of observation steps to use as input
59
+ horizon: Action prediction horizon
60
+ n_action_steps: Number of action steps to execute
61
+ hidden_dim: Hidden dimension for the policy network
62
+ # Add your policy-specific parameters here
63
+ """
64
+     # Illustrative fields matching the docstring above; tune the defaults for your policy
+     n_obs_steps: int = 1
+     horizon: int = 16
+     n_action_steps: int = 8
+     hidden_dim: int = 512
+     normalization_mapping: dict[str, NormalizationMode] = field(
+         default_factory=lambda: {
+             "STATE": NormalizationMode.MEAN_STD,
+             "ACTION": NormalizationMode.MEAN_STD,
+         }
+     )
66
+
67
+ def __post_init__(self):
68
+ super().__post_init__()
69
+ # Add any validation logic here
70
+
71
+ def validate_features(self) -> None:
72
+ """Validate input/output feature compatibility."""
73
+ # Implement validation logic for your policy's requirements
74
+ pass
75
+ ```
76
+
77
+ ## Step 3: Implement the Policy Class
78
+
79
+ Create your policy implementation by inheriting from LeRobot's base `PreTrainedPolicy` class:
80
+
81
+ ```python
82
+ # modeling_my_custom_policy.py
83
+ import torch
84
+ import torch.nn as nn
85
+ from typing import Dict, Any
86
+
87
+ from lerobot.policies.pretrained import PreTrainedPolicy
88
+ from .configuration_my_custom_policy import MyCustomPolicyConfig
89
+
90
+ class MyCustomPolicy(PreTrainedPolicy):
91
+ config_class = MyCustomPolicyConfig
92
+ name = "my_custom_policy"
93
+
94
+ def __init__(self, config: MyCustomPolicyConfig, dataset_stats: Dict[str, Any] = None):
95
+ super().__init__(config, dataset_stats)
96
+         ...  # define your network modules here
+
+     # Then implement PreTrainedPolicy's abstract methods
+     # (method names assumed from the base class interface)
+     def reset(self):
+         # Clear inference-time state (e.g., cached action queues) between rollouts
+         pass
+
+     def forward(self, batch: Dict[str, torch.Tensor]) -> tuple[torch.Tensor, dict]:
+         # Compute and return the training loss (plus a metrics dict) for a batch
+         raise NotImplementedError
+
+     def select_action(self, batch: Dict[str, torch.Tensor]) -> torch.Tensor:
+         # Return the next action to execute at inference time
+         raise NotImplementedError
97
+ ```
98
+
99
+ ## Step 4: Add Data Processors
100
+
101
+ Create processor functions:
102
+
103
+ ```python
104
+ # processor_my_custom_policy.py
105
+ from typing import Any
+
+ import torch
+
+ # Import paths assumed for illustration; adjust them to your LeRobot version
+ from lerobot.processor import PolicyAction, PolicyProcessorPipeline
107
+
108
+
109
+ def make_my_custom_policy_pre_post_processors(
110
+ config,
111
+ ) -> tuple[
112
+ PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
113
+ PolicyProcessorPipeline[PolicyAction, PolicyAction],
114
+ ]:
115
+ """Create preprocessing and postprocessing functions for your policy."""
116
+ pass # Define your preprocessing and postprocessing logic here
117
+
118
+ ```
119
+
120
+ ## Step 5: Package Initialization
121
+
122
+ Expose your classes in the package's `__init__.py`:
123
+
124
+ ```python
125
+ # __init__.py
126
+ """Custom policy package for LeRobot."""
127
+
128
+ try:
129
+ import lerobot # noqa: F401
130
+ except ImportError:
131
+ raise ImportError(
132
+ "lerobot is not installed. Please install lerobot to use this policy package."
133
+ )
134
+
135
+ from .configuration_my_custom_policy import MyCustomPolicyConfig
136
+ from .modeling_my_custom_policy import MyCustomPolicy
137
+ from .processor_my_custom_policy import make_my_custom_policy_pre_post_processors
138
+
139
+ __all__ = [
140
+ "MyCustomPolicyConfig",
141
+ "MyCustomPolicy",
142
+ "make_my_custom_policy_pre_post_processors",
143
+ ]
144
+ ```
145
+
146
+ ## Step 6: Installation and Usage
147
+
148
+ ### Install Your Policy Package
149
+
150
+ ```bash
151
+ cd lerobot_policy_my_custom_policy
152
+ pip install -e .
153
+
154
+ # Or install from PyPI if published
155
+ pip install lerobot_policy_my_custom_policy
156
+ ```
157
+
158
+ ### Use Your Policy
159
+
160
+ Once installed, your policy automatically integrates with LeRobot's training and evaluation tools:
161
+
162
+ ```bash
163
+ lerobot-train \
164
+ --policy.type my_custom_policy \
165
+ --env.type pusht \
166
+ --steps 200000
167
+ ```
168
+
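+ You can also instantiate the policy programmatically. A minimal sketch using the classes defined in this tutorial:
+
+ ```python
+ from lerobot_policy_my_custom_policy import MyCustomPolicy, MyCustomPolicyConfig
+
+ # Importing the package runs @PreTrainedConfig.register_subclass("my_custom_policy"),
+ # which is what makes --policy.type=my_custom_policy available on the CLI.
+ config = MyCustomPolicyConfig()
+ policy = MyCustomPolicy(config)
+ ```
+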
169
+ ## Examples and Community Contributions
170
+
171
+ Check out these example policy implementations:
172
+
173
+ - [DiTFlow Policy](https://github.com/danielsanjosepro/lerobot_policy_ditflow) - Diffusion Transformer policy with flow-matching objective. Try it out in this example: [DiTFlow Example](https://github.com/danielsanjosepro/test_lerobot_policy_ditflow)
174
+
175
+ Share your policy implementations with the community! 🤗
docs/source/cameras.mdx ADDED
@@ -0,0 +1,220 @@
1
+ # Cameras
2
+
3
+ LeRobot offers multiple options for video capture:
4
+
5
+ | Class | Supported Cameras |
6
+ | ----------------- | ----------------------------------- |
7
+ | `OpenCVCamera` | Phone, built-in laptop, USB webcams |
8
+ | `ZMQCamera` | Network-connected cameras |
9
+ | `RealSenseCamera` | Intel RealSense (with depth) |
10
+ | `Reachy2Camera` | Reachy 2 robot cameras |
11
+
12
+ > [!TIP]
13
+ > For `OpenCVCamera` compatibility details, see the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
14
+
15
+ ### Find your camera
16
+
17
+ Every camera requires a unique identifier to be instantiated, allowing you to distinguish between multiple connected devices.
18
+
19
+ `OpenCVCamera` and `RealSenseCamera` support auto-discovery. Run the command below to list available devices and their identifiers. Note that these identifiers may change after rebooting your computer or re-plugging the camera, depending on your operating system.
20
+
21
+ ```bash
22
+ lerobot-find-cameras opencv # or realsense for Intel Realsense cameras
23
+ ```
24
+
25
+ The output will look something like this if you have two cameras connected:
26
+
27
+ ```bash
28
+ --- Detected Cameras ---
29
+ Camera #0:
30
+ Name: OpenCV Camera @ 0
31
+ Type: OpenCV
32
+ Id: 0
33
+ Backend api: AVFOUNDATION
34
+ Default stream profile:
35
+ Format: 16.0
36
+ Width: 1920
37
+ Height: 1080
38
+ Fps: 15.0
39
+ --------------------
40
+ (more cameras ...)
41
+ ```
42
+
43
+ > [!WARNING]
44
+ > When using Intel RealSense cameras in `macOS`, you could get this [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`, this can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras in `macOS` is unstable.
45
+
46
+ `ZMQCamera` and `Reachy2Camera` do not support auto-discovery. They must be configured manually by providing their network address and port or robot SDK settings.
47
+
48
+ ## Use cameras
49
+
50
+ ### Frame access modes
51
+
52
+ All camera classes implement three access modes for capturing frames:
53
+
54
+ | Method | Behavior | Blocks? | Best For |
55
+ | ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------- | ---------------------------------------- |
56
+ | `read()` | Waits for the camera hardware to return a frame. May block for a long time depending on the camera and SDK. | Yes | Simple scripts, sequential capture |
57
+ | `async_read(timeout_ms)` | Returns the latest unconsumed frame from background thread. Blocks only if buffer is empty, up to `timeout_ms`. Raises `TimeoutError` if no frame arrives. | With a timeout | Control loops synchronized to camera FPS |
58
+ | `read_latest(max_age_ms)` | Peeks at the most recent frame in buffer (may be stale). Raises `TimeoutError` if frame is older than `max_age_ms`. | No | UI visualization, logging, monitoring |
59
+
60
+ ### Usage examples
61
+
62
+ The following examples show how to use the camera API to configure and capture frames from different camera types.
63
+
64
+ - **Blocking and non-blocking frame capture** using an OpenCV-based camera
65
+ - **Color and depth capture** using an Intel RealSense camera
66
+
67
+ > [!WARNING]
68
+ > Failing to cleanly disconnect cameras can cause resource leaks. Use the context manager protocol to ensure automatic cleanup:
69
+ >
70
+ > ```python
71
+ > with OpenCVCamera(config) as camera:
72
+ > ...
73
+ > ```
74
+ >
75
+ > You can also call `connect()` and `disconnect()` manually, but always use a `finally` block for the latter.
76
+
77
+ <hfoptions id="shell_restart">
78
+ <hfoption id="Open CV Camera">
79
+
80
+ <!-- prettier-ignore-start -->
81
+ ```python
82
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
83
+ from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
84
+ from lerobot.cameras.configs import ColorMode, Cv2Rotation
85
+
86
+ # Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation.
87
+ config = OpenCVCameraConfig(
88
+     index_or_path=0,
89
+     fps=15,
90
+     width=1920,
91
+     height=1080,
92
+     color_mode=ColorMode.RGB,
93
+     rotation=Cv2Rotation.NO_ROTATION
94
+ )
95
+
96
+ # Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default).
97
+ with OpenCVCamera(config) as camera:
98
+
99
+     # Read a frame synchronously — blocks until hardware delivers a new frame
100
+     frame = camera.read()
101
+     print("read() call returned frame with shape:", frame.shape)
102
+
103
+     # Read a frame asynchronously with a timeout — returns the latest unconsumed frame or waits up to timeout_ms for a new one
104
+     try:
105
+         for i in range(10):
106
+             frame = camera.async_read(timeout_ms=200)
107
+             print(f"async_read() call returned frame {i} with shape:", frame.shape)
108
+     except TimeoutError as e:
109
+         print(f"No frame received within timeout: {e}")
110
+
111
+     # Instantly return a frame - returns the most recent frame captured by the camera
112
+     try:
113
+         initial_frame = camera.read_latest(max_age_ms=1000)
114
+         for i in range(10):
115
+             frame = camera.read_latest(max_age_ms=1000)
116
+             print(f"read_latest() call returned frame {i} with shape:", frame.shape)
117
+         print(f"Was a new frame received by the camera? {(initial_frame != frame).any()}")
118
+     except TimeoutError as e:
119
+         print(f"Frame too old: {e}")
120
+
121
+ ```
122
+ <!-- prettier-ignore-end -->
123
+
124
+ </hfoption>
125
+ <hfoption id="Intel Realsense Camera">
126
+
127
+ <!-- prettier-ignore-start -->
128
+ ```python
129
+ from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
130
+ from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
131
+ from lerobot.cameras.configs import ColorMode, Cv2Rotation
132
+
133
+ # Create a `RealSenseCameraConfig` specifying your camera’s serial number and enabling depth.
134
+ config = RealSenseCameraConfig(
135
+ serial_number_or_name="233522074606",
136
+ fps=15,
137
+ width=640,
138
+ height=480,
139
+ color_mode=ColorMode.RGB,
140
+ use_depth=True,
141
+ rotation=Cv2Rotation.NO_ROTATION
142
+ )
143
+
144
+ # Instantiate and connect a `RealSenseCamera` with warm-up read (default).
145
+ camera = RealSenseCamera(config)
146
+ camera.connect()
147
+
148
+ # Capture a color frame via `read()` and a depth map via `read_depth()`.
149
+ try:
150
+     color_frame = camera.read()
151
+     depth_map = camera.read_depth()
152
+     print("Color frame shape:", color_frame.shape)
153
+     print("Depth map shape:", depth_map.shape)
154
+ finally:
155
+     camera.disconnect()
156
+ ```
157
+ <!-- prettier-ignore-end -->
158
+
159
+ </hfoption>
160
+ </hfoptions>
161
+
162
+ ## Use your phone's camera
163
+
164
+ <hfoptions id="use phone">
165
+ <hfoption id="iPhone & macOS">
166
+
167
+ To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
168
+
169
+ - Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
170
+ - Sign in both devices with the same Apple ID.
171
+ - Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
172
+
173
+ For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
174
+
175
+ </hfoption>
176
+ <hfoption id="OBS virtual camera">
177
+
178
+ If you want to use your phone as a camera through OBS, follow these steps to set up a virtual camera.
179
+
180
+ 1. _(Linux only) Install `v4l2loopback-dkms` and `v4l-utils`_. These packages create virtual camera devices and verify their settings. Install with:
181
+
182
+ ```bash
183
+ sudo apt install v4l2loopback-dkms v4l-utils
184
+ ```
185
+
186
+ 2. _Install the [DroidCam app](https://droidcam.app) on your phone_. This app is available for both iOS and Android.
187
+ 3. _Download and install [OBS Studio](https://obsproject.com)_.
188
+ 4. _Download and install the [DroidCam OBS plugin](https://droidcam.app/obs)_.
189
+ 5. _Start OBS Studio_.
190
+
191
+ 6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480` to avoid the watermarks.
192
+ 7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video` or `OBS > Preferences... > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it.
193
+ 8. _Start virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
194
+ 9. _Verify the virtual camera setup and resolution_.
195
+ - **Linux**: Use `v4l2-ctl` to list devices and check resolution:
196
+ ```bash
197
+ v4l2-ctl --list-devices # find VirtualCam and note its /dev/videoX path
198
+ v4l2-ctl -d /dev/videoX --get-fmt-video # replace with your VirtualCam path
199
+ ```
200
+ You should see `VirtualCam` listed and resolution `640x480`.
201
+ - **macOS**: Open Photo Booth or FaceTime and select "OBS Virtual Camera" as the input.
202
+ - **Windows**: The native Camera app doesn't support virtual cameras. Use a video conferencing app (Zoom, Teams) or run `lerobot-find-cameras opencv` directly to verify.
203
+
204
+ <details>
205
+ <summary><strong>Troubleshooting</strong></summary>
206
+
207
+ > The virtual camera resolution is incorrect.
208
+
209
+ Delete the virtual camera source and recreate it. The resolution cannot be changed after creation.
210
+
211
+ > Error reading frame in background thread for OpenCVCamera(X): OpenCVCamera(X) frame width=640 or height=480 do not match configured width=1920 or height=1080.
212
+
213
+ This error is caused by OBS Virtual Camera advertising a `1920x1080` resolution despite rescaling. The only fix for now is to comment out the width and height check in `_postprocess_image()`.
214
+
215
+ </details>
216
+
217
+ </hfoption>
218
+ </hfoptions>
219
+
220
+ If everything is set up correctly, your phone will appear as a standard OpenCV camera and can be used with `OpenCVCamera`.
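+
+ As a quick sanity check, you can open the virtual camera with the same `OpenCVCamera` API shown above. This is a minimal sketch: the index (`1`) is an assumption, so use whichever identifier `lerobot-find-cameras opencv` reports for your phone.
+
+ ```python
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+ from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
+
+ # Index 1 is a placeholder; substitute the identifier found by `lerobot-find-cameras opencv`
+ config = OpenCVCameraConfig(index_or_path=1, fps=30, width=640, height=480)
+ with OpenCVCamera(config) as camera:
+     frame = camera.read()
+     print("Phone camera frame shape:", frame.shape)
+ ```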
docs/source/contributing.md ADDED
@@ -0,0 +1,83 @@
1
+ # How to contribute to 🤗 LeRobot
2
+
3
+ Everyone is welcome to contribute, and we value everybody's contribution. Code is not the only way to help the community. Answering questions, helping others, reaching out, and improving the documentation are immensely valuable.
4
+
5
+ Whichever way you choose to contribute, please be mindful to respect our [code of conduct](./CODE_OF_CONDUCT.md).
6
+
7
+ ## Ways to Contribute
8
+
9
+ You can contribute in many ways:
10
+
11
+ - **Fixing issues:** Resolve bugs or improve existing code.
12
+ - **New features:** Develop new features.
13
+ - **Extend:** Implement new models/policies, robots, or simulation environments and upload datasets to the Hugging Face Hub.
14
+ - **Documentation:** Improve examples, guides, and docstrings.
15
+ - **Feedback:** Submit tickets related to bugs or desired new features.
16
+
17
+ If you are unsure where to start, join our [Discord Channel](https://discord.gg/q8Dzzpym3f).
18
+
19
+ ## Development Setup
20
+
21
+ To contribute code, you need to set up a development environment.
22
+
23
+ ### 1. Fork and Clone
24
+
25
+ Fork the repository on GitHub, then clone your fork:
26
+
27
+ ```bash
28
+ git clone https://github.com/<your-handle>/lerobot.git
29
+ cd lerobot
30
+ git remote add upstream https://github.com/huggingface/lerobot.git
31
+ ```
32
+
33
+ ### 2. Environment Installation
34
+
35
+ Please follow our [Installation Guide](./docs/source/installation.mdx) for the environment setup & installation from source.
36
+
37
+ ## Running Tests & Quality Checks
38
+
39
+ ### Code Style (Pre-commit)
40
+
41
+ Install `pre-commit` hooks to run checks automatically before you commit:
42
+
43
+ ```bash
44
+ pre-commit install
45
+ ```
46
+
47
+ To run checks manually on all files:
48
+
49
+ ```bash
50
+ pre-commit run --all-files
51
+ ```
52
+
53
+ ### Running Tests
54
+
55
+ We use `pytest`. First, ensure you have test artifacts by installing **git-lfs**:
56
+
57
+ ```bash
58
+ git lfs install
59
+ git lfs pull
60
+ ```
61
+
62
+ Run the full suite (this may require installing optional extras):
63
+
64
+ ```bash
65
+ pytest -sv ./tests
66
+ ```
67
+
68
+ Or run a specific test file during development:
69
+
70
+ ```bash
71
+ pytest -sv tests/test_specific_feature.py
72
+ ```
73
+
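+ You can also select a single test function with pytest's standard node-id syntax:
+
+ ```bash
+ pytest -sv tests/test_specific_feature.py::test_name
+ ```
+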
74
+ ## Submitting Issues & Pull Requests
75
+
76
+ Use the provided templates; they list the required fields and include examples.
77
+
78
+ - **Issues:** Follow the [ticket template](./.github/ISSUE_TEMPLATE/bug-report.yml).
79
+ - **Pull requests:** Rebase on `upstream/main`, use a descriptive branch (don't work on `main`), run `pre-commit` and tests locally, and follow the [PR template](./.github/PULL_REQUEST_TEMPLATE.md); a typical branch setup is sketched below.
80
+
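+ A typical branch setup for a pull request looks like this (the branch name is illustrative):
+
+ ```bash
+ git fetch upstream
+ git checkout -b my-feature upstream/main
+ # ... make and commit your changes, then:
+ git push -u origin my-feature
+ ```
+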
81
+ One member of the LeRobot team will then review your contribution.
82
+
83
+ Thank you for contributing to LeRobot!
docs/source/damiao.mdx ADDED
@@ -0,0 +1,165 @@
1
+ # Damiao Motors and CAN Bus
2
+
3
+ This guide covers setup and usage of Damiao motors with LeRobot via CAN bus communication.
4
+
5
+ Currently, only Linux is supported, as the OpenArms CAN adapter only has drivers for Linux.
6
+
7
+ ## Linux CAN Setup
8
+
9
+ Before using Damiao motors, you need to set up the CAN interface on your Linux system.
10
+
11
+ ### Install CAN Utilities
12
+
13
+ ```bash
14
+ sudo apt-get install can-utils
15
+ ```
16
+
17
+ ### Configure CAN Interface (Manual)
18
+
19
+ For standard CAN FD (recommended for OpenArms):
20
+
21
+ ```bash
22
+ sudo ip link set can0 down
23
+ sudo ip link set can0 type can bitrate 1000000 dbitrate 5000000 fd on
24
+ sudo ip link set can0 up
25
+ ```
26
+
27
+ For standard CAN (without FD):
28
+
29
+ ```bash
30
+ sudo ip link set can0 down
31
+ sudo ip link set can0 type can bitrate 1000000
32
+ sudo ip link set can0 up
33
+ ```
34
+
35
+ ### Configure CAN Interface (Using LeRobot)
36
+
37
+ LeRobot provides a utility script to setup and test CAN interfaces:
38
+
39
+ ```bash
40
+ # Setup multiple interfaces (e.g., OpenArms Followers with 2 CAN buses)
41
+ lerobot-setup-can --mode=setup --interfaces=can0,can1
42
+ ```
43
+
44
+ ## Debugging CAN Communication
45
+
46
+ Use the built-in debug tools to test motor communication:
47
+
48
+ ```bash
49
+ # Test motors on all interfaces
50
+ lerobot-setup-can --mode=test --interfaces=can0,can1
51
+
52
+ # Run speed/latency test
53
+ lerobot-setup-can --mode=speed --interfaces=can0
54
+ ```
55
+
56
+ The test mode will scan for motors (IDs 0x01-0x08) and report which ones respond. Example output:
57
+
58
+ ```
59
+ can0: UP (CAN FD)
60
+   Motor 0x01 (joint_1): ✓ FOUND
61
+     → Response 0x11 [FD]: 00112233...
62
+   Motor 0x02 (joint_2): ✓ FOUND
63
+   Motor 0x03 (joint_3): ✗ No response
64
+   ...
65
+ Summary: 2/8 motors found
66
+ ```
67
+
68
+ ## Usage
69
+
70
+ ### Basic Setup
71
+
72
+ ```python
73
+ from lerobot.motors import Motor
74
+ from lerobot.motors.damiao import DamiaoMotorsBus
75
+
76
+ # Define your motors with send/receive CAN IDs
77
+ motors = {
78
+ "joint_1": Motor(id=0x01, motor_type_str="dm8009", recv_id=0x11),
79
+ "joint_2": Motor(id=0x02, motor_type_str="dm4340", recv_id=0x12),
80
+ "joint_3": Motor(id=0x03, motor_type_str="dm4310", recv_id=0x13),
81
+ }
82
+
83
+ # Create the bus
84
+ bus = DamiaoMotorsBus(
85
+ port="can0", # Linux socketcan interface
86
+ motors=motors,
87
+ )
88
+
89
+ # Connect
90
+ bus.connect()
91
+ ```
92
+
93
+ ### Reading Motor States
94
+
95
+ ```python
96
+ # Read single motor position (degrees)
97
+ position = bus.read("Present_Position", "joint_1")
98
+
99
+ # Read from multiple motors
100
+ positions = bus.sync_read("Present_Position") # All motors
101
+ positions = bus.sync_read("Present_Position", ["joint_1", "joint_2"])
102
+
103
+ # Read all states at once (position, velocity, torque)
104
+ states = bus.sync_read_all_states()
105
+ # Returns: {'joint_1': {'position': 45.2, 'velocity': 1.3, 'torque': 0.5}, ...}
106
+ ```
107
+
108
+ ### Writing Motor Commands
109
+
110
+ ```python
111
+ # Enable torque
112
+ bus.enable_torque()
113
+
114
+ # Set goal position (degrees)
115
+ bus.write("Goal_Position", "joint_1", 45.0)
116
+
117
+ # Set positions for multiple motors
118
+ bus.sync_write("Goal_Position", {
119
+ "joint_1": 45.0,
120
+ "joint_2": -30.0,
121
+ "joint_3": 90.0,
122
+ })
123
+
124
+ # Disable torque
125
+ bus.disable_torque()
126
+ ```
127
+
128
+ ## Configuration Options
129
+
130
+ | Parameter | Default | Description |
131
+ | -------------- | --------- | ----------------------------------------------------------- |
132
+ | `port` | - | CAN interface (`can0`) or serial port (`/dev/cu.usbmodem*`) |
133
+ | `use_can_fd` | `True` | Enable CAN FD for higher data rates |
134
+ | `bitrate` | `1000000` | Nominal bitrate (1 Mbps) |
135
+ | `data_bitrate` | `5000000` | CAN FD data bitrate (5 Mbps) |
136
+
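+ As a sketch, these options would be passed alongside `port` and `motors` when constructing the bus. That `DamiaoMotorsBus` accepts them directly as keyword arguments is an assumption based on the table above:
+
+ ```python
+ # Assumed keyword arguments, matching the CAN FD interface configured earlier
+ bus = DamiaoMotorsBus(
+     port="can0",
+     motors=motors,
+     use_can_fd=True,
+     bitrate=1_000_000,       # 1 Mbps nominal
+     data_bitrate=5_000_000,  # 5 Mbps CAN FD data phase
+ )
+ ```
+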
137
+ ## Motor Configuration
138
+
139
+ Each motor requires:
140
+
141
+ - `id`: CAN ID for sending commands
142
+ - `motor_type_str`: One of the supported motor types (e.g., `"dm8009"`, `"dm4340"`), as in the example above
143
+ - `recv_id`: CAN ID for receiving responses
144
+
145
+ OpenArms default IDs follow the pattern: send ID `0x0N`, receive ID `0x1N` where N is the joint number.
146
+
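+ A minimal sketch that builds the motor map from this pattern, reusing the `Motor` constructor from the example above (the per-joint motor types are illustrative):
+
+ ```python
+ motor_types = ["dm8009", "dm8009", "dm4340", "dm4340", "dm4310", "dm4310", "dm4310"]
+
+ motors = {
+     f"joint_{n}": Motor(id=n, motor_type_str=motor_types[n - 1], recv_id=0x10 + n)
+     for n in range(1, 8)  # send ID 0x0N, receive ID 0x1N
+ }
+ ```
+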
147
+ ## Troubleshooting
148
+
149
+ ### No Response from Motors
150
+
151
+ 1. **Check power**
152
+ 2. **Verify CAN wiring**: Check CAN-H, CAN-L, and GND connections
153
+ 3. **Check motor IDs**: Use Damiao Debugging Tools to verify/configure IDs
154
+ 4. **Test CAN interface**: Run `candump can0` to see if messages are being received
155
+ 5. **Run diagnostics**: `lerobot-setup-can --mode=test --interfaces=can0`
156
+
157
+ ### Motor Timeout Parameter
158
+
159
+ If motors were configured with timeout=0, they won't respond to commands. Use Damiao Debugging Tools to set a non-zero timeout value.
160
+
161
+ ### Verify CAN FD Status
162
+
163
+ ```bash
164
+ ip -d link show can0 | grep fd
165
+ ```
docs/source/dataset_subtask.mdx ADDED
@@ -0,0 +1,278 @@
1
+ # Using Subtasks in LeRobot Datasets
2
+
3
+ Subtask support in robotics datasets has proven effective in improving robot reasoning and understanding. Subtasks are particularly useful for:
4
+
5
+ - **Hierarchical policies**: Building policies that include subtask predictions to visualize robot reasoning in real time
6
+ - **Reward modeling**: Helping reward models understand task progression (e.g., SARM-style stage-aware reward models)
7
+ - **Task decomposition**: Breaking down complex manipulation tasks into atomic, interpretable steps
8
+
9
+ LeRobotDataset now supports subtasks as part of its dataset structure, alongside tasks.
10
+
11
+ ## What are Subtasks?
12
+
13
+ While a **task** describes the overall goal (e.g., "Pick up the apple and place it in the basket"), **subtasks** break down the execution into finer-grained steps:
14
+
15
+ 1. "Approach the apple"
16
+ 2. "Grasp the apple"
17
+ 3. "Lift the apple"
18
+ 4. "Move to basket"
19
+ 5. "Release the apple"
20
+
21
+ Each frame in the dataset can be annotated with its corresponding subtask, enabling models to learn and predict these intermediate stages.
22
+
23
+ <img
24
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/subtask-asset.png"
25
+ alt="An overview of subtask annotation showing how frames are labeled with intermediate subtask stages"
26
+ width="80%"
27
+ />
28
+
29
+ <p>
30
+ <em>Figure: Overview of subtask annotation.</em>
31
+ </p>
32
+
33
+ **Reference:** _Subtask-learning based for robot self-assembly in flexible collaborative assembly in manufacturing_ (published 19 April 2022).
34
+
35
+ ## Dataset Structure
36
+
37
+ Subtask information is stored in the dataset metadata:
38
+
39
+ ```
40
+ my-dataset/
41
+ ├── data/
42
+ │ └── ...
43
+ ├── meta/
44
+ │ ├── info.json
45
+ │ ├── stats.json
46
+ │ ├── tasks.parquet
47
+ │ ├── subtasks.parquet # Subtask index → subtask string mapping
48
+ │ └── episodes/
49
+ │ └── ...
50
+ └── videos/
51
+ └── ...
52
+ ```
53
+
54
+ ### Subtasks Parquet File
55
+
56
+ The `meta/subtasks.parquet` file maps subtask indices to their natural language descriptions:
57
+
58
+ | subtask_index | subtask (index column) |
59
+ | ------------- | ---------------------- |
60
+ | 0 | "Approach the apple" |
61
+ | 1 | "Grasp the apple" |
62
+ | 2 | "Lift the apple" |
63
+ | ... | ... |
64
+
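+ Since this is a plain parquet file, you can inspect the mapping directly with pandas (the path is relative to the dataset root shown above):
+
+ ```python
+ import pandas as pd
+
+ subtasks = pd.read_parquet("my-dataset/meta/subtasks.parquet")
+ print(subtasks.head())
+ ```
+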
65
+ ### Frame-Level Annotations
66
+
67
+ Each frame in the dataset can include a `subtask_index` field that references the subtasks parquet file:
68
+
69
+ ```python
70
+ # Example frame data in the parquet file
71
+ {
72
+ "index": 42,
73
+ "timestamp": 1.4,
74
+ "episode_index": 0,
75
+ "task_index": 0,
76
+ "subtask_index": 2, # References "Lift the apple"
77
+ "observation.state": [...],
78
+ "action": [...],
79
+ }
80
+ ```
81
+
82
+ ## Annotating Datasets with Subtasks
83
+
84
+ We provide a HuggingFace Space for easily annotating any LeRobotDataset with subtasks:
85
+
86
+ **[https://huggingface.co/spaces/lerobot/annotate](https://huggingface.co/spaces/lerobot/annotate)**
87
+
88
+ After completing your annotation:
89
+
90
+ 1. Click "Push to Hub" to upload your annotated dataset
91
+ 2. You can also run the annotation space locally by following the instructions at [github.com/huggingface/lerobot-annotate](https://github.com/huggingface/lerobot-annotate)
92
+
93
+ ## Loading Datasets with Subtasks
94
+
95
+ When you load a dataset with subtask annotations, the subtask information is automatically available:
96
+
97
+ ```python
98
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
99
+
100
+ # Load a dataset with subtask annotations
101
+ dataset = LeRobotDataset("jadechoghari/collect-fruit-annotated")
102
+
103
+ # Access a sample
104
+ sample = dataset[100]
105
+
106
+ # The sample includes both task and subtask information
107
+ print(sample["task"]) # "Collect the fruit"
108
+ print(sample["subtask"]) # "Grasp the apple"
109
+ print(sample["task_index"]) # tensor(0)
110
+ print(sample["subtask_index"]) # tensor(2)
111
+ ```
112
+
113
+ ### Checking for Subtask Support
114
+
115
+ You can check if a dataset has subtask annotations:
116
+
117
+ ```python
118
+ # Check if subtasks are available
119
+ has_subtasks = (
120
+ "subtask_index" in dataset.features
121
+ and dataset.meta.subtasks is not None
122
+ )
123
+
124
+ if has_subtasks:
125
+ print(f"Dataset has {len(dataset.meta.subtasks)} unique subtasks")
126
+ print("Subtasks:", list(dataset.meta.subtasks.index))
127
+ ```
128
+
129
+ ## Using Subtasks for Training
130
+
131
+ ### With the Tokenizer Processor
132
+
133
+ The `TokenizerProcessor` automatically handles subtask tokenization for Vision-Language Action (VLA) models:
134
+
135
+ ```python
136
+ from lerobot.processor.tokenizer_processor import TokenizerProcessor
137
+ from lerobot.processor.pipeline import ProcessorPipeline
138
+
139
+ # Create a tokenizer processor
140
+ tokenizer_processor = TokenizerProcessor(
141
+ tokenizer_name_or_path="google/paligemma-3b-pt-224",
142
+ padding="max_length",
143
+ max_length=64,
144
+ )
145
+
146
+ # The processor will automatically tokenize subtasks if present in the batch
147
+ # and add them to the observation under:
148
+ # - "observation.subtask.tokens"
149
+ # - "observation.subtask.attention_mask"
150
+ ```
151
+
152
+ When subtasks are available in the batch, the tokenizer processor adds:
153
+
154
+ - `observation.subtask.tokens`: Tokenized subtask text
155
+ - `observation.subtask.attention_mask`: Attention mask for the subtask tokens
156
+
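+ As a sketch of how this fits together, assuming the processor is wrapped in a `ProcessorPipeline` and applied to a batch that carries a `"subtask"` string:
+
+ ```python
+ pipeline = ProcessorPipeline(steps=[tokenizer_processor])
+
+ batch = pipeline(batch)  # batch from a dataloader, as in the next section
+ tokens = batch["observation.subtask.tokens"]
+ attention_mask = batch["observation.subtask.attention_mask"]
+ ```
+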
157
+ ### DataLoader with Subtasks
158
+
159
+ ```python
160
+ import torch
161
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
162
+
163
+ dataset = LeRobotDataset("jadechoghari/collect-fruit-annotated")
164
+
165
+ dataloader = torch.utils.data.DataLoader(
166
+     dataset,
167
+     batch_size=16,
168
+     shuffle=True,
169
+ )
170
+
171
+ for batch in dataloader:
172
+     # Access subtask information in the batch
173
+     subtasks = batch["subtask"]  # List of subtask strings
174
+     subtask_indices = batch["subtask_index"]  # Tensor of subtask indices
175
+
176
+     # Use for training hierarchical policies or reward models
177
+     print(f"Batch subtasks: {set(subtasks)}")
178
+ ```
179
+
180
+ ## Example Datasets with Subtask Annotations
181
+
182
+ Try loading a dataset with subtask annotations:
183
+
184
+ ```python
185
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
186
+
187
+ # Example dataset with subtask annotations
188
+ dataset = LeRobotDataset("jadechoghari/collect-fruit-annotated")
189
+
190
+ # Explore the subtasks
191
+ print("Available subtasks:")
192
+ for subtask_name in dataset.meta.subtasks.index:
193
+     print(f" - {subtask_name}")
194
+
195
+ # Get subtask distribution
196
+ subtask_counts = {}
197
+ for i in range(len(dataset)):
198
+     sample = dataset[i]
199
+     subtask = sample["subtask"]
200
+     subtask_counts[subtask] = subtask_counts.get(subtask, 0) + 1
201
+
202
+ print("\nSubtask distribution:")
203
+ for subtask, count in sorted(subtask_counts.items(), key=lambda x: -x[1]):
204
+     print(f" {subtask}: {count} frames")
205
+ ```
206
+
207
+ ## Use Cases
208
+
209
+ ### 1. Hierarchical Policy Training
210
+
211
+ Train policies that predict both actions and current subtask:
212
+
213
+ ```python
214
+ class HierarchicalPolicy(nn.Module):
215
+     def __init__(self, encoder, hidden_dim, action_dim, num_subtasks):
216
+         super().__init__()
217
+         self.encoder = encoder  # any backbone mapping observations to hidden_dim features
218
+         self.action_head = nn.Linear(hidden_dim, action_dim)
219
+         self.subtask_head = nn.Linear(hidden_dim, num_subtasks)
220
+
221
+     def forward(self, observations):
222
+         features = self.encoder(observations)
223
+         actions = self.action_head(features)
224
+         subtask_logits = self.subtask_head(features)
+         return actions, subtask_logits
225
+ ```
226
+
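+ A minimal sketch of the corresponding training step, using `subtask_index` from the batch as a classification target (the loss weighting is an arbitrary illustrative choice):
+
+ ```python
+ import torch.nn.functional as F
+
+ actions, subtask_logits = policy(batch["observation.state"])
+ action_loss = F.mse_loss(actions, batch["action"])
+ subtask_loss = F.cross_entropy(subtask_logits, batch["subtask_index"])
+ loss = action_loss + 0.1 * subtask_loss  # illustrative weighting
+ loss.backward()
+ ```
+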
227
+ ### 2. Stage-Aware Reward Modeling (SARM)
228
+
229
+ Build reward models that understand task progression:
230
+
231
+ ```python
232
+ # SARM predicts:
233
+ # - Stage: Which subtask is being executed (discrete)
234
+ # - Progress: How far along the subtask (continuous 0-1)
235
+
236
+ class SARMRewardModel(nn.Module):
237
+     def __init__(self, encoder, hidden_dim, num_stages):
238
+         super().__init__()
239
+         self.encoder = encoder
240
+         self.stage_classifier = nn.Linear(hidden_dim, num_stages)
241
+         self.progress_regressor = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
+
+     def forward(self, observations):
+         features = self.encoder(observations)
+         stage_logits = self.stage_classifier(features)
+         progress = self.progress_regressor(features)  # continuous progress in [0, 1]
+         return stage_logits, progress
242
+ ```
243
+
244
+ ### 3. Progress Visualization
245
+
246
+ Monitor robot execution by tracking subtask progression:
247
+
248
+ ```python
249
+ def visualize_execution(model, observations, subtask_names):
250
+     for t, obs in enumerate(observations):
251
+         action, subtask_logits = model(obs)
252
+         predicted_subtask = subtask_names[subtask_logits.argmax()]
253
+         print(f"t={t}: Executing '{predicted_subtask}'")
254
+ ```
255
+
256
+ ## API Reference
257
+
258
+ ### LeRobotDataset Properties
259
+
260
+ | Property | Type | Description |
261
+ | --------------------------- | ---------------------- | ------------------------------------------ |
262
+ | `meta.subtasks` | `pd.DataFrame \| None` | DataFrame mapping subtask names to indices |
263
+ | `features["subtask_index"]` | `dict` | Feature spec for subtask_index if present |
264
+
265
+ ### Sample Keys
266
+
267
+ When subtasks are available, each sample includes:
268
+
269
+ | Key | Type | Description |
270
+ | --------------- | -------------- | ------------------------------------ |
271
+ | `subtask_index` | `torch.Tensor` | Integer index of the current subtask |
272
+ | `subtask` | `str` | Natural language subtask description |
273
+
274
+ ## Related Resources
275
+
276
+ - [SARM Paper](https://arxiv.org/pdf/2509.25358) - Stage-Aware Reward Modeling for Long Horizon Robot Manipulation
277
+ - [LeRobot Annotate Space](https://huggingface.co/spaces/lerobot/annotate) - Interactive annotation tool
278
+ - [LeRobotDataset v3.0](./lerobot-dataset-v3) - Dataset format documentation
docs/source/debug_processor_pipeline.mdx ADDED
@@ -0,0 +1,299 @@
1
+ # Debug Your Processor Pipeline
2
+
3
+ Processor pipelines can be complex, especially when chaining multiple transformation steps.
4
+ Unlike simple function calls, pipelines lack natural observability: you can't easily see what happens
5
+ between each step or where things go wrong.
6
+ This guide provides debugging tools and techniques specifically designed to address these challenges
7
+ and help you understand data flow through your pipelines.
8
+
9
+ We'll explore three complementary debugging approaches: **hooks** for runtime monitoring, **step-through debugging** for detailed inspection, and **feature validation** for catching structural mismatches. Each serves a different purpose and together they provide complete visibility into your pipeline's behavior.
10
+
11
+ ## Understanding Hooks
12
+
13
+ Hooks are functions that get called at specific points during pipeline execution.
14
+ They provide a way to inspect, monitor, or modify data without changing your pipeline code.
15
+ Think of them as "event listeners" for your pipeline.
16
+
17
+ ### What is a Hook?
18
+
19
+ A hook is a callback function that gets automatically invoked at specific moments during pipeline execution.
20
+ The concept comes from event-driven programming: imagine you could "hook into" the pipeline's execution flow to observe or react to what's happening.
21
+
22
+ Think of hooks like inserting checkpoints into your pipeline. Every time the pipeline reaches one of these checkpoints, it pauses briefly to call your hook function, giving you a chance to inspect the current state, log information, and validate data.
23
+
24
+ A hook is simply a function that accepts two parameters:
25
+
26
+ - `step_idx: int` - The index of the current processing step (0, 1, 2, etc.)
27
+ - `transition: EnvTransition` - The data transition at that point in the pipeline
28
+
29
+ The beauty of hooks is their non-invasive nature: you can add monitoring, validation, or debugging logic without changing a single line of your pipeline code. The pipeline remains clean and focused on its core logic, while hooks handle the cross-cutting concerns like logging, monitoring, and debugging.
30
+
31
+ ### Before vs After Hooks
32
+
33
+ The pipeline supports two types of hooks:
34
+
35
+ - **Before hooks** (`register_before_step_hook`) - Called before each step executes
36
+ - **After hooks** (`register_after_step_hook`) - Called after each step completes
37
+
38
+ ```python
39
+ def before_hook(step_idx: int, transition: EnvTransition):
40
+ """Called before step processes the transition."""
41
+ print(f"About to execute step {step_idx}")
42
+ # Useful for: logging, validation, setup
43
+
44
+ def after_hook(step_idx: int, transition: EnvTransition):
45
+ """Called after step has processed the transition."""
46
+ print(f"Completed step {step_idx}")
47
+ # Useful for: monitoring results, cleanup, debugging
48
+
49
+ processor.register_before_step_hook(before_hook)
50
+ processor.register_after_step_hook(after_hook)
51
+ ```
52
+
53
+ ### Implementing a NaN Detection Hook
54
+
55
+ Here's a practical example of a hook that detects NaN values:
56
+
57
+ ```python
58
+ def check_nans(step_idx: int, transition: EnvTransition):
59
+ """Check for NaN values in observations."""
60
+ obs = transition.get(TransitionKey.OBSERVATION)
61
+ if obs:
62
+ for key, value in obs.items():
63
+ if isinstance(value, torch.Tensor) and torch.isnan(value).any():
64
+ print(f"NaN detected in {key} at step {step_idx}")
65
+
66
+ # Register the hook to run after each step
67
+ processor.register_after_step_hook(check_nans)
68
+
69
+ # Process your data - the hook will be called automatically
70
+ output = processor(input_data)
71
+
72
+ # Remove the hook when done debugging
73
+ processor.unregister_after_step_hook(check_nans)
74
+ ```
75
+
76
+ ### How Hooks Work Internally
77
+
78
+ Understanding the internal mechanism helps you use hooks more effectively. The pipeline maintains two separate lists: one for before-step hooks and another for after-step hooks. When you register a hook, it's simply appended to the appropriate list.
79
+
80
+ During execution, the pipeline follows a strict sequence: for each processing step, it first calls all before-hooks in registration order, then executes the actual step transformation, and finally calls all after-hooks in registration order. This creates a predictable, sandwich-like structure around each step.
81
+
82
+ The key insight is that hooks don't change the core pipeline logic—they're purely additive. The pipeline's `_forward` method orchestrates this dance between hooks and processing steps, ensuring that your debugging or monitoring code runs at exactly the right moments without interfering with the main data flow.
83
+
84
+ Here's a simplified view of how the pipeline executes hooks:
85
+
86
+ ```python
87
+ class DataProcessorPipeline:
88
+     def __init__(self):
89
+         self.steps = [...]
90
+         self.before_step_hooks = []  # List of before hooks
91
+         self.after_step_hooks = []   # List of after hooks
92
+
93
+     def _forward(self, transition):
94
+         """Internal method that processes the transition through all steps."""
95
+         for step_idx, processor_step in enumerate(self.steps):
96
+             # 1. Call all BEFORE hooks
97
+             for hook in self.before_step_hooks:
98
+                 hook(step_idx, transition)
99
+
100
+             # 2. Execute the actual processing step
101
+             transition = processor_step(transition)
102
+
103
+             # 3. Call all AFTER hooks
104
+             for hook in self.after_step_hooks:
105
+                 hook(step_idx, transition)
106
+
107
+         return transition
108
+
109
+     def register_before_step_hook(self, hook_fn):
110
+         self.before_step_hooks.append(hook_fn)
111
+
112
+     def register_after_step_hook(self, hook_fn):
113
+         self.after_step_hooks.append(hook_fn)
114
+ ```
115
+
116
+ ### Execution Flow
117
+
118
+ The execution flow looks like this:
119
+
120
+ ```
121
+ Input → Before Hook → Step 0 → After Hook → Before Hook → Step 1 → After Hook → ... → Output
122
+ ```
123
+
124
+ For example, with 3 steps and both hook types:
125
+
126
+ ```python
127
+ def timing_before(step_idx, transition):
128
+ print(f"⏱️ Starting step {step_idx}")
129
+
130
+ def validation_after(step_idx, transition):
131
+ print(f"✅ Completed step {step_idx}")
132
+
133
+ processor.register_before_step_hook(timing_before)
134
+ processor.register_after_step_hook(validation_after)
135
+
136
+ # This will output:
137
+ # ⏱️ Starting step 0
138
+ # ✅ Completed step 0
139
+ # ⏱️ Starting step 1
140
+ # ✅ Completed step 1
141
+ # ⏱️ Starting step 2
142
+ # ✅ Completed step 2
143
+ ```
144
+
145
+ ### Multiple Hooks
146
+
147
+ You can register multiple hooks of the same type - they execute in the order registered:
148
+
149
+ ```python
150
+ def log_shapes(step_idx: int, transition: EnvTransition):
151
+     obs = transition.get(TransitionKey.OBSERVATION)
152
+     if obs:
153
+         print(f"Step {step_idx} observation shapes:")
154
+         for key, value in obs.items():
155
+             if isinstance(value, torch.Tensor):
156
+                 print(f"  {key}: {value.shape}")
157
+
158
+ processor.register_after_step_hook(check_nans) # Executes first
159
+ processor.register_after_step_hook(log_shapes) # Executes second
160
+
161
+ # Both hooks will be called after each step in registration order
162
+ output = processor(input_data)
163
+ ```
164
+
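+ As a small example of combining both hook types, here is a sketch of a per-step latency monitor: a before-hook starts a timer and an after-hook reads it.
+
+ ```python
+ import time
+
+ step_start = {}
+
+ def start_timer(step_idx, transition):
+     step_start[step_idx] = time.perf_counter()
+
+ def report_latency(step_idx, transition):
+     elapsed_ms = (time.perf_counter() - step_start[step_idx]) * 1000
+     print(f"Step {step_idx} took {elapsed_ms:.2f} ms")
+
+ processor.register_before_step_hook(start_timer)
+ processor.register_after_step_hook(report_latency)
+ ```
+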
165
+ While hooks are excellent for monitoring specific issues (like NaN detection) or gathering metrics during normal pipeline execution, sometimes you need to dive deeper. When you want to understand exactly what happens at each step or debug complex transformation logic, step-through debugging provides the detailed inspection you need.
166
+
167
+ ## Step-Through Debugging
168
+
169
+ Step-through debugging is like having a slow-motion replay for your pipeline. Instead of watching your data get transformed in one quick blur from input to output, you can pause and examine what happens after each individual step.
170
+
171
+ This approach is particularly valuable when you're trying to understand a complex pipeline, debug unexpected behavior, or verify that each transformation is working as expected. Unlike hooks, which are great for automated monitoring, step-through debugging gives you manual, interactive control over the inspection process.
172
+
173
+ The `step_through()` method is a generator that yields the transition state after each processing step, allowing you to inspect intermediate results. Think of it as creating a series of snapshots of your data as it flows through the pipeline—each snapshot shows you exactly what your data looks like after one more transformation has been applied.
174
+
175
+ ### How Step-Through Works
176
+
177
+ The `step_through()` method fundamentally changes how the pipeline executes. Instead of running all steps in sequence and only returning the final result, it transforms the pipeline into an iterator that yields intermediate results.
178
+
179
+ Here's what happens internally: the method starts by converting your input data into the pipeline's internal transition format, then yields this initial state. Next, it applies the first processing step and yields the result. Then it applies the second step to that result and yields again, and so on. Each `yield` gives you a complete snapshot of the transition at that point.
180
+
181
+ This generator pattern is powerful because it's lazy—the pipeline only computes the next step when you ask for it. This means you can stop at any point, inspect the current state thoroughly, and decide whether to continue. You're not forced to run the entire pipeline just to debug one problematic step.
182
+
183
+ Instead of running the entire pipeline and only seeing the final result, `step_through()` pauses after each step and gives you the intermediate transition:
184
+
185
+ ```python
186
+ # This creates a generator that yields intermediate states
187
+ for i, intermediate_result in enumerate(processor.step_through(input_data)):
188
+ print(f"=== After step {i} ===")
189
+
190
+ # Inspect the observation at this stage
191
+ obs = intermediate_result.get(TransitionKey.OBSERVATION)
192
+ if obs:
193
+ for key, value in obs.items():
194
+ if isinstance(value, torch.Tensor):
195
+ print(f"{key}: shape={value.shape}, dtype={value.dtype}")
196
+ ```
197
+
198
+ ### Interactive Debugging with Breakpoints
199
+
200
+ You can add breakpoints in the step-through loop to interactively debug:
201
+
202
+ ```python
203
+ # Step through the pipeline with debugging
204
+ for i, intermediate in enumerate(processor.step_through(data)):
205
+ print(f"Step {i}: {processor.steps[i].__class__.__name__}")
206
+
207
+ # Set a breakpoint to inspect the current state
208
+ breakpoint() # Debugger will pause here
209
+
210
+ # You can now inspect 'intermediate' in the debugger:
211
+ # - Check tensor shapes and values
212
+ # - Verify expected transformations
213
+ # - Look for unexpected changes
214
+ ```
215
+
216
+ During the debugger session, you can:
217
+
218
+ - Examine `intermediate[TransitionKey.OBSERVATION]` to see observation data
219
+ - Check `intermediate[TransitionKey.ACTION]` for action transformations
220
+ - Inspect any part of the transition to understand what each step does
221
+
222
+ Step-through debugging is perfect for understanding the _data_ transformations, but what about the _structure_ of that data? While hooks and step-through help you debug runtime behavior, you also need to ensure your pipeline produces data in the format expected by downstream components. This is where feature contract validation comes in.
223
+
224
+ ## Validating Feature Contracts
225
+
226
+ Feature contracts define what data structure your pipeline expects as input and produces as output.
227
+ Validating these contracts helps catch mismatches early.
228
+
229
+ ### Understanding Feature Contracts
230
+
231
+ Each processor step has a `transform_features()` method that describes how it changes the data structure:
232
+
233
+ ```python
234
+ # Get the expected output features from your pipeline
235
+ initial_features = {
236
+     PipelineFeatureType.OBSERVATION: {
237
+         "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(7,)),
238
+         "observation.image": PolicyFeature(type=FeatureType.IMAGE, shape=(3, 224, 224))
239
+     },
240
+     PipelineFeatureType.ACTION: {
241
+         "action": PolicyFeature(type=FeatureType.ACTION, shape=(4,))
242
+     }
243
+ }
244
+
245
+ # Check what your pipeline will output
246
+ output_features = processor.transform_features(initial_features)
247
+
248
+ print("Input features:")
249
+ for feature_type, features in initial_features.items():
250
+     print(f" {feature_type}:")
251
+     for key, feature in features.items():
252
+         print(f" {key}: {feature.type.value}, shape={feature.shape}")
253
+
254
+ print("\nOutput features:")
255
+ for feature_type, features in output_features.items():
256
+     print(f" {feature_type}:")
257
+     for key, feature in features.items():
258
+         print(f" {key}: {feature.type.value}, shape={feature.shape}")
259
+ ```
260
+
261
+ ### Verifying Expected Features
262
+
263
+ Check that your pipeline produces the features you expect:
264
+
265
+ ```python
266
+ # Define what features you expect the pipeline to produce
267
+ expected_keys = ["observation.state", "observation.image", "action"]
268
+
269
+ print("Validating feature contract...")
270
+ for expected_key in expected_keys:
271
+     found = False
272
+     for feature_type, features in output_features.items():
273
+         if expected_key in features:
274
+             feature = features[expected_key]
275
+             print(f"✅ {expected_key}: {feature.type.value}, shape={feature.shape}")
276
+             found = True
277
+             break
278
+
279
+     if not found:
280
+         print(f"❌ Missing expected feature: {expected_key}")
281
+ ```
282
+
283
+ This validation helps ensure your pipeline will work correctly with downstream components that expect specific data structures.
284
+
285
+ ## Summary
286
+
287
+ Now that you understand the three debugging approaches, you can tackle any pipeline issue systematically:
288
+
289
+ 1. **Hooks** - For runtime monitoring and validation without modifying pipeline code
290
+ 2. **Step-through** - For inspecting intermediate states and understanding transformations
291
+ 3. **Feature validation** - For ensuring data structure contracts are met
292
+
293
+ **When to use each approach:**
294
+
295
+ - Start with **step-through debugging** when you need to understand what your pipeline does or when something unexpected happens
296
+ - Add **hooks** for continuous monitoring during development and production to catch issues automatically
297
+ - Use **feature validation** before deployment to ensure your pipeline works with downstream components
298
+
299
+ These three tools work together to give you the complete observability that complex pipelines naturally lack. With hooks watching for issues, step-through helping you understand behavior, and feature validation ensuring compatibility, you'll be able to debug any pipeline confidently and efficiently.
docs/source/earthrover_mini_plus.mdx ADDED
@@ -0,0 +1,231 @@
1
+ # EarthRover Mini Plus
2
+
3
+ <img
4
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Earth_Rover_Mini_5_240c9adc-4f9e-44b7-982f-5d1dc24af1d8.png.webp"
5
+ alt="EarthRover Mini Plus"
6
+ width="70%"
7
+ />
8
+
9
+ The EarthRover Mini Plus is a fully open source mobile robot that connects through the cloud using the Frodobots SDK. This lets you control the robot and record datasets for training AI models.
10
+
11
+ ## What You Need
12
+
13
+ ### Hardware
14
+
15
+ - EarthRover Mini robot
16
+ - Computer with Python 3.10 or newer
17
+ - Internet connection
18
+
19
+ ### Setting Up the Frodobots SDK
20
+
21
+ The robot needs the [Frodobots SDK](https://github.com/frodobots-org/earth-rovers-sdk) running on your computer. Here's how:
22
+
23
+ 1. Download and install the SDK:
24
+
25
+ ```bash
26
+ git clone https://github.com/frodobots-org/earth-rovers-sdk.git
27
+ cd earth-rovers-sdk
28
+ pip install -r requirements.txt
29
+ ```
30
+
31
+ 2. Save Credentials:
32
+
33
+ Create a `.env` file containing the SDK API token and bot slug provided by the Frodobots team:
34
+
35
+ ```bash
36
+ SDK_API_TOKEN=your_sdk_api_token_here
37
+ BOT_SLUG=your_bot_slug_here
38
+ CHROME_EXECUTABLE_PATH=/path/to/chrome_or_chromium
39
+ # Default value is MAP_ZOOM_LEVEL=18 https://wiki.openstreetmap.org/wiki/Zoom_levels
40
+ MAP_ZOOM_LEVEL=18
41
+ MISSION_SLUG=your_mission_slug_here
42
+ # Image quality between 0.1 and 1.0 (default: 0.8)
43
+ # Recommended: 0.8 for better performance
44
+ IMAGE_QUALITY=0.8
45
+ # Image format: jpeg, png or webp (default: png)
46
+ # Recommended: jpeg for better performance and lower bandwidth usage
47
+ IMAGE_FORMAT=jpeg
48
+ ```
49
+
50
+ 3. Start the SDK:
51
+
52
+ ```bash
53
+ hypercorn main:app --reload
54
+ ```
55
+
56
+ 4. Open your web browser and go to `http://localhost:8000`, then click "Join"
57
+
58
+ The SDK gives you:
59
+
60
+ - Live video from front and rear cameras
61
+
62
+ > [!IMPORTANT]
63
+ > The SDK must be running before you can use the robot.
64
+
65
+ ## Install LeRobot
66
+
67
+ Follow our [Installation Guide](./installation) to install LeRobot.
68
+
69
+ In addition to the base installation, install LeRobot from source so the EarthRover Mini modules are available:
70
+
71
+ ```bash
72
+ pip install -e .
73
+ ```
74
+
75
+ ## How It Works
76
+
77
+ The robot uses the internet to communicate:
78
+
79
+ - **Movement commands**: Sent through the SDK
80
+ - **Camera video**: Received from the SDK
81
+ - **Robot info**: Battery, location, speed from the SDK
82
+
83
+ You don't need to plug anything in - it all works through the SDK.
84
+
85
+ ## Calibration
86
+
87
+ No calibration needed! The robot is ready to use as soon as the SDK is running.
88
+
89
+ ## Controlling the Robot
90
+
91
+ You control the robot using your keyboard - just like playing a video game with WASD keys.
92
+
93
+ ### Keyboard Controls
94
+
95
+ | Key | Action |
96
+ | --- | -------------------------------- |
97
+ | W | Move forward |
98
+ | S | Move backward |
99
+ | A | Turn left (with forward motion) |
100
+ | D | Turn right (with forward motion) |
101
+ | Q | Rotate left in place |
102
+ | E | Rotate right in place |
103
+ | X | Stop all movement |
104
+ | +/= | Increase speed |
105
+ | - | Decrease speed |
106
+ | ESC | Disconnect |
107
+
108
+ ### Speed Settings
109
+
110
+ You can adjust how fast the robot moves:
111
+
112
+ - **Forward/backward speed**: Default is full speed (1.0)
113
+ - **Turning speed**: Default is full speed (1.0)
114
+ - **Speed changes**: Use +/- keys to adjust by 0.1 each time
115
+
116
+ ### Try It Out
117
+
118
+ Test driving the robot before recording data:
119
+
120
+ ```python
121
+ from lerobot.robots.earthrover_mini_plus import EarthRoverMiniPlus, EarthRoverMiniPlusConfig
122
+ from lerobot.teleoperators.keyboard import KeyboardRoverTeleop, KeyboardRoverTeleopConfig
123
+
124
+ # Initialize robot
125
+ robot_config = EarthRoverMiniPlusConfig()
126
+ robot = EarthRoverMiniPlus(robot_config)
127
+
128
+ # Initialize teleoperator
129
+ teleop_config = KeyboardRoverTeleopConfig(
130
+     linear_speed=1.0,
131
+     angular_speed=1.0,
132
+     speed_increment=0.1
133
+ )
134
+ teleop = KeyboardRoverTeleop(teleop_config)
135
+
136
+ # Connect
137
+ robot.connect()
138
+ teleop.connect()
139
+
140
+ # Teleoperate (use keyboard controls)
141
+ try:
142
+     while True:
143
+         action = teleop.get_action()
144
+         robot.send_action(action)
145
+ except KeyboardInterrupt:
146
+     pass
147
+ finally:
148
+     robot.disconnect()
149
+     teleop.disconnect()
150
+ ```
151
+
152
+ > [!TIP]
153
+ > If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
154
+
155
+ ## Recording Data
156
+
157
+ Once you can drive the robot well, you can start recording data to train AI models. The system records:
158
+
159
+ - **What you do**: How you move the robot (forward, backward, turning)
160
+ - **What the robot sees**:
161
+ - Videos from both cameras
162
+ - Robot speed and direction
163
+ - Battery level and location
164
+ - GPS position and signal
165
+ - Other sensor data
166
+ - **When it happened**: Timestamps for everything
167
+
168
+ ### Setting Up Hugging Face
169
+
170
+ We use Hugging Face to store your data online. First, log in with your token from [Hugging Face settings](https://huggingface.co/settings/tokens):
171
+
172
+ ```bash
173
+ huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
174
+ ```
175
+
176
+ Store your Hugging Face username:
177
+
178
+ ```bash
179
+ HF_USER=$(huggingface-cli whoami | head -n 1)
180
+ echo $HF_USER
181
+ ```
182
+
183
+ ### Start Recording
184
+
185
+ Use the standard recording command:
186
+
187
+ ```bash
188
+ python src/lerobot/scripts/lerobot_record.py \
189
+ --robot.type=earthrover_mini_plus \
190
+ --teleop.type=keyboard_rover \
191
+ --dataset.repo_id=your_username/dataset_name \
192
+ --dataset.num_episodes=2 \
193
+ --dataset.fps=10 \
194
+ --dataset.single_task="Navigate around obstacles" \
195
+ --display_data=true
196
+ ```
197
+
198
+ Replace `your_username/dataset_name` with your Hugging Face username and a name for your dataset.
199
+
200
+ ### What Gets Saved
201
+
202
+ Your dataset includes:
203
+
204
+ **Your Actions (2 things)**:
205
+
206
+ - How much you moved forward/backward
207
+ - How much you turned left/right
208
+
209
+ **Robot Observations (12 things)**:
210
+
211
+ - Front camera video
212
+ - Rear camera video
213
+ - Current speed
214
+ - Battery level
215
+ - Which way the robot is facing
216
+ - GPS location (latitude, longitude, signal strength)
217
+ - Network signal strength
218
+ - Vibration level
219
+ - Lamp status (on/off)
220
+
221
+ ### Where Your Data Goes
222
+
223
+ On your computer: `~/.cache/huggingface/lerobot/{repo-id}`
224
+
225
+ After recording, your data automatically uploads to your Hugging Face page:
226
+
227
+ ```bash
228
+ echo https://huggingface.co/datasets/${HF_USER}/dataset_name
229
+ ```
230
+
231
+ Your dataset will be tagged with `LeRobot` for community discovery.
docs/source/env_processor.mdx ADDED
@@ -0,0 +1,418 @@
1
+ # Environment Processors
2
+
3
+ Environment processors are a critical layer in LeRobot's data processing architecture that handle **environment-specific** transformations, separate from policy-specific processing. This separation of concerns enables cleaner code, better modularity, and easier experimentation with different environments and policies.
4
+
5
+ ## Why Environment Processors?
6
+
7
+ When working with different robot environments (LIBERO, MetaWorld, Aloha, etc.), each environment often has unique data formats, coordinate systems, and conventions that need standardization **before** policy processing. Without environment processors, these transformations would be:
8
+
9
+ 1. **Hardcoded in environment code** - Making it difficult to experiment with different state representations
10
+ 2. **Duplicated across policies** - Each policy would need to handle environment-specific quirks
11
+ 3. **Mixed with policy logic** - Violating separation of concerns and making debugging harder
12
+
13
+ Environment processors solve this by providing a **dedicated processing layer** between raw environment observations and policy inputs.
14
+
15
+ ## The Processing Pipeline
16
+
17
+ Here's how data flows through the complete processing pipeline during evaluation:
18
+
19
+ ```python
20
+ # In lerobot_eval.py rollout() function:
21
+
22
+ # 1. Raw environment observation (numpy arrays, various formats)
23
+ raw_observation = env.step(action)
24
+
25
+ # 2. Convert numpy to torch, normalize images [0,1]
26
+ observation = preprocess_observation(raw_observation)
27
+
28
+ # 3. Add task metadata (for multi-task environments)
29
+ observation = add_envs_task(env, observation)
30
+
31
+ # 4. ENVIRONMENT-SPECIFIC preprocessing (NEW!)
32
+ # - Flatten robot states
33
+ # - Rotate images to match dataset conventions
34
+ # - Handle environment-specific coordinate systems
35
+ observation = env_preprocessor(observation)
36
+
37
+ # 5. POLICY-SPECIFIC preprocessing
38
+ # - Normalize with dataset statistics
39
+ # - Add batch dimensions
40
+ # - Move to GPU
41
+ # - Tokenize language instructions
42
+ observation = preprocessor(observation)
43
+
44
+ # 6. Policy inference
45
+ action = policy.select_action(observation)
46
+
47
+ # 7. POLICY-SPECIFIC postprocessing
48
+ # - Unnormalize actions
49
+ # - Remove batch dimensions
50
+ action = postprocessor(action)
51
+
52
+ # 8. ENVIRONMENT-SPECIFIC postprocessing (NEW!)
53
+ # - Convert action formats if needed
54
+ # - Apply environment-specific constraints
55
+ action_transition = {"action": action}
56
+ action_transition = env_postprocessor(action_transition)
57
+ action = action_transition["action"]
58
+
59
+ # 9. Execute in environment
60
+ env.step(action)
61
+ ```
62
+
63
+ ## The Benefits
64
+
65
+ ### 1. **Separation of Concerns**
66
+
67
+ Environment processors handle transformations specific to the **environment's data format**, while policy processors handle transformations specific to the **model's requirements**.
68
+
69
+ ```python
70
+ # ❌ Before: Mixed concerns
71
+ class LiberoVLAPolicy:
72
+     def preprocess(self, obs):
73
+         # Environment-specific: Flatten robot state (shouldn't be in policy!)
74
+         state = self._flatten_robot_state(obs["robot_state"])
75
+         # Policy-specific: Normalize with dataset stats
76
+         state = self.normalizer(state)
77
+         return state
78
+
79
+ # ✅ After: Clear separation
80
+ # Environment processor: Handles LIBERO's nested robot state
81
+ env_preprocessor = LiberoProcessorStep() # Flattens robot_state
82
+
83
+ # Policy processor: Handles model requirements
84
+ policy_preprocessor = NormalizerProcessorStep(stats=dataset_stats)
85
+ ```
86
+
87
+ ### 2. **Flexibility and Reusability**
88
+
89
+ The same policy can work with different environment processors, and the same environment processor can work with different policies:
90
+
91
+ ```python
92
+ # Use SmolVLA policy with LIBERO environment
93
+ libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
94
+ smolvla_preprocessor, smolvla_postprocessor = make_pre_post_processors(smolvla_cfg)
95
+
96
+ # Or use ACT policy with the same LIBERO environment
97
+ libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
98
+ act_preprocessor, act_postprocessor = make_pre_post_processors(act_cfg)
99
+ ```
100
+
101
+ ### 3. **Easier Experimentation**
102
+
103
+ Want to try different state representations for LIBERO? Just create a new processor:
104
+
105
+ ```python
106
+ # Original: 8D state (pos + quat→axisangle + gripper)
107
+ @ProcessorStepRegistry.register("libero_processor")
108
+ class LiberoProcessorStep(ObservationProcessorStep):
109
+ def _process_observation(self, obs):
110
+ eef_pos = robot_state["eef"]["pos"] # 3D
111
+ eef_axisangle = quat2axisangle(quat) # 3D
112
+ gripper = robot_state["gripper"]["qpos"] # 2D
113
+ state = torch.cat([eef_pos, eef_axisangle, gripper], dim=-1) # 8D
114
+ return state
115
+
116
+ # Experiment: Add velocity for better control
117
+ @ProcessorStepRegistry.register("libero_velocity_processor")
118
+ class LiberoVelocityProcessorStep(ObservationProcessorStep):
119
+ def _process_observation(self, obs):
120
+ # Include velocities for 14D state
121
+ eef_pos = robot_state["eef"]["pos"] # 3D
122
+ eef_axisangle = quat2axisangle(quat) # 3D
123
+ eef_vel = robot_state["eef"]["vel"] # 3D (NEW)
124
+ gripper_pos = robot_state["gripper"]["qpos"] # 2D
125
+ gripper_vel = robot_state["gripper"]["qvel"] # 3D (NEW)
126
+ state = torch.cat([eef_pos, eef_axisangle, eef_vel,
127
+ gripper_pos, gripper_vel], dim=-1) # 14D
128
+ return state
129
+ ```
130
+
131
+ ### 4. **Cleaner Environment Code**
132
+
133
+ Environments expose **all available data** without needing to know what downstream models will use:
134
+
135
+ ```python
136
+ # LIBERO environment exposes full robot state
137
+ observation = {
138
+ "pixels": {"image": img, "image2": img2},
139
+ "robot_state": {
140
+ "eef": {"pos": ..., "quat": ..., "vel": ..., "mat": ..., "axisangle": ...},
141
+ "gripper": {"qpos": ..., "qvel": ...},
142
+ "joints": {"pos": ..., "vel": ...}
143
+ }
144
+ }
145
+
146
+ # Environment processor decides what to use
147
+ # Policy processor handles model-specific transformations
148
+ ```
149
+
150
+ ## Using Environment Processors
151
+
152
+ ### Factory Function
153
+
154
+ The `make_env_pre_post_processors` function follows the same pattern as `make_pre_post_processors` for policies:
155
+
156
+ ```python
157
+ from lerobot.envs.factory import make_env_pre_post_processors
158
+ from lerobot.envs.configs import LiberoEnv, PushtEnv
159
+
160
+ # For LIBERO: Returns LiberoProcessorStep in preprocessor
161
+ libero_cfg = LiberoEnv(task="libero_spatial", camera_name=["agentview"])
162
+ env_preprocessor, env_postprocessor = make_env_pre_post_processors(libero_cfg)
163
+
164
+ # For other environments: Returns identity processors (no-op)
165
+ pusht_cfg = PushtEnv()
166
+ env_preprocessor, env_postprocessor = make_env_pre_post_processors(pusht_cfg)
167
+ ```
168
+
169
+ ### Implementation in `envs/factory.py`
170
+
171
+ ```python
172
+ def make_env_pre_post_processors(
173
+ env_cfg: EnvConfig,
174
+ ) -> tuple[
175
+ PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
176
+ PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
177
+ ]:
178
+ """
179
+ Create preprocessor and postprocessor pipelines for environment observations.
180
+
181
+ Args:
182
+ env_cfg: The configuration of the environment.
183
+
184
+ Returns:
185
+ A tuple containing:
186
+ - preprocessor: Pipeline that processes environment observations
187
+ - postprocessor: Pipeline that processes environment outputs
188
+ """
189
+ # For LIBERO environments, add the LiberoProcessorStep to preprocessor
190
+ if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
191
+ preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
192
+ else:
193
+ # For all other environments, return an identity preprocessor
194
+ preprocessor = PolicyProcessorPipeline(steps=[])
195
+
196
+ # Postprocessor is currently identity for all environments
197
+ # Future: Could add environment-specific action transformations
198
+ postprocessor = PolicyProcessorPipeline(steps=[])
199
+
200
+ return preprocessor, postprocessor
201
+ ```
202
+
203
+ ### Integration in Evaluation
204
+
205
+ In `lerobot_eval.py`, the environment processors are created once and used throughout:
206
+
207
+ ```python
208
+ def eval_main(cfg: EvalPipelineConfig):
209
+ # Create environment
210
+ envs = make_env(cfg.env, n_envs=cfg.eval.batch_size)
211
+
212
+ # Create policy
213
+ policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)
214
+
215
+ # Create policy processors
216
+ preprocessor, postprocessor = make_pre_post_processors(
217
+ policy_cfg=cfg.policy,
218
+ pretrained_path=cfg.policy.pretrained_path,
219
+ )
220
+
221
+ # Create environment processors (NEW!)
222
+ env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env)
223
+
224
+ # Run evaluation with both processor types
225
+ eval_policy_all(
226
+ envs=envs,
227
+ policy=policy,
228
+ env_preprocessor=env_preprocessor, # Environment-specific
229
+ env_postprocessor=env_postprocessor, # Environment-specific
230
+ preprocessor=preprocessor, # Policy-specific
231
+ postprocessor=postprocessor, # Policy-specific
232
+ n_episodes=cfg.eval.n_episodes,
233
+ )
234
+ ```
235
+
236
+ ## Example: LIBERO Environment Processor
237
+
238
+ The `LiberoProcessorStep` demonstrates a real-world environment processor:
239
+
240
+ ```python
241
+ from lerobot.processor.pipeline import ObservationProcessorStep
242
+
243
+ @dataclass
244
+ @ProcessorStepRegistry.register(name="libero_processor")
245
+ class LiberoProcessorStep(ObservationProcessorStep):
246
+ """
247
+ Processes LIBERO observations into the LeRobot format.
248
+
249
+ **State Processing:**
250
+ - Extracts end-effector position (3D)
251
+ - Converts quaternion to axis-angle representation (3D)
252
+ - Extracts gripper joint positions (2D)
253
+ - Concatenates into 8D state vector
254
+
255
+ **Image Processing:**
256
+ - Rotates images 180° to match HuggingFaceVLA/libero convention
257
+ """
258
+
259
+ def _process_observation(self, observation):
260
+ processed_obs = observation.copy()
261
+
262
+ # Process images: Flip 180° for camera convention
263
+ for key in list(processed_obs.keys()):
264
+ if key.startswith("observation.images."):
265
+ img = processed_obs[key]
266
+ img = torch.flip(img, dims=[2, 3]) # Flip H and W
267
+ processed_obs[key] = img
268
+
269
+ # Process robot_state: Flatten to 8D vector
270
+ if "observation.robot_state" in processed_obs:
271
+ robot_state = processed_obs.pop("observation.robot_state")
272
+
273
+ eef_pos = robot_state["eef"]["pos"] # (B, 3)
274
+ eef_quat = robot_state["eef"]["quat"] # (B, 4)
275
+ gripper_qpos = robot_state["gripper"]["qpos"] # (B, 2)
276
+
277
+ # Convert quaternion to axis-angle
278
+ eef_axisangle = self._quat2axisangle(eef_quat) # (B, 3)
279
+
280
+ # Concatenate into single state vector
281
+ state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1)
282
+ state = state.float()
283
+
284
+ processed_obs["observation.state"] = state
285
+
286
+ return processed_obs
287
+ ```
288
+
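+ The step above delegates the quaternion conversion to `self._quat2axisangle`, which is elided here. A minimal sketch of such a helper, assuming unit quaternions in `(x, y, z, w)` order (the robosuite convention used by LIBERO):
+
+ ```python
+ import torch
+
+ def quat2axisangle(quat: torch.Tensor) -> torch.Tensor:
+     """Convert unit quaternions (..., 4) in (x, y, z, w) order to axis-angle (..., 3)."""
+     xyz, w = quat[..., :3], quat[..., 3:4].clamp(-1.0, 1.0)
+     angle = 2.0 * torch.acos(w)  # rotation angle in [0, 2*pi]
+     sin_half = torch.sqrt((1.0 - w * w).clamp_min(1e-12))
+     axis = xyz / sin_half  # unit rotation axis
+     # Near-zero rotations have an ill-defined axis; return the zero vector there
+     return torch.where(sin_half > 1e-6, axis * angle, torch.zeros_like(xyz))
+ ```
+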
289
+ ### Why These Transformations?
290
+
291
+ 1. **Image Rotation**: The HuggingFaceVLA/libero dataset has images rotated 180° from the raw LIBERO simulator. The processor handles this convention mismatch so policies trained on the dataset work seamlessly.
292
+
293
+ 2. **State Flattening**: The raw LIBERO environment exposes nested dictionaries with all available state information (position, quaternion, velocity, matrix representation, etc.). The processor:
294
+ - Selects the relevant components (pos, quat, gripper)
295
+ - Converts quaternion to axis-angle (more suitable for learning)
296
+ - Flattens to a single 8D vector that policies expect
297
+
298
+ 3. **Flexibility**: The environment still exposes **all** raw data. If you want to try different state representations (e.g., including velocities, using matrix representation instead of axis-angle), you can create a new processor without modifying the environment code.
299
+
300
+ ## Adding Environment Processors for New Environments
301
+
302
+ To add environment processors for a new environment:
303
+
304
+ ### 1. Create the Processor Step
305
+
306
+ ```python
307
+ # In src/lerobot/processor/env_processor.py
308
+
309
+ @dataclass
310
+ @ProcessorStepRegistry.register(name="myenv_processor")
311
+ class MyEnvProcessorStep(ObservationProcessorStep):
312
+ """Process observations from MyEnv."""
313
+
314
+ def _process_observation(self, observation):
315
+ processed = observation.copy()
316
+
317
+ # Your environment-specific transformations
318
+ if "myenv.specific.state" in processed:
319
+ state = processed.pop("myenv.specific.state")
320
+ # Transform to standard format
321
+ processed["observation.state"] = self._transform_state(state)
322
+
323
+ return processed
324
+ ```
325
+
326
+ ### 2. Update the Factory
327
+
328
+ ```python
329
+ # In src/lerobot/envs/factory.py
330
+
331
+ def make_env_pre_post_processors(env_cfg: EnvConfig):
332
+ if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
333
+ preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
334
+ elif isinstance(env_cfg, MyEnvConfig) or "myenv" in env_cfg.type:
335
+ preprocessor = PolicyProcessorPipeline(steps=[MyEnvProcessorStep()])
336
+ else:
337
+ preprocessor = PolicyProcessorPipeline(steps=[])
338
+
339
+ postprocessor = PolicyProcessorPipeline(steps=[])
340
+ return preprocessor, postprocessor
341
+ ```
342
+
343
+ ### 3. Use in Evaluation
344
+
345
+ No changes needed! The evaluation script automatically uses the appropriate processor:
346
+
347
+ ```bash
348
+ lerobot-eval \
349
+ --policy.path=lerobot/my_policy \
350
+ --env.type=myenv \ # Automatically uses MyEnvProcessorStep
351
+ --eval.n_episodes=10
352
+ ```
353
+
354
+ ## Future: Environment Postprocessors
355
+
356
+ Currently, postprocessors are identity (no-op) for all environments. Future use cases include:
357
+
358
+ ### Action Space Transformations
359
+
360
+ ```python
361
+ @dataclass
362
+ class MyEnvActionPostprocessor(ProcessorStep):
363
+ """Convert policy actions to environment-specific format."""
364
+
365
+ def __call__(self, transition: EnvTransition) -> EnvTransition:
366
+ action = transition["action"]
367
+
368
+ # Example: Convert from Cartesian to joint space
369
+ if self.action_space == "joint":
370
+ action = self.ik_solver(action)
371
+
372
+ # Example: Apply environment-specific safety limits
373
+ action = torch.clamp(action, self.min_action, self.max_action)
374
+
375
+ transition["action"] = action
376
+ return transition
377
+ ```
378
+
379
+ ### Coordinate System Conversions
380
+
381
+ ```python
382
+ @dataclass
383
+ class CoordinateTransformPostprocessor(ProcessorStep):
384
+ """Transform actions between coordinate systems."""
385
+
386
+ def __call__(self, transition: EnvTransition) -> EnvTransition:
387
+ action = transition["action"]
388
+
389
+ # Example: Policy outputs in world frame, env expects base frame
390
+ action = self.world_to_base_transform(action)
391
+
392
+ transition["action"] = action
393
+ return transition
394
+ ```
395
+
396
+ ## Best Practices
397
+
398
+ 1. **Keep environment processors simple**: They should only handle environment-specific data format issues, not complex learning-related transformations.
399
+
400
+ 2. **Use policy processors for model requirements**: Normalization, batching, device placement, and tokenization belong in policy processors.
401
+
402
+ 3. **Expose all data from environments**: Let processors decide what to use rather than hardcoding choices in the environment.
403
+
404
+ 4. **Document conventions**: Clearly document any coordinate system conventions, camera orientations, or data formats that your processor handles.
405
+
406
+ 5. **Test independently**: Environment processors should be testable without loading full policies or environments (see the sketch below).
407
+
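+ As an illustration of point 5 above, a processor step can be exercised with nothing but a synthetic observation dict. A minimal sketch (the import path is assumed from the file layout shown earlier; shapes follow the LIBERO example):
+
+ ```python
+ import torch
+ from lerobot.processor.env_processor import LiberoProcessorStep  # assumed location
+
+ def test_libero_processor_flattens_state():
+     step = LiberoProcessorStep()
+     obs = {
+         "observation.robot_state": {
+             "eef": {"pos": torch.zeros(1, 3), "quat": torch.tensor([[0.0, 0.0, 0.0, 1.0]])},
+             "gripper": {"qpos": torch.zeros(1, 2)},
+         }
+     }
+     out = step._process_observation(obs)
+     assert out["observation.state"].shape == (1, 8)  # pos(3) + axis-angle(3) + gripper(2)
+ ```
+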
408
+ ## Summary
409
+
410
+ Environment processors provide a **clean separation** between environment-specific data transformations and policy-specific model requirements. This architecture:
411
+
412
+ - ✅ Enables easy experimentation with different state representations
413
+ - ✅ Allows policies to work seamlessly across different environments
414
+ - ✅ Keeps environment code focused on simulation/hardware interface
415
+ - ✅ Makes processor pipelines more maintainable and debuggable
416
+ - ✅ Follows the single responsibility principle
417
+
418
+ The key insight: **Environments define data formats, processors standardize them, policies consume standardized data.** Each layer has a clear, focused responsibility.
docs/source/envhub.mdx ADDED
@@ -0,0 +1,431 @@
1
+ # Loading Environments from the Hub
2
+
3
+ The **EnvHub** feature allows you to load simulation environments directly from the Hugging Face Hub with a single line of code. This unlocks a powerful new model for collaboration: instead of environments being locked away inside monolithic libraries, anyone can publish custom environments and share them with the community.
4
+
5
+ ## What is EnvHub?
6
+
7
+ EnvHub lets you create custom robotics simulation environments with your own robot models and scenarios, and make them easily usable by anyone through the LeRobot framework.
8
+
9
+ EnvHub packages are stored on the Hugging Face Hub, and can be seamlessly pulled and used in your AI robotics projects through LeRobot with a single line of code.
10
+
11
+ Thanks to EnvHub, you can:
12
+
13
+ 1. **Create and publish environments** to the Hugging Face Hub as Git repositories, and distribute complex physics simulations without packaging hassles
14
+ 2. **Load environments** dynamically, without installing them as packages
15
+ 3. **Version and track** environment changes using Git semantics
16
+ 4. **Discover** new simulation tasks shared by the community
17
+
18
+ This design means you can go from discovering an interesting environment on the Hub to running experiments in seconds, or create your own custom robot and environment without worrying about dependency conflicts or complex installation procedures.
19
+
20
+ When you create an EnvHub package, you can build anything you want inside it and use any simulation tool you like: this is your own space to play with. The only requirement is that the package contains an `env.py` file that defines the environment and allows LeRobot to load and use your EnvHub package.
21
+
22
+ This `env.py` file needs to expose a small API so LeRobot can load and run it. In particular, you must provide a `make_env(n_envs: int = 1, use_async_envs: bool = False)` or `make_env(n_envs: int = 1, use_async_envs: bool = False, cfg: EnvConfig)` function, which is the main entry point for LeRobot. It should return one of:
23
+
24
+ - A `gym.vector.VectorEnv` (most common)
25
+ - A single `gym.Env` (will be automatically wrapped)
26
+ - A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)
27
+
28
+ You can also pass an `EnvConfig` object to `make_env` to configure the environment (e.g. the number of environments, task, camera name, initial states, control mode, episode length, etc.).
29
+
30
+ Finally, your environment must implement the standard `gym.vector.VectorEnv` interface so it works with LeRobot, including methods like `reset` and `step`.
31
+
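+ For illustration, a minimal sketch of the config-aware variant (the `episode_length` fallback here is an assumption for the example, not a required default):
+
+ ```python
+ import gymnasium as gym
+
+ def make_env(n_envs: int = 1, use_async_envs: bool = False, cfg=None):
+     # Read optional settings from the EnvConfig, falling back to a default
+     episode_length = getattr(cfg, "episode_length", 500) if cfg is not None else 500
+
+     def _make_single_env():
+         return gym.make("CartPole-v1", max_episode_steps=episode_length)
+
+     env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
+     return env_cls([_make_single_env for _ in range(n_envs)])
+ ```
+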
32
+ ## Quick Start
33
+
34
+ Loading an environment from the Hub is as simple as:
35
+
36
+ ```python
37
+ from lerobot.envs.factory import make_env
38
+
39
+ # Load a hub environment (requires explicit consent to run remote code)
40
+ env = make_env("lerobot/cartpole-env", trust_remote_code=True)
41
+ ```
42
+
43
+ <Tip warning={true}>
44
+ **Security Notice**: Loading environments from the Hub executes Python code
45
+ from third-party repositories. Only use `trust_remote_code=True` with
46
+ repositories you trust. We strongly recommend pinning to a specific commit
47
+ hash for reproducibility and security.
48
+ </Tip>
49
+
50
+ ## Repository Structure
51
+
52
+ To make your environment loadable from the Hub, your repository must contain at minimum:
53
+
54
+ ### Required Files
55
+
56
+ **`env.py`** (or custom Python file)
57
+
58
+ - Must expose a `make_env(n_envs: int, use_async_envs: bool)` function
59
+ - This function should return one of:
60
+ - A `gym.vector.VectorEnv` (most common)
61
+ - A single `gym.Env` (will be automatically wrapped)
62
+ - A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)
63
+
64
+ ### Optional Files
65
+
66
+ **`requirements.txt`**
67
+
68
+ - List any additional dependencies your environment needs
69
+ - Users will need to install these manually before loading your environment
70
+
71
+ **`README.md`**
72
+
73
+ - Document your environment: what task it implements, observation/action spaces, rewards, etc.
74
+ - Include usage examples and any special setup instructions
75
+
76
+ **`.gitignore`**
77
+
78
+ - Exclude unnecessary files from your repository
79
+
80
+ ### Example Repository Structure
81
+
82
+ ```
83
+ my-environment-repo/
84
+ ├── env.py # Main environment definition (required)
85
+ ├── requirements.txt # Dependencies (optional)
86
+ ├── README.md # Documentation (recommended)
87
+ ├── assets/ # Images, videos, etc. (optional)
88
+ │ └── demo.gif
89
+ └── configs/ # Config files if needed (optional)
90
+ └── task_config.yaml
91
+ ```
92
+
93
+ ## Creating Your Environment Repository
94
+
95
+ ### Step 1: Define Your Environment
96
+
97
+ Create an `env.py` file with a `make_env` function:
98
+
99
+ ```python
100
+ # env.py
101
+ import gymnasium as gym
102
+
103
+ def make_env(n_envs: int = 1, use_async_envs: bool = False):
104
+ """
105
+ Create vectorized environments for your custom task.
106
+
107
+ Args:
108
+ n_envs: Number of parallel environments
109
+ use_async_envs: Whether to use AsyncVectorEnv or SyncVectorEnv
110
+
111
+ Returns:
112
+ gym.vector.VectorEnv or dict mapping suite names to vectorized envs
113
+ """
114
+ def _make_single_env():
115
+ # Create your custom environment
116
+ return gym.make("CartPole-v1")
117
+
118
+ # Choose vector environment type
119
+ env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
120
+
121
+ # Create vectorized environment
122
+ vec_env = env_cls([_make_single_env for _ in range(n_envs)])
123
+
124
+ return vec_env
125
+ ```
126
+
127
+ ### Step 2: Test Locally
128
+
129
+ Before uploading, test your environment locally:
130
+
131
+ ```python
132
+ from lerobot.envs.utils import _load_module_from_path, _call_make_env, _normalize_hub_result
133
+
134
+ # Load your module
135
+ module = _load_module_from_path("./env.py")
136
+
137
+ # Test the make_env function
138
+ result = _call_make_env(module, n_envs=2, use_async_envs=False)
139
+ normalized = _normalize_hub_result(result)
140
+
141
+ # Verify it works
142
+ suite_name = next(iter(normalized))
143
+ env = normalized[suite_name][0]
144
+ obs, info = env.reset()
145
+ print(f"Observation shape: {obs.shape if hasattr(obs, 'shape') else type(obs)}")
146
+ env.close()
147
+ ```
148
+
149
+ ### Step 3: Upload to the Hub
150
+
151
+ Upload your repository to Hugging Face:
152
+
153
+ ```bash
154
+ # Install huggingface_hub if needed
155
+ pip install huggingface_hub
156
+
157
+ # Login to Hugging Face
158
+ huggingface-cli login
159
+
160
+ # Create a new repository
161
+ huggingface-cli repo create my-custom-env --type space --org my-org
162
+
163
+ # Initialize git and push
164
+ git init
165
+ git add .
166
+ git commit -m "Initial environment implementation"
167
+ git remote add origin https://huggingface.co/my-org/my-custom-env
168
+ git push -u origin main
169
+ ```
170
+
171
+ Alternatively, use the `huggingface_hub` Python API:
172
+
173
+ ```python
174
+ from huggingface_hub import HfApi
175
+
176
+ api = HfApi()
177
+
178
+ # Create repository
179
+ api.create_repo("my-custom-env", repo_type="space")
180
+
181
+ # Upload files
182
+ api.upload_folder(
183
+ folder_path="./my-env-folder",
184
+ repo_id="username/my-custom-env",
185
+ repo_type="space",
186
+ )
187
+ ```
188
+
189
+ ## Loading Environments from the Hub
190
+
191
+ ### Basic Usage
192
+
193
+ ```python
194
+ from lerobot.envs.factory import make_env
195
+
196
+ # Load from the hub
197
+ envs_dict = make_env(
198
+ "username/my-custom-env",
199
+ n_envs=4,
200
+ trust_remote_code=True
201
+ )
202
+
203
+ # Access the environment
204
+ suite_name = next(iter(envs_dict))
205
+ env = envs_dict[suite_name][0]
206
+
207
+ # Use it like any gym environment
208
+ obs, info = env.reset()
209
+ action = env.action_space.sample()
210
+ obs, reward, terminated, truncated, info = env.step(action)
211
+ ```
212
+
213
+ ### Advanced: Pinning to Specific Versions
214
+
215
+ For reproducibility and security, pin to a specific Git revision:
216
+
217
+ ```python
218
+ # Pin to a specific branch
219
+ env = make_env("username/my-env@main", trust_remote_code=True)
220
+
221
+ # Pin to a specific commit (recommended for papers/experiments)
222
+ env = make_env("username/my-env@abc123def456", trust_remote_code=True)
223
+
224
+ # Pin to a tag
225
+ env = make_env("username/my-env@v1.0.0", trust_remote_code=True)
226
+ ```
227
+
228
+ ### Custom File Paths
229
+
230
+ If your environment definition is not in `env.py`:
231
+
232
+ ```python
233
+ # Load from a custom file
234
+ env = make_env("username/my-env:custom_env.py", trust_remote_code=True)
235
+
236
+ # Combine with version pinning
237
+ env = make_env("username/my-env@v1.0:envs/task_a.py", trust_remote_code=True)
238
+ ```
239
+
240
+ ### Async Environments
241
+
242
+ For better performance with multiple environments:
243
+
244
+ ```python
245
+ envs_dict = make_env(
246
+ "username/my-env",
247
+ n_envs=8,
248
+ use_async_envs=True, # Use AsyncVectorEnv for parallel execution
249
+ trust_remote_code=True
250
+ )
251
+ ```
252
+
253
+ ## URL Format Reference
254
+
255
+ The hub URL format supports several patterns:
256
+
257
+ | Pattern | Description | Example |
258
+ | -------------------- | ------------------------------ | -------------------------------------- |
259
+ | `user/repo` | Load `env.py` from main branch | `make_env("lerobot/pusht-env")` |
260
+ | `user/repo@revision` | Load from specific revision | `make_env("lerobot/pusht-env@main")` |
261
+ | `user/repo:path` | Load custom file | `make_env("lerobot/envs:pusht.py")` |
262
+ | `user/repo@rev:path` | Revision + custom file | `make_env("lerobot/envs@v1:pusht.py")` |
263
+
264
+ ## Multi-Task Environments
265
+
266
+ For benchmarks with multiple tasks (like LIBERO), return a nested dictionary:
267
+
268
+ ```python
269
+ def make_env(n_envs: int = 1, use_async_envs: bool = False):
270
+ env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
271
+
272
+ # Return dict: {suite_name: {task_id: VectorEnv}}
273
+ return {
274
+ "suite_1": {
275
+ 0: env_cls([lambda: gym.make("Task1-v0") for _ in range(n_envs)]),
276
+ 1: env_cls([lambda: gym.make("Task2-v0") for _ in range(n_envs)]),
277
+ },
278
+ "suite_2": {
279
+ 0: env_cls([lambda: gym.make("Task3-v0") for _ in range(n_envs)]),
280
+ }
281
+ }
282
+ ```
283
+
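+ On the consumer side, the nested result can be iterated suite by suite. A short sketch (the repo id is illustrative):
+
+ ```python
+ from lerobot.envs.factory import make_env
+
+ envs_dict = make_env("username/my-multitask-env", n_envs=2, trust_remote_code=True)
+ for suite_name, tasks in envs_dict.items():
+     for task_id, vec_env in tasks.items():
+         obs, info = vec_env.reset(seed=0)
+         print(f"{suite_name}/{task_id}: {vec_env.num_envs} parallel envs")
+         vec_env.close()
+ ```
+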
284
+ ## Security Considerations
285
+
286
+ <Tip warning={true}>
287
+ **Important**: The `trust_remote_code=True` flag is required to execute
288
+ environment code from the Hub. This is by design for security.
289
+ </Tip>
290
+
291
+ When loading environments from the Hub:
292
+
293
+ 1. **Review the code first**: Visit the repository and inspect `env.py` before loading
294
+ 2. **Pin to commits**: Use specific commit hashes for reproducibility
295
+ 3. **Check dependencies**: Review `requirements.txt` for suspicious packages
296
+ 4. **Use trusted sources**: Prefer official organizations or well-known researchers
297
+ 5. **Sandbox if needed**: Run untrusted code in isolated environments (containers, VMs)
298
+
299
+ Example of safe usage:
300
+
301
+ ```python
302
+ # ❌ BAD: Loading without inspection
303
+ env = make_env("random-user/untrusted-env", trust_remote_code=True)
304
+
305
+ # ✅ GOOD: Review code, then pin to specific commit
306
+ # 1. Visit https://huggingface.co/trusted-org/verified-env
307
+ # 2. Review the env.py file
308
+ # 3. Copy the commit hash
309
+ env = make_env("trusted-org/verified-env@a1b2c3d4", trust_remote_code=True)
310
+ ```
311
+
312
+ ## Example: CartPole from the Hub
313
+
314
+ Here's a complete example using the reference CartPole environment:
315
+
316
+ ```python
317
+ from lerobot.envs.factory import make_env
318
+ import numpy as np
319
+
320
+ # Load the environment
321
+ envs_dict = make_env("lerobot/cartpole-env", n_envs=4, trust_remote_code=True)
322
+
323
+ # Get the vectorized environment
324
+ suite_name = next(iter(envs_dict))
325
+ env = envs_dict[suite_name][0]
326
+
327
+ # Run a simple episode
328
+ obs, info = env.reset()
329
+ done = np.zeros(env.num_envs, dtype=bool)
330
+ total_reward = np.zeros(env.num_envs)
331
+
332
+ while not done.all():
333
+ # Random policy
334
+ action = env.action_space.sample()
335
+ obs, reward, terminated, truncated, info = env.step(action)
336
+ total_reward += reward
337
+ done |= terminated | truncated  # latch envs that have finished at least once
338
+
339
+ print(f"Average reward: {total_reward.mean():.2f}")
340
+ env.close()
341
+ ```
342
+
343
+ ## Benefits of EnvHub
344
+
345
+ ### For Environment Authors
346
+
347
+ - **Easy distribution**: No PyPI packaging required
348
+ - **Version control**: Use Git for environment versioning
349
+ - **Rapid iteration**: Push updates instantly
350
+ - **Documentation**: Hub README renders beautifully
351
+ - **Community**: Reach LeRobot users directly
352
+
353
+ ### For Researchers
354
+
355
+ - **Quick experiments**: Load any environment in one line
356
+ - **Reproducibility**: Pin to specific commits
357
+ - **Discovery**: Browse environments on the Hub
358
+ - **No conflicts**: No need to install conflicting packages
359
+
360
+ ### For the Community
361
+
362
+ - **Growing ecosystem**: More diverse simulation tasks
363
+ - **Standardization**: Common `make_env` API
364
+ - **Collaboration**: Fork and improve existing environments
365
+ - **Accessibility**: Lower barrier to sharing research
366
+
367
+ ## Troubleshooting
368
+
369
+ ### "Refusing to execute remote code"
370
+
371
+ You must explicitly pass `trust_remote_code=True`:
372
+
373
+ ```python
374
+ env = make_env("user/repo", trust_remote_code=True)
375
+ ```
376
+
377
+ ### "Module X not found"
378
+
379
+ The hub environment has dependencies you need to install:
380
+
381
+ ```bash
382
+ # Check the repo's requirements.txt and install dependencies
383
+ pip install gymnasium numpy
384
+ ```
385
+
386
+ ### "make_env not found in module"
387
+
388
+ Your `env.py` must expose a `make_env` function:
389
+
390
+ ```python
391
+ def make_env(n_envs: int, use_async_envs: bool):
392
+ # Your implementation
393
+ pass
394
+ ```
395
+
396
+ ### Environment returns wrong type
397
+
398
+ The `make_env` function must return:
399
+
400
+ - A `gym.vector.VectorEnv`, or
401
+ - A single `gym.Env`, or
402
+ - A dict `{suite_name: {task_id: VectorEnv}}`
403
+
404
+ ## Best Practices
405
+
406
+ 1. **Document your environment**: Include observation/action space descriptions, reward structure, and termination conditions in your README
407
+ 2. **Add requirements.txt**: List all dependencies with versions
408
+ 3. **Test thoroughly**: Verify your environment works locally before pushing
409
+ 4. **Use semantic versioning**: Tag releases with version numbers
410
+ 5. **Add examples**: Include usage examples in your README
411
+ 6. **Keep it simple**: Minimize dependencies when possible
412
+ 7. **License your work**: Add a LICENSE file to clarify usage terms
413
+
414
+ ## Future Directions
415
+
416
+ The EnvHub ecosystem enables exciting possibilities:
417
+
418
+ - **GPU-accelerated physics**: Share Isaac Gym or Brax environments
419
+ - **Photorealistic rendering**: Distribute environments with advanced graphics
420
+ - **Multi-agent scenarios**: Complex interaction tasks
421
+ - **Real-world simulators**: Digital twins of physical setups
422
+ - **Procedural generation**: Infinite task variations
423
+ - **Domain randomization**: Pre-configured DR pipelines
424
+
425
+ As more researchers and developers contribute, the diversity and quality of available environments will grow, benefiting the entire robotics learning community.
426
+
427
+ ## See Also
428
+
429
+ - [Hugging Face Hub Documentation](https://huggingface.co/docs/hub/en/index)
430
+ - [Gymnasium Documentation](https://gymnasium.farama.org/index.html)
431
+ - [Example Hub Environment](https://huggingface.co/lerobot/cartpole-env)
docs/source/envhub_isaaclab_arena.mdx ADDED
@@ -0,0 +1,510 @@
1
+ # NVIDIA IsaacLab Arena & LeRobot
2
+
3
+ LeRobot EnvHub now supports **GPU-accelerated simulation** with IsaacLab Arena for policy evaluation at scale.
4
+ Train and evaluate imitation learning policies with high-fidelity simulation — all integrated into the LeRobot ecosystem.
5
+
6
+ <img
7
+ src="https://huggingface.co/nvidia/isaaclab-arena-envs/resolve/main/assets/Gr1OpenMicrowaveEnvironment.png"
8
+ alt="IsaacLab Arena - GR1 Microwave Environment"
9
+ style={{ maxWidth: "100%", borderRadius: "8px", marginBottom: "1rem" }}
10
+ />
11
+
12
+ [IsaacLab Arena](https://github.com/isaac-sim/IsaacLab-Arena) integrates with NVIDIA IsaacLab to provide:
13
+
14
+ - 🤖 **Humanoid embodiments**: GR1, G1, Galileo with various configurations
15
+ - 🎯 **Manipulation & loco-manipulation tasks**: Door opening, pick-and-place, button pressing, and more
16
+ - ⚡ **GPU-accelerated rollouts**: Parallel environment execution on NVIDIA GPUs
17
+ - 🖼️ **RTX Rendering**: Evaluate vision-based policies with realistic rendering, reflections and refractions
18
+ - 📦 **LeRobot-compatible datasets**: Ready for training with GR00T N1x, PI0, SmolVLA, ACT, and Diffusion policies
19
+ - 🔄 **EnvHub integration**: Load environments from HuggingFace EnvHub with one line
20
+
21
+ ## Installation
22
+
23
+ ### Prerequisites
24
+
25
+ Hardware requirements are shared with Isaac Sim, and are detailed in [Isaac Sim Requirements](https://docs.isaacsim.omniverse.nvidia.com/5.1.0/installation/requirements.html).
26
+
27
+ - NVIDIA GPU with CUDA support
28
+ - NVIDIA driver compatible with IsaacSim 5.1.0
29
+ - Linux (Ubuntu 22.04 / 24.04)
30
+
31
+ ### Setup
32
+
33
+ ```bash
34
+ # 1. Create conda environment
35
+ conda create -y -n lerobot-arena python=3.11
36
+ conda activate lerobot-arena
37
+ conda install -y -c conda-forge ffmpeg=7.1.1
38
+
39
+ # 2. Install Isaac Sim 5.1.0
40
+ pip install "isaacsim[all,extscache]==5.1.0" --extra-index-url https://pypi.nvidia.com
41
+
42
+ # Accept NVIDIA EULA (required)
43
+ export ACCEPT_EULA=Y
44
+ export PRIVACY_CONSENT=Y
45
+
46
+ # 3. Install IsaacLab 2.3.0
47
+ git clone https://github.com/isaac-sim/IsaacLab.git
48
+ cd IsaacLab
49
+ git checkout v2.3.0
50
+ ./isaaclab.sh -i
51
+ cd ..
52
+
53
+ # 4. Install IsaacLab Arena
54
+ git clone https://github.com/isaac-sim/IsaacLab-Arena.git
55
+ cd IsaacLab-Arena
56
+ git checkout release/0.1.1
57
+ pip install -e .
58
+ cd ..
59
+
60
+
61
+ # 5. Install LeRobot
62
+ git clone https://github.com/huggingface/lerobot.git
63
+ cd lerobot
64
+ pip install -e .
65
+ cd ..
66
+
67
+
68
+ # 6. Install additional dependencies
69
+ pip install onnxruntime==1.23.2 lightwheel-sdk==1.0.1 vuer[all]==0.0.70 qpsolvers==4.8.1
70
+ pip install numpy==1.26.0 # Isaac Sim 5.1 depends on numpy==1.26.0; this will be fixed in a future release
71
+ ```
72
+
73
+ ## Evaluating Policies
74
+
75
+ ### Pre-trained Policies
76
+
77
+ The following trained policies are available:
78
+
79
+ | Policy | Architecture | Task | Link |
80
+ | :-------------------------- | :----------- | :------------ | :----------------------------------------------------------------------- |
81
+ | pi05-arena-gr1-microwave | PI0.5 | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/pi05-arena-gr1-microwave) |
82
+ | smolvla-arena-gr1-microwave | SmolVLA | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/smolvla-arena-gr1-microwave) |
83
+
84
+ ### Evaluate SmolVLA
85
+
86
+ ```bash
87
+ pip install -e ".[smolvla]"
88
+ pip install numpy==1.26.0 # revert numpy to version 1.26
89
+ ```
90
+
91
+ ```bash
92
+ lerobot-eval \
93
+ --policy.path=nvidia/smolvla-arena-gr1-microwave \
94
+ --env.type=isaaclab_arena \
95
+ --env.hub_path=nvidia/isaaclab-arena-envs \
96
+ --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \
97
+ --policy.device=cuda \
98
+ --env.environment=gr1_microwave \
99
+ --env.embodiment=gr1_pink \
100
+ --env.object=mustard_bottle \
101
+ --env.headless=false \
102
+ --env.enable_cameras=true \
103
+ --env.video=true \
104
+ --env.video_length=10 \
105
+ --env.video_interval=15 \
106
+ --env.state_keys=robot_joint_pos \
107
+ --env.camera_keys=robot_pov_cam_rgb \
108
+ --trust_remote_code=True \
109
+ --eval.batch_size=1
110
+ ```
111
+
112
+ ### Evaluate PI0.5
113
+
114
+ ```bash
115
+ pip install -e ".[pi]"
116
+ pip install numpy==1.26.0 # revert numpy to version 1.26
117
+ ```
118
+
119
+ <Tip>PI0.5 requires disabling torch compile for evaluation:</Tip>
120
+
121
+ ```bash
122
+ TORCH_COMPILE_DISABLE=1 TORCHINDUCTOR_DISABLE=1 lerobot-eval \
123
+ --policy.path=nvidia/pi05-arena-gr1-microwave \
124
+ --env.type=isaaclab_arena \
125
+ --env.hub_path=nvidia/isaaclab-arena-envs \
126
+ --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \
127
+ --policy.device=cuda \
128
+ --env.environment=gr1_microwave \
129
+ --env.embodiment=gr1_pink \
130
+ --env.object=mustard_bottle \
131
+ --env.headless=false \
132
+ --env.enable_cameras=true \
133
+ --env.video=true \
134
+ --env.video_length=15 \
135
+ --env.video_interval=15 \
136
+ --env.state_keys=robot_joint_pos \
137
+ --env.camera_keys=robot_pov_cam_rgb \
138
+ --trust_remote_code=True \
139
+ --eval.batch_size=1
140
+ ```
141
+
142
+ <Tip>
143
+ To change the number of parallel environments, use the ```--eval.batch_size```
144
+ flag.
145
+ </Tip>
146
+
147
+ ### What to Expect
148
+
149
+ During evaluation, you will see a progress bar showing the running success rate:
150
+
151
+ ```
152
+ Stepping through eval batches: 8%|██████▍ | 4/50 [00:45<08:06, 10.58s/it, running_success_rate=25.0%]
153
+ ```
154
+
155
+ ### Video Recording
156
+
157
+ To enable video recording during evaluation, add the following flags to your command:
158
+
159
+ ```bash
160
+ --env.video=true \
161
+ --env.video_length=15 \
162
+ --env.video_interval=15
163
+ ```
164
+
165
+ For more details on video recording, see the [IsaacLab Recording Documentation](https://isaac-sim.github.io/IsaacLab/main/source/how-to/record_video.html).
166
+
167
+ <Tip>
168
+ When running headless with `--env.headless=true`, you must also enable cameras explicitly for camera-enabled environments:
169
+
170
+ ```bash
171
+ --env.headless=true --env.enable_cameras=true
172
+ ```
173
+
174
+ </Tip>
175
+
176
+ ### Output Directory
177
+
178
+ Evaluation videos are saved to the output directory with the following structure:
179
+
180
+ ```
181
+ outputs/eval/<date>/<timestamp>_<env>_<policy>/videos/<task>_<env_id>/eval_episode_<n>.mp4
182
+ ```
183
+
184
+ For example:
185
+
186
+ ```
187
+ outputs/eval/2026-01-02/14-38-01_isaaclab_arena_smolvla/videos/gr1_microwave_0/eval_episode_0.mp4
188
+ ```
189
+
190
+ ## Training Policies
191
+
192
+ To learn more about training policies with LeRobot, please refer to the training documentation:
193
+
194
+ - [SmolVLA](./smolvla)
195
+ - [Pi0.5](./pi05)
196
+ - [GR00T N1.5](./groot)
197
+
198
+ Sample IsaacLab Arena datasets are available on HuggingFace Hub for experimentation:
199
+
200
+ | Dataset | Description | Frames |
201
+ | :-------------------------------------------------------------------------------------------------------- | :------------------------- | :----- |
202
+ | [Arena-GR1-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-GR1-Manipulation-Task-v3) | GR1 microwave manipulation | ~4K |
203
+ | [Arena-G1-Loco-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-G1-Loco-Manipulation-Task) | G1 loco-manipulation | ~4K |
204
+
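+ These datasets load like any other LeRobot dataset. A minimal sketch, assuming the current `LeRobotDataset` API:
+
+ ```python
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ ds = LeRobotDataset("nvidia/Arena-GR1-Manipulation-Task-v3")
+ print(ds.num_episodes, ds.num_frames)  # episode and frame counts
+ print(ds[0].keys())  # feature keys of the first frame
+ ```
+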
205
+ ## Environment Configuration
206
+
207
+ ### Full Configuration Options
208
+
209
+ ```python
210
+ from lerobot.envs.configs import IsaaclabArenaEnv
211
+
212
+ config = IsaaclabArenaEnv(
213
+ # Environment selection
214
+ environment="gr1_microwave", # Task environment
215
+ embodiment="gr1_pink", # Robot embodiment
216
+ object="power_drill", # Object to manipulate
217
+
218
+ # Simulation settings
219
+ episode_length=300, # Max steps per episode
220
+ headless=True, # Run without GUI
221
+ device="cuda:0", # GPU device
222
+ seed=42, # Random seed
223
+
224
+ # Observation configuration
225
+ state_keys="robot_joint_pos", # State observation keys (comma-separated)
226
+ camera_keys="robot_pov_cam_rgb", # Camera observation keys (comma-separated)
227
+ state_dim=54, # Expected state dimension
228
+ action_dim=36, # Expected action dimension
229
+ camera_height=512, # Camera image height
230
+ camera_width=512, # Camera image width
231
+ enable_cameras=True, # Enable camera observations
232
+
233
+ # Video recording
234
+ video=False, # Enable video recording
235
+ video_length=100, # Frames per video
236
+ video_interval=200, # Steps between recordings
237
+
238
+ # Advanced
239
+ mimic=False, # Enable mimic mode
240
+ teleop_device=None, # Teleoperation device
241
+ disable_fabric=False, # Disable fabric optimization
242
+ enable_pinocchio=True, # Enable Pinocchio for IK
243
+ )
244
+ ```
245
+
246
+ ### Using Environment Hub directly for advanced usage
247
+
248
+ Create a file called `test_env_load_arena.py` or [download from the EnvHub](https://huggingface.co/nvidia/isaaclab-arena-envs/blob/main/tests/test_env_load_arena.py):
249
+
250
+ ```python
251
+ import logging
252
+ from dataclasses import asdict
253
+ from pprint import pformat
254
+ import torch
255
+ import tqdm
256
+ from lerobot.configs import parser
257
+ from lerobot.configs.eval import EvalPipelineConfig
258
+
259
+
260
+ @parser.wrap()
261
+ def main(cfg: EvalPipelineConfig):
262
+ """Run random action rollout for IsaacLab Arena environment."""
263
+ logging.info(pformat(asdict(cfg)))
264
+
265
+ from lerobot.envs.factory import make_env
266
+
267
+ env_dict = make_env(
268
+ cfg.env,
269
+ n_envs=cfg.env.num_envs,
270
+ trust_remote_code=True,
271
+ )
272
+ env = next(iter(env_dict.values()))[0]
273
+ env.reset()
274
+ for _ in tqdm.tqdm(range(cfg.env.episode_length)):
275
+ with torch.inference_mode():
276
+ actions = env.action_space.sample()
277
+ obs, rewards, terminated, truncated, info = env.step(actions)
278
+ if terminated.any() or truncated.any():
279
+ obs, info = env.reset()
280
+ env.close()
281
+
282
+
283
+ if __name__ == "__main__":
284
+ main()
285
+ ```
286
+
287
+ Run with:
288
+
289
+ ```bash
290
+ python test_env_load_arena.py \
291
+ --env.environment=g1_locomanip_pnp \
292
+ --env.embodiment=gr1_pink \
293
+ --env.object=cracker_box \
294
+ --env.num_envs=4 \
295
+ --env.enable_cameras=true \
296
+ --env.seed=1000 \
297
+ --env.video=true \
298
+ --env.video_length=10 \
299
+ --env.video_interval=15 \
300
+ --env.headless=false \
301
+ --env.hub_path=nvidia/isaaclab-arena-envs \
302
+ --env.type=isaaclab_arena
303
+ ```
304
+
305
+ ## Creating New Environments
306
+
307
+ First create a new IsaacLab Arena environment by following the [IsaacLab Arena Documentation](https://isaac-sim.github.io/IsaacLab-Arena/release/0.1.1/index.html).
308
+
309
+ Clone our EnvHub repo:
310
+
311
+ ```bash
312
+ git clone https://huggingface.co/nvidia/isaaclab-arena-envs
313
+ ```
314
+
315
+ Modify the `example_envs.yaml` file based on your new environment.
316
+ [Upload](./envhub#step-3-upload-to-the-hub) your modified repo to HuggingFace EnvHub.
317
+
318
+ <Tip>
319
+ Your IsaacLab Arena environment code must be locally available during
320
+ evaluation. Users can clone your environment repository separately, or you can
321
+ bundle the environment code and assets directly in your EnvHub repo.
322
+ </Tip>
323
+
324
+ Then, when evaluating, use your new environment:
325
+
326
+ ```bash
327
+ lerobot-eval \
328
+ --env.hub_path=<your-env-hub-path>/isaaclab-arena-envs \
329
+ --env.environment=<your new environment> \
330
+ ...other flags...
331
+ ```
332
+
333
+ We look forward to your contributions!
334
+
335
+ ## Troubleshooting
336
+
337
+ ### CUDA out of memory
338
+
339
+ Reduce `batch_size` or use a GPU with more VRAM:
340
+
341
+ ```bash
342
+ --eval.batch_size=1
343
+ ```
344
+
345
+ ### EULA not accepted
346
+
347
+ Set environment variables before running:
348
+
349
+ ```bash
350
+ export ACCEPT_EULA=Y
351
+ export PRIVACY_CONSENT=Y
352
+ ```
353
+
354
+ ### Video recording not working
355
+
356
+ Enable cameras when running headless:
357
+
358
+ ```bash
359
+ --env.video=true --env.enable_cameras=true --env.headless=true
360
+ ```
361
+
362
+ ### Policy output dimension mismatch
363
+
364
+ Ensure `action_dim` matches your policy:
365
+
366
+ ```bash
367
+ --env.action_dim=36
368
+ ```
369
+
370
+ ### libGLU.so.1 Errors during Isaac Sim initialization
371
+
372
+ Ensure the following system libraries are installed; this error typically occurs on headless machines:
373
+
374
+ ```bash
375
+ sudo apt update && sudo apt install -y libglu1-mesa libxt6
376
+ ```
377
+
378
+ ## See Also
379
+
380
+ - [EnvHub Documentation](./envhub.mdx) - General EnvHub usage
381
+ - [IsaacLab Arena GitHub](https://github.com/isaac-sim/IsaacLab-Arena)
382
+ - [IsaacLab Documentation](https://isaac-sim.github.io/IsaacLab/)
383
+
384
+ ## Lightwheel LW-BenchHub
385
+
386
+ [Lightwheel](https://www.lightwheel.ai) is bringing `Lightwheel-Libero-Tasks` and `Lightwheel-RoboCasa-Tasks` with 268 tasks to the LeRobot ecosystem.
387
+ LW-BenchHub collects and generates large-scale teleoperation datasets that comply with the LeRobot specification, enabling out-of-the-box training and evaluation workflows.
388
+ With the unified interface provided by EnvHub, developers can quickly build end-to-end experimental pipelines.
389
+
390
+ ### Install
391
+
392
+ Assuming you followed the [Installation](#installation) steps, you can install LW-BenchHub with:
393
+
394
+ ```bash
395
+ conda install pinocchio -c conda-forge -y
396
+ pip install numpy==1.26.0 # revert numpy to version 1.26
397
+
398
+ sudo apt-get install git-lfs && git lfs install
399
+
400
+ git clone https://github.com/LightwheelAI/lw_benchhub
401
+ git lfs pull # Ensure LFS files (e.g., .usd assets) are downloaded
402
+
403
+ cd lw_benchhub
404
+ pip install -e .
405
+ ```
406
+
407
+ For more detailed instructions, please refer to the [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/usage/Installation).
408
+
409
+ ### Lightwheel Tasks Dataset
410
+
411
+ LW-BenchHub datasets are available on HuggingFace Hub:
412
+
413
+ | Dataset | Description | Tasks | Frames |
414
+ | :------------------------------------------------------------------------------------------------------------ | :---------------------- | :---- | :----- |
415
+ | [Lightwheel-Tasks-X7S](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-X7S) | X7S LIBERO and RoboCasa | 117 | ~10.3M |
416
+ | [Lightwheel-Tasks-Double-Piper](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-Double-Piper) | Double-Piper LIBERO | 130 | ~6.0M |
417
+ | [Lightwheel-Tasks-G1-Controller](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-Controller) | G1-Controller LIBERO | 62 | ~2.7M |
418
+ | [Lightwheel-Tasks-G1-WBC](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-WBC) | G1-WBC RoboCasa | 32 | ~1.5M |
419
+
420
+ For training policies, refer to the [Training Policies](#training-policies) section.
421
+
422
+ ### Evaluating Policies
423
+
424
+ #### Pre-trained Policies
425
+
426
+ The following trained policies are available:
427
+
428
+ | Policy | Architecture | Task | Layout | Robot | Link |
429
+ | :----------------------- | :----------- | :----------------------------- | :--------- | :-------------- | :------------------------------------------------------------------------------------ |
430
+ | smolvla-double-piper-pnp | SmolVLA | L90K1PutTheBlackBowlOnThePlate | libero-1-1 | DoublePiper-Abs | [HuggingFace](https://huggingface.co/LightwheelAI/smolvla-double-piper-pnp/tree/main) |
431
+
432
+ #### Evaluate SmolVLA
433
+
434
+ ```bash
435
+ lerobot-eval \
436
+ --policy.path=LightwheelAI/smolvla-double-piper-pnp \
437
+ --env.type=isaaclab_arena \
438
+ --rename_map='{"observation.images.left_hand_camera_rgb": "observation.images.left_hand", "observation.images.right_hand_camera_rgb": "observation.images.right_hand", "observation.images.first_person_camera_rgb": "observation.images.first_person"}' \
439
+ --env.hub_path=LightwheelAI/lw_benchhub_env \
440
+ --env.kwargs='{"config_path": "configs/envhub/example.yml"}' \
441
+ --trust_remote_code=true \
442
+ --env.state_keys=joint_pos \
443
+ --env.action_dim=12 \
444
+ --env.camera_keys=left_hand_camera_rgb,right_hand_camera_rgb,first_person_camera_rgb \
445
+ --policy.device=cuda \
446
+ --eval.batch_size=10 \
447
+ --eval.n_episodes=100
448
+ ```
449
+
450
+ ### Environment Configuration
451
+
452
+ Evaluation can be quickly launched by modifying the `robot`, `task`, and `layout` settings in the configuration file.
453
+
454
+ #### Full Configuration Options
455
+
456
+ ```yml
457
+ # =========================
458
+ # Basic Settings
459
+ # =========================
460
+ disable_fabric: false
461
+ device: cuda:0
462
+ sensitivity: 1.0
463
+ step_hz: 50
464
+ enable_cameras: true
465
+ execute_mode: eval
466
+ episode_length_s: 20.0 # Episode length in seconds, increase if episodes timeout during eval
467
+
468
+ # =========================
469
+ # Robot Settings
470
+ # =========================
471
+ robot: DoublePiper-Abs # Robot type, DoublePiper-Abs, X7S-Abs, G1-Controller or G1-Controller-DecoupledWBC
472
+ robot_scale: 1.0
473
+
474
+ # =========================
475
+ # Task & Scene Settings
476
+ # =========================
477
+ task: L90K1PutTheBlackBowlOnThePlate # Task name
478
+ scene_backend: robocasa
479
+ task_backend: robocasa
480
+ debug_assets: null
481
+ layout: libero-1-1 # Layout and style ID
482
+ sources:
483
+ - objaverse
484
+ - lightwheel
485
+ - aigen_objs
486
+ object_projects: []
487
+ usd_simplify: false
488
+ seed: 42
489
+
490
+ # =========================
491
+ # Object Placement Retry Settings
492
+ # =========================
493
+ max_scene_retry: 4
494
+ max_object_placement_retry: 3
495
+
496
+ resample_objects_placement_on_reset: true
497
+ resample_robot_placement_on_reset: true
498
+
499
+ # =========================
500
+ # Replay Configuration Settings
501
+ # =========================
502
+ replay_cfgs:
503
+ add_camera_to_observation: true
504
+ render_resolution: [640, 480]
505
+ ```
506
+
507
+ ### See Also
508
+
509
+ - [LW-BenchHub GitHub](https://github.com/LightwheelAI/LW-BenchHub)
510
+ - [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/)
docs/source/envhub_leisaac.mdx ADDED
@@ -0,0 +1,302 @@
1
+ # LeIsaac × LeRobot EnvHub
2
+
3
+ LeRobot EnvHub now supports **imitation learning in simulation** with LeIsaac.
4
+ Spin up everyday manipulation tasks, teleoperate the robot, collect demos, push them to the Hub, and train policies in LeRobot — all in one loop.
5
+
6
+ [LeIsaac](https://github.com/LightwheelAI/leisaac) integrates with IsaacLab and the SO101 Leader/Follower setup to provide:
7
+
8
+ - 🕹️ **Teleoperation-first workflows** for data collection
9
+ - 📦 **Built-in data conversion** ready for LeRobot training
10
+ - 🤖 **Everyday skills** like picking oranges, lifting cubes, cleaning tables, and folding cloth
11
+ - ☁️ **Ongoing upgrades** from [LightWheel](https://lightwheel.ai/): cloud simulation, EnvHub support, Sim2Real tooling, and more
12
+
13
+ Below you’ll find the currently supported LeIsaac tasks exposed through LeRobot EnvHub.
14
+
15
+ # Available Environments
16
+
17
+ The following table lists all available tasks and environments in the LeIsaac × LeRobot EnvHub. You can also get the latest list of environments by running the following command:
18
+
19
+ ```bash
20
+ python scripts/environments/list_envs.py
21
+ ```
22
+
23
+ | Task | Environment ID | Task Description | Related Robot |
24
+ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------- |
25
+ | <video src="https://github.com/user-attachments/assets/466eddff-f720-4f99-94d5-5e123e4c302c" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-PickOrange-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/pick_orange_env_cfg.py)<br /><br />[LeIsaac-SO101-PickOrange-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/direct/pick_orange_env.py) | Pick three oranges and put them into the plate, then reset the arm to rest state. | Single-Arm SO101 Follower |
26
+ | <video src="https://github.com/user-attachments/assets/1e4eb83a-0b38-40fb-a0b2-ddb0fe201e6d" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-LiftCube-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/lift_cube_env_cfg.py)<br /><br />[LeIsaac-SO101-LiftCube-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/direct/lift_cube_env.py) | Lift the red cube up. | Single-Arm SO101 Follower |
27
+ | <video src="https://github.com/user-attachments/assets/e49d8f1c-dcc9-412b-a88f-100680d8a45b" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-CleanToyTable-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_env_cfg.py)<br /><br />[LeIsaac-SO101-CleanToyTable-BiArm-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_bi_arm_env_cfg.py)<br /><br />[LeIsaac-SO101-CleanToyTable-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/direct/clean_toy_table_bi_arm_env.py) | Pick two letter e objects into the box, and reset the arm to rest state. | Single-Arm SO101 Follower<br /><br />Bi-Arm SO101 Follower |
28
+ | <video src="https://github.com/user-attachments/assets/e29a0f8a-9286-4ce6-b45d-342c3d3ba754" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-FoldCloth-BiArm-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/fold_cloth_bi_arm_env_cfg.py)<br /><br />[LeIsaac-SO101-FoldCloth-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/direct/fold_cloth_bi_arm_env.py) | Fold the cloth, and reset the arm to rest state.<br /><br />_Note: Only the DirectEnv support check_success in this task._ | Bi-Arm SO101 Follower |
29
+
30
+ # Load LeIsaac directly in LeRobot with one line of code
31
+
32
+ > EnvHub: Share LeIsaac environments through HuggingFace
33
+
34
+ [EnvHub](https://huggingface.co/docs/lerobot/envhub) is our reproducible environment hub: spin up a packaged simulation with one line, experiment immediately, and publish your own tasks for the community.
35
+
36
+ LeIsaac offers EnvHub support so you can consume or share tasks with only a few commands.
37
+
38
+ <video
39
+ controls
40
+ src="https://github.com/user-attachments/assets/687666f5-ebe0-421d-84a0-eb86116ac5f8"
41
+ style={{ width: "100%", maxWidth: "960px", borderRadius: "8px" }}
42
+ />
43
+
44
+ ## How to Get Started: Environment Setup
45
+
46
+ Run the following commands to set up your environment:
47
+
48
+ ```bash
49
+ # Refer to Getting Started/Installation to install leisaac first
50
+ conda create -n leisaac_envhub python=3.11
51
+ conda activate leisaac_envhub
52
+
53
+ conda install -c "nvidia/label/cuda-12.8.1" cuda-toolkit
54
+ pip install -U torch==2.7.0 torchvision==0.22.0 --index-url https://download.pytorch.org/whl/cu128
55
+ pip install 'leisaac[isaaclab] @ git+https://github.com/LightwheelAI/leisaac.git#subdirectory=source/leisaac' --extra-index-url https://pypi.nvidia.com
56
+
57
+ # Install lerobot
58
+ pip install lerobot==0.4.1
59
+
60
+ # Fix numpy version
61
+ pip install numpy==1.26.0
62
+ ```
63
+
64
+ ## Usage Example
65
+
66
+ EnvHub exposes every LeIsaac-supported task in a uniform interface. The examples below load `so101_pick_orange` and demonstrate a random-action rollout and an interactive teleoperation session.
67
+
68
+ ### Random Action
69
+
70
+ <details>
71
+ <summary>Click to expand code example</summary>
72
+
73
+ ```python
74
+ # envhub_random_action.py
75
+
76
+ import torch
77
+ from lerobot.envs.factory import make_env
78
+
79
+ # Load from the hub
80
+ envs_dict = make_env("LightwheelAI/leisaac_env:envs/so101_pick_orange.py", n_envs=1, trust_remote_code=True)
81
+
82
+ # Access the environment
83
+ suite_name = next(iter(envs_dict))
84
+ sync_vector_env = envs_dict[suite_name][0]
85
+ # retrieve the isaac environment from the sync vector env
86
+ env = sync_vector_env.envs[0].unwrapped
87
+
88
+ # Use it like any gym environment
89
+ obs, info = env.reset()
90
+
91
+ while True:
92
+ action = torch.tensor(env.action_space.sample())
93
+ obs, reward, terminated, truncated, info = env.step(action)
94
+ if terminated or truncated:
95
+ obs, info = env.reset()
96
+
97
+ env.close()
98
+ ```
99
+
100
+ </details>
101
+
102
+ ```bash
103
+ python envhub_random_action.py
104
+ ```
105
+
106
+ You should see the SO101 arm swinging under purely random commands.
107
+
108
+ ### Teleoperation
109
+
110
+ LeRobot’s teleoperation stack can drive the simulated arm.
111
+
112
+ Connect the SO101 Leader controller and run the calibration command below.
113
+
114
+ ```bash
115
+ lerobot-calibrate \
116
+ --teleop.type=so101_leader \
117
+ --teleop.port=/dev/ttyACM0 \
118
+ --teleop.id=leader
119
+ ```
120
+
121
+ Then launch the teleop script.
122
+
123
+ <details>
124
+ <summary>Click to expand code example</summary>
125
+
126
+ ```python
127
+ # envhub_teleop_example.py
128
+
129
+ import logging
130
+ import time
131
+ import gymnasium as gym
132
+
133
+ from dataclasses import asdict, dataclass
134
+ from pprint import pformat
135
+
136
+ from lerobot.teleoperators import ( # noqa: F401
137
+ Teleoperator,
138
+ TeleoperatorConfig,
139
+ make_teleoperator_from_config,
140
+ so_leader,
141
+ bi_so_leader,
142
+ )
143
+ from lerobot.utils.robot_utils import precise_sleep
144
+ from lerobot.utils.utils import init_logging
145
+ from lerobot.envs.factory import make_env
146
+
147
+
148
+ @dataclass
149
+ class TeleoperateConfig:
150
+     teleop: TeleoperatorConfig
151
+     env_name: str = "so101_pick_orange"
152
+     fps: int = 60
153
+
154
+
155
+ @dataclass
156
+ class EnvWrap:
157
+     env: gym.Env
158
+
159
+
160
+ def make_env_from_leisaac(env_name: str = "so101_pick_orange"):
161
+     envs_dict = make_env(
162
+         f'LightwheelAI/leisaac_env:envs/{env_name}.py',
163
+         n_envs=1,
164
+         trust_remote_code=True
165
+     )
166
+     suite_name = next(iter(envs_dict))
167
+     sync_vector_env = envs_dict[suite_name][0]
168
+     env = sync_vector_env.envs[0].unwrapped
169
+
170
+     return env
171
+
172
+
173
+ def teleop_loop(teleop: Teleoperator, env: gym.Env, fps: int):
174
+     from leisaac.devices.action_process import preprocess_device_action
175
+     from leisaac.assets.robots.lerobot import SO101_FOLLOWER_MOTOR_LIMITS
176
+     from leisaac.utils.env_utils import dynamic_reset_gripper_effort_limit_sim
177
+
178
+     env_wrap = EnvWrap(env=env)
179
+
180
+     obs, info = env.reset()
181
+     while True:
182
+         loop_start = time.perf_counter()
183
+         if env.cfg.dynamic_reset_gripper_effort_limit:
184
+             dynamic_reset_gripper_effort_limit_sim(env, 'so101leader')
185
+
186
+         raw_action = teleop.get_action()
187
+         processed_action = preprocess_device_action(
188
+             dict(
189
+                 so101_leader=True,
190
+                 joint_state={
191
+                     k.removesuffix(".pos"): v for k, v in raw_action.items()},
192
+                 motor_limits=SO101_FOLLOWER_MOTOR_LIMITS),
193
+             env_wrap
194
+         )
195
+         obs, reward, terminated, truncated, info = env.step(processed_action)
196
+         if terminated or truncated:
197
+             obs, info = env.reset()
198
+
199
+         dt_s = time.perf_counter() - loop_start
200
+         precise_sleep(max(1 / fps - dt_s, 0.0))
201
+         loop_s = time.perf_counter() - loop_start
202
+         print(f"\ntime: {loop_s * 1e3:.2f}ms ({1 / loop_s:.0f} Hz)")
203
+
204
+
205
+ def teleoperate(cfg: TeleoperateConfig):
206
+     init_logging()
207
+     logging.info(pformat(asdict(cfg)))
208
+
209
+     teleop = make_teleoperator_from_config(cfg.teleop)
210
+     env = make_env_from_leisaac(cfg.env_name)
211
+
212
+     teleop.connect()
213
+     if hasattr(env, 'initialize'):
214
+         env.initialize()
215
+     try:
216
+         teleop_loop(teleop=teleop, env=env, fps=cfg.fps)
217
+     except KeyboardInterrupt:
218
+         pass
219
+     finally:
220
+         teleop.disconnect()
221
+         env.close()
222
+
223
+
224
+ def main():
225
+     teleoperate(TeleoperateConfig(
226
+         teleop=so_leader.SO101LeaderConfig(
227
+             port="/dev/ttyACM0",
228
+             id='leader',
229
+             use_degrees=False,
230
+         ),
231
+         env_name="so101_pick_orange",
232
+         fps=60,
233
+     ))
234
+
235
+
236
+ if __name__ == "__main__":
237
+     main()
238
+
239
+ ```
240
+
241
+ </details>
242
+
243
+ ```bash
244
+ python envhub_teleop_example.py
245
+ ```
246
+
247
+ Running the script lets you operate the simulated arm using the physical Leader device.
248
+
249
+ ## ☁️ Cloud Simulation (No GPU Required)
250
+
251
+ Don’t have a local GPU or the right drivers? No problem! You can run LeIsaac entirely in the cloud with zero setup.
252
+ LeIsaac works out-of-the-box on **NVIDIA Brev**, giving you a fully configured environment directly in your browser.
253
+
254
+ 👉 **Start here:** [https://lightwheelai.github.io/leisaac/docs/cloud_simulation/nvidia_brev](https://lightwheelai.github.io/leisaac/docs/cloud_simulation/nvidia_brev)
255
+
256
+ Once your instance is deployed, simply open the link for **port 80 (HTTP)** to launch **Visual Studio Code Server** (default password: `password`). From there, you can run simulations, edit code, and visualize IsaacLab environments — all from your web browser.
257
+
258
+ **No GPU, no drivers, no local installation. Just click and run.**
259
+
260
+ ## Additional Notes
261
+
262
+ We keep EnvHub coverage aligned with the LeIsaac task suite. Currently supported:
263
+
264
+ - `so101_pick_orange`
265
+ - `so101_lift_cube`
266
+ - `so101_clean_toytable`
267
+ - `bi_so101_fold_cloth`
268
+
269
+ Switch tasks by targeting a different script when calling `make_env`, for example:
270
+
271
+ ```python
272
+ envs_dict_pick_orange = make_env("LightwheelAI/leisaac_env:envs/so101_pick_orange.py", n_envs=1, trust_remote_code=True)
273
+ envs_dict_lift_cube = make_env("LightwheelAI/leisaac_env:envs/so101_lift_cube.py", n_envs=1, trust_remote_code=True)
274
+ envs_dict_clean_toytable = make_env("LightwheelAI/leisaac_env:envs/so101_clean_toytable.py", n_envs=1, trust_remote_code=True)
275
+ envs_dict_fold_cloth = make_env("LightwheelAI/leisaac_env:envs/bi_so101_fold_cloth.py", n_envs=1, trust_remote_code=True)
276
+ ```
277
+
278
+ Note: when working with `bi_so101_fold_cloth`, call `initialize()` immediately after retrieving the env before performing any other operations:
279
+
280
+ <details>
281
+ <summary>Click to expand code example</summary>
282
+
283
+ ```python
284
+ import torch
285
+ from lerobot.envs.factory import make_env
286
+
287
+ # Load from the hub
288
+ envs_dict = make_env("LightwheelAI/leisaac_env:envs/bi_so101_fold_cloth.py", n_envs=1, trust_remote_code=True)
289
+
290
+ # Access the environment
291
+ suite_name = next(iter(envs_dict))
292
+ sync_vector_env = envs_dict[suite_name][0]
293
+ # retrieve the isaac environment from the sync vector env
294
+ env = sync_vector_env.envs[0].unwrapped
295
+
296
+ # NOTE: initialize() first
297
+ env.initialize()
298
+
299
+ # other operations with the env...
300
+ ```
301
+
302
+ </details>
docs/source/feetech.mdx ADDED
@@ -0,0 +1,71 @@
1
+ # Feetech Motor Firmware Update
2
+
3
+ This tutorial guides you through updating the firmware of Feetech motors using the official Feetech software.
4
+
5
+ ## Prerequisites
6
+
7
+ - Windows computer (Feetech software is only available for Windows)
8
+ - Feetech motor control board
9
+ - USB cable to connect the control board to your computer
10
+ - Feetech motors connected to the control board
11
+
12
+ ## Step 1: Download Feetech Software
13
+
14
+ 1. Visit the official Feetech software download page: [https://www.feetechrc.com/software.html](https://www.feetechrc.com/software.html)
15
+ 2. Download the latest version of the Feetech debugging software (FD)
16
+ 3. Install the software on your Windows computer
17
+
18
+ ## Step 2: Hardware Setup
19
+
20
+ 1. Connect your Feetech motors to the motor control board
21
+ 2. Connect the motor control board to your Windows computer via USB cable
22
+ 3. Ensure power is supplied to the motors
23
+
24
+ ## Step 3: Configure Connection
25
+
26
+ 1. Launch the Feetech debugging software
27
+ 2. Select the correct COM port from the port dropdown menu
28
+ - If unsure which port to use, check Windows Device Manager under "Ports (COM & LPT)"
29
+ 3. Set the appropriate baud rate (typically 1000000 for most Feetech motors)
30
+ 4. Click "Open" to establish communication with the control board
31
+
32
+ ## Step 4: Scan for Motors
33
+
34
+ 1. Once connected, click the "Search" button to detect all connected motors
35
+ 2. The software will automatically discover and list all motors on the bus
36
+ 3. Each motor will appear with its ID number
37
+
38
+ ## Step 5: Update Firmware
39
+
40
+ For each motor you want to update:
41
+
42
+ 1. **Select the motor** from the list by clicking on it
43
+ 2. **Click on the "Upgrade" tab**
44
+ 3. **Click on the "Online" button**:
45
+ - If a firmware update is available, it will be displayed in the box
46
+ 4. **Click on the "Upgrade" button**:
47
+ - The update progress will be displayed
48
+
49
+ ## Step 6: Verify Update
50
+
51
+ 1. After the update completes, the software should automatically refresh the motor information
52
+ 2. Verify that the firmware version has been updated to the expected version
53
+
54
+ ## Important Notes
55
+
56
+ ⚠️ **Warning**: Do not disconnect power or USB during a firmware update; doing so can brick the motor.
57
+
58
+ ## Bonus: Motor Debugging on Linux/macOS
59
+
60
+ For debugging purposes only, you can use the open-source Feetech Debug Tool:
61
+
62
+ - **Repository**: [FT_SCServo_Debug_Qt](https://github.com/CarolinePascal/FT_SCServo_Debug_Qt/tree/fix/port-search-timer)
63
+
64
+ ### Installation Instructions
65
+
66
+ Follow the instructions in the repository to install the tool. On Ubuntu you can install it directly; on macOS you need to build it from source.
67
+
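+ If you are unsure which serial port the control board enumerated as, a small pyserial sketch can list the candidates (this assumes `pyserial` is installed via `pip install pyserial`):
+
+ ```python
+ # List candidate serial ports for the motor control board on Linux/macOS
+ from serial.tools import list_ports
+
+ for port in list_ports.comports():
+     print(port.device, "-", port.description)
+ ```
+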
68
+ **Limitations:**
69
+
70
+ - This tool is for debugging and parameter adjustment only
71
+ - Firmware updates must still be done on Windows with official Feetech software
docs/source/groot.mdx ADDED
1
+ # GR00T N1.5 Policy
2
+
3
+ GR00T N1.5 is an open foundation model from NVIDIA designed for generalized humanoid robot reasoning and skills. It is a cross-embodiment model that accepts multimodal input, including language and images, to perform manipulation tasks in diverse environments.
4
+
5
+ This document outlines the specifics of its integration and usage within the LeRobot framework.
6
+
7
+ ## Model Overview
8
+
9
+ NVIDIA Isaac GR00T N1.5 is an upgraded version of the GR00T N1 foundation model. It is built to improve generalization and language-following abilities for humanoid robots.
10
+
11
+ Developers and researchers can post-train GR00T N1.5 with their own real or synthetic data to adapt it for specific humanoid robots or tasks.
12
+
13
+ GR00T N1.5 (specifically the GR00T-N1.5-3B model) is built using pre-trained vision and language encoders. It utilizes a flow matching action transformer to model a chunk of actions, conditioned on vision, language, and proprioception.
14
+
15
+ <img
16
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-groot-paper1%20(1).png"
17
+ alt="An overview of GR00T"
18
+ width="80%"
19
+ />
20
+
21
+ Its strong performance comes from being trained on an expansive and diverse humanoid dataset, which includes:
22
+
23
+ - Real captured data from robots.
24
+ - Synthetic data generated using NVIDIA Isaac GR00T Blueprint.
25
+ - Internet-scale video data.
26
+
27
+ This approach allows the model to be highly adaptable through post-training for specific embodiments, tasks, and environments.
28
+
29
+ ## Installation Requirements
30
+
31
+ As of today, GR00T N1.5 requires Flash Attention for its internal workings.
32
+
33
+ We are working on making this optional, but in the meantime this means an extra installation step is required and the policy can only be used on CUDA-enabled devices.
34
+
35
+ 1. Follow the Environment Setup section of our [Installation Guide](./installation). **Attention:** don't install `lerobot` in this step.
36
+ 2. Install [Flash Attention](https://github.com/Dao-AILab/flash-attention) by running:
37
+
38
+ ```bash
39
+ # Check https://pytorch.org/get-started/locally/ for your system
40
+ pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX
41
+ pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies
42
+ pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation
43
+ python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')"
44
+ ```
45
+
46
+ 3. Install LeRobot by running:
47
+
48
+ ```bash
49
+ pip install "lerobot[groot]"
50
+ ```
51
+
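+ Since GR00T currently runs only on CUDA-enabled devices, it is worth verifying your setup before training (a simple check using standard PyTorch APIs):
+
+ ```python
+ # Verify that a CUDA device is visible before launching training
+ import torch
+
+ assert torch.cuda.is_available(), "GR00T currently requires a CUDA-enabled GPU"
+ print(torch.cuda.get_device_name(0))
+ ```
+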
52
+ ## Usage
53
+
54
+ To use GR00T in your LeRobot configuration, specify the policy type as:
55
+
56
+ ```bash
57
+ --policy.type=groot
58
+ ```
59
+
60
+ ## Training
61
+
62
+ ### Training Command Example
63
+
64
+ Here's a complete training command for finetuning the base GR00T model on your own dataset:
65
+
66
+ ```bash
67
+ # Using a multi-GPU setup
68
+ accelerate launch \
69
+ --multi_gpu \
70
+ --num_processes=$NUM_GPUS \
71
+ $(which lerobot-train) \
72
+ --output_dir=$OUTPUT_DIR \
73
+ --save_checkpoint=true \
74
+ --batch_size=$BATCH_SIZE \
75
+ --steps=$NUM_STEPS \
76
+ --save_freq=$SAVE_FREQ \
77
+ --log_freq=$LOG_FREQ \
78
+ --policy.push_to_hub=true \
79
+ --policy.type=groot \
80
+ --policy.repo_id=$REPO_ID \
81
+ --policy.tune_diffusion_model=false \
82
+ --dataset.repo_id=$DATASET_ID \
83
+ --wandb.enable=true \
84
+ --wandb.disable_artifact=true \
85
+ --job_name=$JOB_NAME
86
+ ```
87
+
88
+ ## Performance Results
89
+
90
+ ### Libero Benchmark Results
91
+
92
+ > [!NOTE]
93
+ > Follow our instructions for Libero usage: [Libero](./libero)
94
+
95
+ GR00T has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the GR00T N1.5 model for 30k steps on the Libero dataset and compared the results to the GR00T reference results.
96
+
97
+ | Benchmark | LeRobot Implementation | GR00T Reference |
98
+ | ------------------ | ---------------------- | --------------- |
99
+ | **Libero Spatial** | 82.0% | 92.0% |
100
+ | **Libero Object** | 99.0% | 92.0% |
101
+ | **Libero Long** | 82.0% | 76.0% |
102
+ | **Average** | 87.0% | 87.0% |
103
+
104
+ These results demonstrate GR00T's strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.
105
+
106
+ ### Evaluate in your hardware setup
107
+
108
+ Once you have trained your model, you can run inference on your downstream task. Follow the instructions in [Imitation Learning for Robots](./il_robots). For example:
109
+
110
+ ```bash
111
+ lerobot-record \
112
+ --robot.type=bi_so_follower \
113
+ --robot.left_arm_port=/dev/ttyACM1 \
114
+ --robot.right_arm_port=/dev/ttyACM0 \
115
+ --robot.id=bimanual_follower \
116
+ --robot.cameras='{ right: {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30},
117
+ left: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
118
+ top: {"type": "opencv", "index_or_path": 4, "width": 640, "height": 480, "fps": 30},
119
+ }' \
120
+ --display_data=true \
121
+ --dataset.repo_id=<user>/eval_groot-bimanual \
122
+ --dataset.num_episodes=10 \
123
+ --dataset.single_task="Grab and handover the red cube to the other arm" \
124
+ --policy.path=<user>/groot-bimanual \
125
+ --dataset.episode_time_s=30 \
126
+ --dataset.reset_time_s=10  # policy.path points to your trained model
127
+ ```
128
+
129
+ ## License
130
+
131
+ This model follows the **Apache 2.0 License**, consistent with the original [GR00T repository](https://github.com/NVIDIA/Isaac-GR00T).
docs/source/hilserl.mdx ADDED
@@ -0,0 +1,923 @@
1
+ # HIL-SERL Real Robot Training Workflow Guide
2
+
3
+ In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient Reinforcement Learning (HIL-SERL) workflow using LeRobot. You will master training a policy with RL on a real robot in just a few hours.
4
+
5
+ HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you'll use a gamepad to provide interventions and control the robot during the learning process.
6
+
7
+ It combines three key ingredients:
8
+
9
+ 1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.
10
+
11
+ 2. **On-robot actor / learner loop with human interventions:** a distributed Soft Actor Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.
12
+
13
+ 3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, region-of-interest (ROI) crop preprocessing, and WandB monitoring keep the data useful and the hardware safe.
14
+
15
+ Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.
16
+
17
+ <p align="center">
18
+ <img
19
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hilserl-main-figure.png"
20
+ alt="HIL-SERL workflow"
21
+ title="HIL-SERL workflow"
22
+ width="100%"
23
+ ></img>
24
+ </p>
25
+
26
+ <p align="center">
27
+ <i>HIL-SERL workflow, Luo et al. 2024</i>
28
+ </p>
29
+
30
+ This guide provides step-by-step instructions for training a robot policy on a real robot using LeRobot's HIL-SERL implementation.
31
+
32
+ ## What do I need?
33
+
34
+ - A gamepad (recommended) or keyboard to control the robot
35
+ - An NVIDIA GPU
36
+ - A real robot with a follower arm and a leader arm (the leader arm is optional if you use the keyboard or the gamepad)
37
+ - A URDF file for the robot for the kinematics package (check `lerobot/model/kinematics.py`)
38
+
39
+ ## What kind of tasks can I train?
40
+
41
+ One can use HIL-SERL to train on a variety of manipulation tasks. Some recommendations:
42
+
43
+ - Start with a simple task to understand how the system works.
44
+ - Push cube to a goal region
45
+ - Pick and lift cube with the gripper
46
+ - Avoid extremely long-horizon tasks. Focus on tasks that can be completed in 5-10 seconds.
47
+ - Once you have a good idea of how the system works, you can try more complex tasks and longer horizons.
48
+ - Pick and place cube
49
+ - Bimanual tasks to pick objects with two arms
50
+ - Hand-over tasks to transfer objects from one arm to another
51
+ - Go crazy!
52
+
53
+ ## Install LeRobot with HIL-SERL
54
+
55
+ To install LeRobot with HIL-SERL, you need to install the `hilserl` extra.
56
+
57
+ ```bash
58
+ pip install -e ".[hilserl]"
59
+ ```
60
+
61
+ ## Real Robot Training Workflow
62
+
63
+ ### Understanding Configuration
64
+
65
+ The training process begins with proper configuration for the HILSerl environment. The main configuration class is `GymManipulatorConfig` in `lerobot/rl/gym_manipulator.py`, which contains nested `HILSerlRobotEnvConfig` and `DatasetConfig`. The configuration is organized into focused, nested sub-configs:
66
+
67
+ <!-- prettier-ignore-start -->
68
+ ```python
69
+ class GymManipulatorConfig:
70
+     env: HILSerlRobotEnvConfig  # Environment configuration (nested)
71
+     dataset: DatasetConfig  # Dataset recording/replay configuration (nested)
72
+     mode: str | None = None  # "record", "replay", or None (for training)
73
+     device: str = "cpu"  # Compute device
74
+
75
+ class HILSerlRobotEnvConfig(EnvConfig):
76
+     robot: RobotConfig | None = None  # Main robot agent (defined in `lerobot/robots`)
77
+     teleop: TeleoperatorConfig | None = None  # Teleoperator agent, e.g., gamepad or leader arm
78
+     processor: HILSerlProcessorConfig  # Processing pipeline configuration (nested)
79
+     name: str = "real_robot"  # Environment name
80
+     task: str | None = None  # Task identifier
81
+     fps: int = 10  # Control frequency
82
+
83
+ # Nested processor configuration
84
+ class HILSerlProcessorConfig:
85
+     control_mode: str = "gamepad"  # Control mode
86
+     observation: ObservationConfig | None = None  # Observation processing settings
87
+     image_preprocessing: ImagePreprocessingConfig | None = None  # Image crop/resize settings
88
+     gripper: GripperConfig | None = None  # Gripper control and penalty settings
89
+     reset: ResetConfig | None = None  # Environment reset and timing settings
90
+     inverse_kinematics: InverseKinematicsConfig | None = None  # IK processing settings
91
+     reward_classifier: RewardClassifierConfig | None = None  # Reward classifier settings
92
+     max_gripper_pos: float | None = 100.0  # Maximum gripper position
93
+
94
+ # Sub-configuration classes
95
+ class ObservationConfig:
96
+     add_joint_velocity_to_observation: bool = False  # Add joint velocities to state
97
+     add_current_to_observation: bool = False  # Add motor currents to state
98
+     display_cameras: bool = False  # Display camera feeds during execution
99
+
100
+ class ImagePreprocessingConfig:
101
+     crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None  # Image cropping parameters
102
+     resize_size: tuple[int, int] | None = None  # Target image size
103
+
104
+ class GripperConfig:
105
+     use_gripper: bool = True  # Enable gripper control
106
+     gripper_penalty: float = 0.0  # Penalty for inappropriate gripper usage
107
+
108
+ class ResetConfig:
109
+     fixed_reset_joint_positions: Any | None = None  # Joint positions for reset
110
+     reset_time_s: float = 5.0  # Time to wait during reset
111
+     control_time_s: float = 20.0  # Maximum episode duration
112
+     terminate_on_success: bool = True  # Whether to terminate episodes on success detection
113
+
114
+ class InverseKinematicsConfig:
115
+     urdf_path: str | None = None  # Path to robot URDF file
116
+     target_frame_name: str | None = None  # End-effector frame name
117
+     end_effector_bounds: dict[str, list[float]] | None = None  # EE workspace bounds
118
+     end_effector_step_sizes: dict[str, float] | None = None  # EE step sizes per axis
119
+
120
+ class RewardClassifierConfig:
121
+     pretrained_path: str | None = None  # Path to pretrained reward classifier
122
+     success_threshold: float = 0.5  # Success detection threshold
123
+     success_reward: float = 1.0  # Reward value for successful episodes
124
+
125
+ # Dataset configuration
126
+ class DatasetConfig:
127
+     repo_id: str  # LeRobot dataset repository ID
128
+     task: str  # Task identifier
129
+     root: str | None = None  # Local dataset root directory
130
+     num_episodes_to_record: int = 5  # Number of episodes for recording
131
+     replay_episode: int | None = None  # Episode index for replay
132
+     push_to_hub: bool = False  # Whether to push datasets to Hub
133
+ ```
134
+ <!-- prettier-ignore-end -->
135
+
136
+ ### Processor Pipeline Architecture
137
+
138
+ HIL-SERL uses a modular processor pipeline architecture that processes robot observations and actions through a series of composable steps. The pipeline is divided into two main components:
139
+
140
+ #### Environment Processor Pipeline
141
+
142
+ The environment processor (`env_processor`) handles incoming observations and environment state:
143
+
144
+ 1. **VanillaObservationProcessorStep**: Converts raw robot observations into standardized format
145
+ 2. **JointVelocityProcessorStep** (optional): Adds joint velocity information to observations
146
+ 3. **MotorCurrentProcessorStep** (optional): Adds motor current readings to observations
147
+ 4. **ForwardKinematicsJointsToEE** (optional): Computes end-effector pose from joint positions
148
+ 5. **ImageCropResizeProcessorStep** (optional): Crops and resizes camera images
149
+ 6. **TimeLimitProcessorStep** (optional): Enforces episode time limits
150
+ 7. **GripperPenaltyProcessorStep** (optional): Applies penalties for inappropriate gripper usage
151
+ 8. **RewardClassifierProcessorStep** (optional): Automated reward detection using vision models
152
+ 9. **AddBatchDimensionProcessorStep**: Converts data to batch format for neural network processing
153
+ 10. **DeviceProcessorStep**: Moves data to the specified compute device (CPU/GPU); steps 9-10 are sketched below
154
+
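+ The last two steps are standard tensor plumbing; conceptually they do something like the following (a minimal illustration with made-up names, not the actual step classes):
+
+ ```python
+ # Illustrative sketch of steps 9-10: batch the observation and move it to the device
+ import torch
+
+ def to_batched_device(obs: dict[str, torch.Tensor], device: str) -> dict[str, torch.Tensor]:
+     # Add a leading batch dimension and move every tensor to the compute device
+     return {k: v.unsqueeze(0).to(device) for k, v in obs.items()}
+ ```
+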
155
+ #### Action Processor Pipeline
156
+
157
+ The action processor (`action_processor`) handles outgoing actions and human interventions:
158
+
159
+ 1. **AddTeleopActionAsComplimentaryDataStep**: Captures teleoperator actions for logging
160
+ 2. **AddTeleopEventsAsInfoStep**: Records intervention events and episode control signals
161
+ 3. **InterventionActionProcessorStep**: Handles human interventions and episode termination
162
+ 4. **Inverse Kinematics Pipeline** (when enabled):
163
+ - **MapDeltaActionToRobotActionStep**: Converts delta actions to robot action format
164
+ - **EEReferenceAndDelta**: Computes end-effector reference and delta movements
165
+ - **EEBoundsAndSafety**: Enforces workspace safety bounds
166
+ - **InverseKinematicsEEToJoints**: Converts end-effector actions to joint targets
167
+ - **GripperVelocityToJoint**: Handles gripper control commands
168
+
169
+ #### Configuration Examples
170
+
171
+ **Basic Observation Processing**:
172
+
173
+ ```json
174
+ {
175
+ "env": {
176
+ "processor": {
177
+ "observation": {
178
+ "add_joint_velocity_to_observation": true,
179
+ "add_current_to_observation": false,
180
+ "display_cameras": false
181
+ }
182
+ }
183
+ }
184
+ }
185
+ ```
186
+
187
+ **Image Processing**:
188
+
189
+ ```json
190
+ {
191
+ "env": {
192
+ "processor": {
193
+ "image_preprocessing": {
194
+ "crop_params_dict": {
195
+ "observation.images.front": [180, 250, 120, 150],
196
+ "observation.images.side": [180, 207, 180, 200]
197
+ },
198
+ "resize_size": [128, 128]
199
+ }
200
+ }
201
+ }
202
+ }
203
+ ```
204
+
205
+ **Inverse Kinematics Setup**:
206
+
207
+ ```json
208
+ {
209
+ "env": {
210
+ "processor": {
211
+ "inverse_kinematics": {
212
+ "urdf_path": "path/to/robot.urdf",
213
+ "target_frame_name": "end_effector",
214
+ "end_effector_bounds": {
215
+ "min": [0.16, -0.08, 0.03],
216
+ "max": [0.24, 0.2, 0.1]
217
+ },
218
+ "end_effector_step_sizes": {
219
+ "x": 0.02,
220
+ "y": 0.02,
221
+ "z": 0.02
222
+ }
223
+ }
224
+ }
225
+ }
226
+ }
227
+ ```
228
+
229
+ ### Advanced Observation Processing
230
+
231
+ The HIL-SERL framework supports additional observation processing features that can improve policy learning:
232
+
233
+ #### Joint Velocity Processing
234
+
235
+ Enable joint velocity estimation to provide the policy with motion information:
236
+
237
+ ```json
238
+ {
239
+ "env": {
240
+ "processor": {
241
+ "observation": {
242
+ "add_joint_velocity_to_observation": true
243
+ }
244
+ }
245
+ }
246
+ }
247
+ ```
248
+
249
+ This processor:
250
+
251
+ - Estimates joint velocities using finite differences between consecutive joint position readings (see the sketch after this list)
252
+ - Adds velocity information to the observation state vector
253
+ - Useful for policies that need motion awareness for dynamic tasks
254
+
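+ A minimal sketch of this finite-difference estimation (illustrative names, not the actual `JointVelocityProcessorStep` implementation):
+
+ ```python
+ # Estimate joint velocities by differencing consecutive position readings
+ import numpy as np
+
+ class JointVelocityEstimator:
+     def __init__(self, dt: float):
+         self.dt = dt  # control period, e.g. 1 / fps
+         self.prev_pos = None
+
+     def __call__(self, joint_pos: np.ndarray) -> np.ndarray:
+         if self.prev_pos is None:
+             vel = np.zeros_like(joint_pos)  # no history on the first step
+         else:
+             vel = (joint_pos - self.prev_pos) / self.dt
+         self.prev_pos = joint_pos
+         return np.concatenate([joint_pos, vel])  # augmented state vector
+ ```
+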
255
+ #### Motor Current Processing
256
+
257
+ Monitor motor currents to detect contact forces and load conditions:
258
+
259
+ ```json
260
+ {
261
+ "env": {
262
+ "processor": {
263
+ "observation": {
264
+ "add_current_to_observation": true
265
+ }
266
+ }
267
+ }
268
+ }
269
+ ```
270
+
271
+ This processor:
272
+
273
+ - Reads motor current values from the robot's control system
274
+ - Adds current measurements to the observation state vector
275
+ - Helps detect contact events, object weights, and mechanical resistance
276
+ - Useful for contact-rich manipulation tasks
277
+
278
+ #### Combined Observation Processing
279
+
280
+ You can enable multiple observation processing features simultaneously:
281
+
282
+ ```json
283
+ {
284
+ "env": {
285
+ "processor": {
286
+ "observation": {
287
+ "add_joint_velocity_to_observation": true,
288
+ "add_current_to_observation": true,
289
+ "display_cameras": false
290
+ }
291
+ }
292
+ }
293
+ }
294
+ ```
295
+
296
+ **Note**: Enabling additional observation features increases the state space dimensionality, which may require adjusting your policy network architecture and potentially collecting more training data.
297
+
298
+ ### Finding Robot Workspace Bounds
299
+
300
+ Before collecting demonstrations, you need to determine the appropriate operational bounds for your robot.
301
+
302
+ This helps simplify the problem of learning on the real robot in two ways: 1) by limiting the robot's operational space to a specific region that solves the task and avoids unnecessary or unsafe exploration, and 2) by allowing training in end-effector space rather than joint space. Empirically, learning in joint space for reinforcement learning in manipulation is often a harder problem: some tasks are nearly impossible to learn in joint space but become learnable when the action space is transformed to end-effector coordinates.
303
+
304
+ **Using lerobot-find-joint-limits**
305
+
306
+ This script helps you find the safe operational bounds for your robot's end-effector. Given that you have a follower and leader arm, you can use the script to find the bounds for the follower arm that will be applied during training.
307
+ Bounding the action space reduces the agent's redundant exploration and guarantees safety.
308
+
309
+ ```bash
310
+ lerobot-find-joint-limits \
311
+ --robot.type=so100_follower \
312
+ --robot.port=/dev/tty.usbmodem58760431541 \
313
+ --robot.id=black \
314
+ --teleop.type=so100_leader \
315
+ --teleop.port=/dev/tty.usbmodem58760431551 \
316
+ --teleop.id=blue
317
+ ```
318
+
319
+ **Workflow**
320
+
321
+ 1. Run the script and move the robot through the space that solves the task
322
+ 2. The script will record the minimum and maximum end-effector positions and the joint angles and print them to the console, for example:
323
+ ```
324
+ Max ee position [0.2417 0.2012 0.1027]
325
+ Min ee position [0.1663 -0.0823 0.0336]
326
+ Max joint positions [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
327
+ Min joint positions [-20.0, -20.0, -20.0, -20.0, -20.0, -20.0]
328
+ ```
329
+ 3. Use these values in the configuration of your teleoperation device (TeleoperatorConfig) under the `end_effector_bounds` field
330
+
331
+ **Example Configuration**
332
+
333
+ ```json
334
+ "end_effector_bounds": {
335
+ "max": [0.24, 0.20, 0.10],
336
+ "min": [0.16, -0.08, 0.03]
337
+ }
338
+ ```
339
+
340
+ ### Collecting Demonstrations
341
+
342
+ With the bounds defined, you can safely collect demonstrations for training. Training RL with an off-policy algorithm allows us to use the collected offline datasets to improve the efficiency of the learning process.
343
+
344
+ **Setting Up Record Mode**
345
+
346
+ Create a configuration file for recording demonstrations (or edit an existing one like [env_config.json](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/env_config.json)):
347
+
348
+ 1. Set `mode` to `"record"` at the root level
349
+ 2. Specify a unique `repo_id` for your dataset in the `dataset` section (e.g., "username/task_name")
350
+ 3. Set `num_episodes_to_record` in the `dataset` section to the number of demonstrations you want to collect
351
+ 4. Set `env.processor.image_preprocessing.crop_params_dict` to `{}` initially (we'll determine crops later)
352
+ 5. Configure `env.robot`, `env.teleop`, and other hardware settings in the `env` section
353
+
354
+ Example configuration section:
355
+
356
+ ```json
357
+ {
358
+ "env": {
359
+ "type": "gym_manipulator",
360
+ "name": "real_robot",
361
+ "fps": 10,
362
+ "processor": {
363
+ "control_mode": "gamepad",
364
+ "observation": {
365
+ "display_cameras": false
366
+ },
367
+ "image_preprocessing": {
368
+ "crop_params_dict": {},
369
+ "resize_size": [128, 128]
370
+ },
371
+ "gripper": {
372
+ "use_gripper": true,
373
+ "gripper_penalty": 0.0
374
+ },
375
+ "reset": {
376
+ "reset_time_s": 5.0,
377
+ "control_time_s": 20.0
378
+ }
379
+ },
380
+ "robot": {
381
+ // ... robot configuration ...
382
+ },
383
+ "teleop": {
384
+ // ... teleoperator configuration ...
385
+ }
386
+ },
387
+ "dataset": {
388
+ "repo_id": "username/pick_lift_cube",
389
+ "root": null,
390
+ "task": "pick_and_lift",
391
+ "num_episodes_to_record": 15,
392
+ "replay_episode": 0,
393
+ "push_to_hub": true
394
+ },
395
+ "mode": "record",
396
+ "device": "cpu"
397
+ }
398
+ ```
399
+
400
+ ### Using a Teleoperation Device
401
+
402
+ Along with your robot, you will need a teleoperation device to control it in order to collect datasets of your task and perform interventions during the online training.
403
+ We support using a gamepad, a keyboard, or the leader arm of the robot.
404
+
405
+ HIL-SERL learns actions in the end-effector space of the robot. Therefore, the teleoperation will control the end-effector's x,y,z displacements.
406
+
407
+ For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class `SO100FollowerEndEffector` and its configuration `SO100FollowerEndEffectorConfig` for the default parameters related to the end-effector space.
408
+
409
+ <!-- prettier-ignore-start -->
410
+ ```python
411
+ class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
412
+     """Configuration for the SO100FollowerEndEffector robot."""
413
+
414
+     # Default bounds for the end-effector position (in meters)
415
+     end_effector_bounds: dict[str, list[float]] = field(  # bounds for the end-effector in x,y,z direction
416
+         default_factory=lambda: {
417
+             "min": [-1.0, -1.0, -1.0],  # min x, y, z
418
+             "max": [1.0, 1.0, 1.0],  # max x, y, z
419
+         }
420
+     )
421
+
422
+     max_gripper_pos: float = 50  # maximum gripper position that the gripper will be open at
423
+
424
+     end_effector_step_sizes: dict[str, float] = field(  # maximum step size for the end-effector in x,y,z direction
425
+         default_factory=lambda: {
426
+             "x": 0.02,
427
+             "y": 0.02,
428
+             "z": 0.02,
429
+         }
430
+     )
431
+ ```
432
+ <!-- prettier-ignore-end -->
433
+
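+ To make the role of `end_effector_step_sizes` and `end_effector_bounds` concrete, here is a rough sketch of how a per-axis teleop command in [-1, 1] can be turned into a bounded end-effector target (illustrative only, not the actual robot class):
+
+ ```python
+ # Scale a teleop delta by the per-axis step sizes, then clamp to the bounds
+ import numpy as np
+
+ STEP = np.array([0.02, 0.02, 0.02])  # end_effector_step_sizes (x, y, z)
+ LO = np.array([-1.0, -1.0, -1.0])    # end_effector_bounds "min"
+ HI = np.array([1.0, 1.0, 1.0])       # end_effector_bounds "max"
+
+ def apply_delta(ee_pos: np.ndarray, delta: np.ndarray) -> np.ndarray:
+     delta = np.clip(delta, -1.0, 1.0)  # e.g. a gamepad stick reading
+     return np.clip(ee_pos + STEP * delta, LO, HI)
+ ```
+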
434
+ The `Teleoperator` defines the teleoperation device. You can check the list of available teleoperators in `lerobot/teleoperators`.
435
+
436
+ **Setting up the Gamepad**
437
+
438
+ The gamepad provides a very convenient way to control the robot and the episode state.
439
+
440
+ To set up the gamepad, you need to set the `control_mode` to `"gamepad"` and define the `teleop` section in the configuration file.
441
+
442
+ ```json
443
+ {
444
+ "env": {
445
+ "teleop": {
446
+ "type": "gamepad",
447
+ "use_gripper": true
448
+ },
449
+ "processor": {
450
+ "control_mode": "gamepad",
451
+ "gripper": {
452
+ "use_gripper": true
453
+ }
454
+ }
455
+ }
456
+ }
457
+ ```
458
+
459
+ <p align="center">
460
+ <img
461
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
462
+ alt="Figure shows the control mappings on a Logitech gamepad."
463
+ title="Gamepad Control Mapping"
464
+ width="100%"
465
+ ></img>
466
+ </p>
467
+ <p align="center">
468
+ <i>Gamepad button mapping for robot control and episode management</i>
469
+ </p>
470
+
471
+ **Setting up the SO101 leader**
472
+
473
+ The SO101 leader arm has reduced gears that allow it to move and track the follower arm during exploration. Therefore, taking over is much smoother than with the gearless SO100.
474
+
475
+ To set up the SO101 leader, you need to set the `control_mode` to `"leader"` and define the `teleop` section in the configuration file.
476
+
477
+ ```json
478
+ {
479
+ "env": {
480
+ "teleop": {
481
+ "type": "so101_leader",
482
+ "port": "/dev/tty.usbmodem585A0077921",
483
+ "use_degrees": true
484
+ },
485
+ "processor": {
486
+ "control_mode": "leader",
487
+ "gripper": {
488
+ "use_gripper": true
489
+ }
490
+ }
491
+ }
492
+ }
493
+ ```
494
+
495
+ In order to annotate the success/failure of the episode, **you will need** to use a keyboard: press `s` for success or `esc` for failure.
496
+ During online training, press `space` to take over from the policy and `space` again to give control back to the policy.
497
+
498
+ <details>
499
+ <summary><strong>Video: SO101 leader teleoperation</strong></summary>
500
+
501
+ <div class="video-container">
502
+ <video controls width="600">
503
+ <source
504
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so101_leader_tutorial.mp4"
505
+ type="video/mp4"
506
+ />
507
+ </video>
508
+ </div>
509
+
510
+ <p align="center"><i>SO101 leader teleoperation example, the leader tracks the follower, press `space` to intervene</i></p>
511
+ </details>
512
+
513
+ **Recording Demonstrations**
514
+
515
+ Start the recording process; an example config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_so100.json):
516
+
517
+ ```bash
518
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
519
+ ```
520
+
521
+ During recording:
522
+
523
+ 1. The robot will reset to the initial position defined in the configuration file `env.processor.reset.fixed_reset_joint_positions`
524
+ 2. Complete the task successfully
525
+ 3. The episode ends with a reward of 1 when you press the "success" button
526
+ 4. If the time limit is reached, or the fail button is pressed, the episode ends with a reward of 0
527
+ 5. You can rerecord an episode by pressing the "rerecord" button
528
+ 6. The process automatically continues to the next episode
529
+ 7. After recording all episodes, the dataset is pushed to the Hugging Face Hub (optional) and saved locally
530
+
531
+ ### Processing the Dataset
532
+
533
+ After collecting demonstrations, process them to determine optimal camera crops.
534
+ Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.
535
+
536
+ Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. Good ROI selection should:
537
+
538
+ - Include only the essential workspace where the task happens
539
+ - Capture the robot's end-effector and all objects involved in the task
540
+ - Exclude unnecessary background elements and distractions
541
+
542
+ Note: If you already know the crop parameters, you can skip this step and just set the `crop_params_dict` in the configuration file during recording.
543
+
544
+ **Determining Crop Parameters**
545
+
546
+ Use the `crop_dataset_roi.py` script to interactively select regions of interest in your camera images:
547
+
548
+ ```bash
549
+ python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
550
+ ```
551
+
552
+ 1. For each camera view, the script will display the first frame
553
+ 2. Draw a rectangle around the relevant workspace area
554
+ 3. Press 'c' to confirm the selection
555
+ 4. Repeat for all camera views
556
+ 5. The script outputs cropping parameters and creates a new cropped dataset
557
+
558
+ Example output:
559
+
560
+ ```
561
+ Selected Rectangular Regions of Interest (top, left, height, width):
562
+ observation.images.side: [180, 207, 180, 200]
563
+ observation.images.front: [180, 250, 120, 150]
564
+ ```
565
+
566
+ <p align="center">
567
+ <img
568
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/crop_dataset.gif"
569
+ width="600"
570
+ />
571
+ </p>
572
+
573
+ <p align="center">
574
+ <i>Interactive cropping tool for selecting regions of interest</i>
575
+ </p>
576
+
577
+ **Updating Configuration**
578
+
579
+ Add these crop parameters to your training configuration:
580
+
581
+ ```json
582
+ {
583
+ "env": {
584
+ "processor": {
585
+ "image_preprocessing": {
586
+ "crop_params_dict": {
587
+ "observation.images.side": [180, 207, 180, 200],
588
+ "observation.images.front": [180, 250, 120, 150]
589
+ },
590
+ "resize_size": [128, 128]
591
+ }
592
+ }
593
+ }
594
+ }
595
+ ```
596
+
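+ At runtime these parameters are applied as a crop followed by a resize. A sketch of an equivalent transform (parameters are `(top, left, height, width)`, matching the script's output; in LeRobot this is handled by `ImageCropResizeProcessorStep`):
+
+ ```python
+ # Apply the recorded ROI crop, then resize to the policy's input resolution
+ import torch
+ from torchvision.transforms.v2 import functional as F
+
+ def crop_and_resize(img: torch.Tensor, top: int, left: int, height: int,
+                     width: int, size: tuple[int, int] = (128, 128)) -> torch.Tensor:
+     img = F.crop(img, top, left, height, width)
+     return F.resize(img, list(size))
+ ```
+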
597
+ **Recommended image resolution**
598
+
599
+ Most vision-based policies have been validated on square inputs of either **128×128** (default) or **64×64** pixels. We therefore advise setting the resize_size parameter to [128, 128] – or [64, 64] if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.
600
+
601
+ ### Training a Reward Classifier
602
+
603
+ The reward classifier plays an important role in the HIL-SERL workflow by automating reward assignment and automatically detecting episode success. Instead of manually defining reward functions or relying on human feedback for every timestep, the reward classifier learns to predict success/failure from visual observations. This enables the RL algorithm to learn efficiently by providing consistent and automated reward signals based on the robot's camera inputs.
604
+
605
+ This guide explains how to train a reward classifier for LeRobot's human-in-the-loop reinforcement learning implementation. Reward classifiers learn to predict the reward value given a state, which can then be used in an RL setup to train a policy.
606
+
607
+ **Note**: Training a reward classifier is optional. You can start the first round of RL experiments by annotating success manually with your gamepad or keyboard device.
608
+
609
+ The reward classifier implementation in `modeling_classifier.py` uses a pretrained vision model to process the images. It can output either a single value for binary rewards to predict success/fail cases or multiple values for multi-class settings.
610
+
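+ Conceptually, the classifier's output becomes a sparse reward by thresholding the predicted success probability, along these lines (an illustrative sketch mirroring the `success_threshold` / `success_reward` settings, not the actual implementation):
+
+ ```python
+ # Turn classifier logits into a binary reward signal
+ import torch
+
+ def reward_from_logits(logits: torch.Tensor,
+                        success_threshold: float = 0.5,
+                        success_reward: float = 1.0) -> float:
+     prob_success = torch.softmax(logits, dim=-1)[..., -1].item()  # P(success)
+     return success_reward if prob_success > success_threshold else 0.0
+ ```
+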
611
+ **Collecting a Dataset for the reward classifier**
612
+
613
+ Before training, you need to collect a dataset with labeled examples. The `record_dataset` function in `gym_manipulator.py` handles collecting a dataset of observations, actions, and rewards.
614
+
615
+ To collect a dataset, you need to modify some parameters in the environment configuration based on HILSerlRobotEnvConfig.
616
+
617
+ ```bash
618
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
619
+ ```
620
+
621
+ **Key Parameters for Data Collection**
622
+
623
+ - **mode**: set it to `"record"` to collect a dataset (at root level)
624
+ - **dataset.repo_id**: `"hf_username/dataset_name"`, name of the dataset and repo on the hub
625
+ - **dataset.num_episodes_to_record**: Number of episodes to record
626
+ - **env.processor.reset.terminate_on_success**: Whether to automatically terminate episodes when success is detected (default: `true`)
627
+ - **env.fps**: Number of frames per second to record
628
+ - **dataset.push_to_hub**: Whether to push the dataset to the hub
629
+
630
+ The `env.processor.reset.terminate_on_success` parameter allows you to control episode termination behavior. When set to `false`, episodes will continue even after success is detected, allowing you to collect more positive examples with the reward=1 label. This is crucial for training reward classifiers as it provides more success state examples in your dataset. When set to `true` (default), episodes terminate immediately upon success detection.
631
+
632
+ **Important**: For reward classifier training, set `terminate_on_success: false` to collect sufficient positive examples. For regular HIL-SERL training, keep it as `true` to enable automatic episode termination when the task is completed successfully.
633
+
634
+ Example configuration section for data collection:
635
+
636
+ ```json
637
+ {
638
+ "env": {
639
+ "type": "gym_manipulator",
640
+ "name": "real_robot",
641
+ "fps": 10,
642
+ "processor": {
643
+ "reset": {
644
+ "reset_time_s": 5.0,
645
+ "control_time_s": 20.0,
646
+ "terminate_on_success": false
647
+ },
648
+ "gripper": {
649
+ "use_gripper": true
650
+ }
651
+ },
652
+ "robot": {
653
+ // ... robot configuration ...
654
+ },
655
+ "teleop": {
656
+ // ... teleoperator configuration ...
657
+ }
658
+ },
659
+ "dataset": {
660
+ "repo_id": "hf_username/dataset_name",
661
+ "dataset_root": "data/your_dataset",
662
+ "task": "reward_classifier_task",
663
+ "num_episodes_to_record": 20,
664
+ "replay_episode": null,
665
+ "push_to_hub": true
666
+ },
667
+ "mode": "record",
668
+ "device": "cpu"
669
+ }
670
+ ```
671
+
672
+ **Reward Classifier Configuration**
673
+
674
+ The reward classifier is configured using `configuration_classifier.py`. Here are the key parameters:
675
+
676
+ - **model_name**: Base model architecture (e.g., we mainly use `"helper2424/resnet10"`)
677
+ - **model_type**: `"cnn"` or `"transformer"`
678
+ - **num_cameras**: Number of camera inputs
679
+ - **num_classes**: Number of output classes (typically 2 for binary success/failure)
680
+ - **hidden_dim**: Size of hidden representation
681
+ - **dropout_rate**: Regularization parameter
682
+ - **learning_rate**: Learning rate for optimizer
683
+
684
+ Example configuration for training the [reward classifier](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/reward_classifier_train_config.json):
685
+
686
+ ```json
687
+ {
688
+ "policy": {
689
+ "type": "reward_classifier",
690
+ "model_name": "helper2424/resnet10",
691
+ "model_type": "cnn",
692
+ "num_cameras": 2,
693
+ "num_classes": 2,
694
+ "hidden_dim": 256,
695
+ "dropout_rate": 0.1,
696
+ "learning_rate": 1e-4,
697
+ "device": "cuda",
698
+ "use_amp": true,
699
+ "input_features": {
700
+ "observation.images.front": {
701
+ "type": "VISUAL",
702
+ "shape": [3, 128, 128]
703
+ },
704
+ "observation.images.side": {
705
+ "type": "VISUAL",
706
+ "shape": [3, 128, 128]
707
+ }
708
+ }
709
+ }
710
+ }
711
+ ```
712
+
713
+ **Training the Classifier**
714
+
715
+ To train the classifier, use the `lerobot-train` script with your configuration:
716
+
717
+ ```bash
718
+ lerobot-train --config_path path/to/reward_classifier_train_config.json
719
+ ```
720
+
721
+ **Deploying and Testing the Model**
722
+
723
+ To use your trained reward classifier, configure the `HILSerlRobotEnvConfig` to use your model:
724
+
725
+ <!-- prettier-ignore-start -->
726
+ ```python
727
+ config = GymManipulatorConfig(
728
+     env=HILSerlRobotEnvConfig(
729
+         processor=HILSerlProcessorConfig(
730
+             reward_classifier=RewardClassifierConfig(
731
+                 pretrained_path="path_to_your_pretrained_model"
732
+             )
733
+         ),
734
+         # Other environment parameters
735
+     ),
736
+     dataset=DatasetConfig(...),
737
+     mode=None  # For training
738
+ )
739
+ ```
740
+ <!-- prettier-ignore-end -->
741
+
742
+ Or set the arguments in the JSON config file:
743
+
744
+ ```json
745
+ {
746
+ "env": {
747
+ "processor": {
748
+ "reward_classifier": {
749
+ "pretrained_path": "path_to_your_pretrained_model",
750
+ "success_threshold": 0.7,
751
+ "success_reward": 1.0
752
+ },
753
+ "reset": {
754
+ "terminate_on_success": true
755
+ }
756
+ }
757
+ }
758
+ }
759
+ ```
760
+
761
+ Run `gym_manipulator.py` to test the model.
762
+
763
+ ```bash
764
+ python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
765
+ ```
766
+
767
+ The reward classifier will automatically provide rewards based on the visual input from the robot's cameras.
768
+
769
+ **Example Workflow for training the reward classifier**
770
+
771
+ 1. **Create the configuration files**:
772
+ Create the necessary json configuration files for the reward classifier and the environment. Check the examples [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/reward_classifier/config.json).
773
+
774
+ 2. **Collect a dataset**:
775
+
776
+ ```bash
777
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
778
+ ```
779
+
780
+ 3. **Train the classifier**:
781
+
782
+ ```bash
783
+ lerobot-train --config_path src/lerobot/configs/reward_classifier_train_config.json
784
+ ```
785
+
786
+ 4. **Test the classifier**:
787
+ ```bash
788
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
789
+ ```
790
+
791
+ ### Training with Actor-Learner
792
+
793
+ The LeRobot system uses a distributed actor-learner architecture for training. This architecture decouples robot interactions from the learning process, allowing them to run concurrently without blocking each other. The actor server handles robot observations and actions, sending interaction data to the learner server. The learner server performs gradient descent and periodically updates the actor's policy weights. You will need to start two processes: a learner and an actor.
794
+
795
+ **Configuration Setup**
796
+
797
+ Create a training configuration file (example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/train_config.json)). The training config is based on the main `TrainRLServerPipelineConfig` class in `lerobot/configs/train.py`.
798
+
799
+ 1. Configure the policy settings (`type="sac"`, `device`, etc.)
800
+ 2. Set `dataset` to your cropped dataset
801
+ 3. Configure environment settings with crop parameters
802
+ 4. Check the other parameters related to SAC in [configuration_sac.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/sac/configuration_sac.py#L79).
803
+ 5. Verify that the `policy` config is correct with the right `input_features` and `output_features` for your task.
804
+
805
+ **Starting the Learner**
806
+
807
+ First, start the learner server process:
808
+
809
+ ```bash
810
+ python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
811
+ ```
812
+
813
+ The learner:
814
+
815
+ - Initializes the policy network
816
+ - Prepares replay buffers
817
+ - Opens a `gRPC` server to communicate with actors
818
+ - Processes transitions and updates the policy
819
+
820
+ **Starting the Actor**
821
+
822
+ In a separate terminal, start the actor process with the same configuration:
823
+
824
+ ```bash
825
+ python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
826
+ ```
827
+
828
+ The actor:
829
+
830
+ - Connects to the learner via `gRPC`
831
+ - Initializes the environment
832
+ - Execute rollouts of the policy to collect experience
833
+ - Sends transitions to the learner
834
+ - Receives updated policy parameters
835
+
836
+ **Training Flow**
837
+
838
+ The training proceeds automatically:
839
+
840
+ 1. The actor executes the policy in the environment
841
+ 2. Transitions are collected and sent to the learner
842
+ 3. The learner updates the policy based on these transitions
843
+ 4. Updated policy parameters are sent back to the actor
844
+ 5. The process continues until the specified step limit is reached (a simplified sketch of this loop follows below)
845
+
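+ A heavily simplified sketch of the actor-side flow (illustrative pseudocode with made-up helper names; the real `lerobot.rl.actor` runs as a separate process and streams transitions over gRPC):
+
+ ```python
+ # Toy sketch of the actor loop: act, record the transition, sync weights
+ def actor_loop(env, policy, send_transition, receive_params):
+     obs, info = env.reset()
+     while True:
+         action = policy.select_action(obs)  # or a human override during interventions
+         next_obs, reward, terminated, truncated, info = env.step(action)
+         send_transition(obs, action, reward, next_obs, terminated)  # -> learner
+         policy.load_params(receive_params())  # <- fresh weights from the learner
+         obs = env.reset()[0] if (terminated or truncated) else next_obs
+ ```
+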
846
+ **Human in the Loop**
847
+
848
+ - The key to learning efficiently is to have human interventions that provide corrective feedback and complete the task, aiding policy learning and exploration.
849
+ - To perform human interventions, you can press the upper right trigger button on the gamepad (or the `space` key on the keyboard). This will pause the policy actions and allow you to take over.
850
+ - A successful experiment is one where the human has to intervene at the start but then reduces the amount of interventions as the policy improves. You can monitor the intervention rate in the `wandb` dashboard.
851
+
852
+ <p align="center">
853
+ <img
854
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hil_effect.png?raw=true"
855
+ alt="Plot showing the effect of human interventions on policy learning"
857
+ title="Effect of human interventions"
857
+ width="100%"
858
+ ></img>
859
+ </p>
860
+
861
+ <p align="center">
862
+ <i>
863
+ Example showing how human interventions help guide policy learning over time
864
+ </i>
865
+ </p>
866
+
867
+ - The figure plots the episodic reward over interaction steps and shows the effect of human interventions on policy learning.
868
+ - The orange curve is an experiment without any human interventions, while the pink and blue curves are experiments with human interventions.
869
+ - We can observe that the number of steps before the policy starts achieving the maximum reward is cut by a quarter when human interventions are present.
870
+
871
+ **Monitoring and Debugging**
872
+
873
+ If you have `wandb.enable` set to `true` in your configuration, you can monitor training progress in real-time through the [Weights & Biases](https://wandb.ai/site/) dashboard.
874
+
875
+ ### Guide to Human Interventions
876
+
877
+ The learning process is very sensitive to the intervention strategy. It will take a few runs to understand how to intervene effectively. Some tips and hints:
878
+
879
+ - Allow the policy to explore for a few episodes at the start of training.
880
+ - Avoid intervening for long periods of time. Try to intervene only to correct the robot's behaviour when it goes off track.
881
+ - Once the policy starts achieving the task, even if it's not perfect, you can limit your interventions to quick actions like a simple grasping command.
882
+
883
+ The ideal behaviour is that your intervention rate should drop gradually during training as shown in the figure below.
884
+
885
+ <p align="center">
886
+ <img
887
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/intervention_rate_tutorial_rl.png?raw=true"
888
+ alt="Intervention rate"
889
+ title="Intervention rate during training"
890
+ width="100%"
891
+ ></img>
892
+ </p>
893
+
894
+ <p align="center">
895
+ <i>
896
+ Plot of the intervention rate during a training run on a pick and lift cube
897
+ task
898
+ </i>
899
+ </p>
900
+
901
+ ### Key hyperparameters to tune
902
+
903
+ Some configuration values have a disproportionate impact on training stability and speed:
904
+
905
+ - **`temperature_init`** (`policy.temperature_init`) – initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is `1e-2`. We observed that setting it too high can make human interventions ineffective and slow down learning.
906
+ - **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) – interval in _seconds_ between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
907
+ - **`storage_device`** (`policy.storage_device`) – device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to `"cuda"` (instead of the default `"cpu"`). Keeping the weights on-GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
908
+
909
+ Congrats 🎉, you have finished this tutorial!
910
+
911
+ > [!TIP]
912
+ > If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
913
+
914
+ Paper citation:
915
+
916
+ ```
917
+ @article{luo2024precise,
918
+ title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
919
+ author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
920
+ journal={arXiv preprint arXiv:2410.21845},
921
+ year={2024}
922
+ }
923
+ ```