Commit 085a012
Committed by: openhands
Initial OpenHands Index leaderboard based on ASTA Bench
- Updated branding (title, README, logo)
- Changed dataset references to OpenHands Index datasets
- Modified agenteval.json to include 6 OpenHands datasets:
* swe-bench
* multi-swe-bench
* swe-bench-multimodal
* swt-bench
* commit0
* gaia
- Added mock data for initial testing
Co-authored-by: openhands <openhands@all-hands.dev>
This view is limited to 50 files because the commit contains too many changes.
- .gitattributes +1 -0
- .github/workflows/integration-tests.yml +44 -0
- .gitignore +184 -0
- Dockerfile +44 -0
- README.md +60 -0
- about.py +144 -0
- aliases.py +23 -0
- app.py +282 -0
- assets/api-custom.svg +3 -0
- assets/api-equivalent.svg +3 -0
- assets/api-legend.svg +3 -0
- assets/api-standard.svg +3 -0
- assets/c-custom.svg +3 -0
- assets/c-equivalent.svg +3 -0
- assets/c-legend.svg +3 -0
- assets/c-standard.svg +3 -0
- assets/code-execution.svg +265 -0
- assets/custom-legend.svg +3 -0
- assets/data-analysis.svg +265 -0
- assets/ellipse-coral.svg +3 -0
- assets/ellipse-pink.svg +3 -0
- assets/ellipse-white.svg +3 -0
- assets/ellipse-yellow.svg +3 -0
- assets/end-to-end-discovery.svg +265 -0
- assets/equivalent-legend.svg +3 -0
- assets/favicon/favicon.ico +0 -0
- assets/five-point-star.svg +3 -0
- assets/four-point-star.svg +3 -0
- assets/just-icon.svg +3 -0
- assets/literature-understanding.svg +265 -0
- assets/logo.svg +12 -0
- assets/openhands-logo.svg +1 -0
- assets/os-custom.svg +3 -0
- assets/os-equivalent.svg +3 -0
- assets/os-legend.svg +3 -0
- assets/os-ow-custom.svg +3 -0
- assets/os-ow-equivalent.svg +3 -0
- assets/os-ow-legend.svg +3 -0
- assets/os-ow-standard.svg +3 -0
- assets/os-standard.svg +3 -0
- assets/overall.svg +261 -0
- assets/pareto.svg +3 -0
- assets/standard-legend.svg +3 -0
- assets/three-point-star.svg +3 -0
- assets/trophy.svg +3 -0
- assets/up-arrow.svg +3 -0
- c_and_e.py +9 -0
- category_page_builder.py +105 -0
- config.py +22 -0
- content.py +934 -0
.gitattributes
ADDED
@@ -0,0 +1 @@
.github/workflows/integration-tests.yml
ADDED
@@ -0,0 +1,44 @@
```yaml
name: Integration Tests

on:
  pull_request:
    branches: [ main ]

jobs:
  integration-test:
    runs-on: ubuntu-latest

    environment:
      name: testing

    steps:
      - uses: actions/checkout@v4
        with:
          lfs: true

      - name: Set up Python 3.11
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Cache pip dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Run integration tests
        run: |
          pytest tests/integration/ -v --tb=short
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          HF_CONFIG: continuous-integration
          IS_INTERNAL: true
```
.gitignore
ADDED
@@ -0,0 +1,184 @@
```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# PyPI configuration file
.pypirc

# Vim files
*.swp
*.swo
*.un~

# Misc
.DS_Store
.mise.toml
.vscode/
.gradio/

.claude
```
Dockerfile
ADDED
@@ -0,0 +1,44 @@
```dockerfile
FROM python:3.10-slim


# (0) Install SSH client tools (and git, if you're pulling via SSH)
RUN apt-get update && \
    apt-get install -y --no-install-recommends openssh-client git && \
    rm -rf /var/lib/apt/lists/*

# The two following lines are requirements for the Dev Mode to be functional
# Learn more about the Dev Mode at https://huggingface.co/dev-mode-explorers
RUN useradd -m -u 1000 user
WORKDIR /app


# (2) Copy dependencies manifest
COPY --chown=user requirements.txt requirements.txt

# (3) Install dependencies, mounting SSH keys and optional HTTPS creds
RUN --mount=type=secret,id=AGENTEVAL_DEPLOY_KEY,mode=0400,required=true \
    --mount=type=secret,id=ASTABENCH_DEPLOY_KEY,mode=0400,required=true \
    mkdir -p /root/.ssh && chmod 700 /root/.ssh && \
    cat /run/secrets/AGENTEVAL_DEPLOY_KEY > /root/.ssh/id_ed25519 && chmod 600 /root/.ssh/id_ed25519 && \
    cat /run/secrets/ASTABENCH_DEPLOY_KEY > /root/.ssh/id_astabench && chmod 600 /root/.ssh/id_astabench && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts && \
    printf 'Host github.com\n  User git\n  IdentityFile /root/.ssh/id_ed25519\n  IdentityFile /root/.ssh/id_astabench\n  StrictHostKeyChecking no\n' >> /root/.ssh/config && \
    # rewrite all GitHub HTTPS URLs to SSH so nested deps install via SSH
    git config --global url."ssh://git@github.com/".insteadOf "https://github.com/" && \
    pip install --no-cache-dir --upgrade -r requirements.txt

# (4) Copy in your Gradio app code
COPY . .
RUN mkdir -p /home/user/data && chown -R user:user /home/user/data

# Make the app treat this as non-debug (so DATA_DIR=/home/user/data)
ENV system=spaces

# (5) Switch to a non-root user
USER user

# (6) Expose Gradio's default port
EXPOSE 7860

# (7) Launch your app
CMD ["python", "app.py"]
```
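The secret-mounted deploy keys above only work when the image is built with BuildKit, passing each secret by the `id` the Dockerfile expects. A minimal build invocation might look like the following; the `src` paths and image tag are assumptions, so point them at wherever your deploy keys actually live.

```shell
# Enable BuildKit so the `RUN --mount=type=secret` lines are honored.
# The secret ids must match the Dockerfile; the src paths are placeholders.
DOCKER_BUILDKIT=1 docker build \
  --secret id=AGENTEVAL_DEPLOY_KEY,src="$HOME/.ssh/agenteval_deploy_key" \
  --secret id=ASTABENCH_DEPLOY_KEY,src="$HOME/.ssh/astabench_deploy_key" \
  -t openhands-index-leaderboard .
```

Because both secrets are declared `required=true`, the build fails fast if either `--secret` flag is omitted.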
README.md
ADDED
@@ -0,0 +1,60 @@

---
title: OpenHands Index
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
app_file: app.py
pinned: true
license: apache-2.0
hf_oauth: true
app_port: 7860
failure_strategy: none
tags:
  - leaderboard
---

## OpenHands Index Leaderboard

This leaderboard tracks agent performance across multiple software engineering and AI benchmarks.

## Development

The leaderboard is built using the [HuggingFace Datasets](https://huggingface.co/docs/datasets/index) library, which provides a convenient way to manage and query datasets.
Results are sourced from the [OpenHands Index Results](https://github.com/OpenHands/openhands-index-results) repository.

To run the leaderboard locally, first set this environment variable:
```bash
export IS_INTERNAL=true
```
You can then start it with the following command:
```bash
python app.py
```
This starts a local server that you can access in your web browser at `http://localhost:7860`.

## Hugging Face Integration

The repo backs two Hugging Face leaderboard spaces:
- https://huggingface.co/spaces/allenai/asta-bench-internal-leaderboard
- https://huggingface.co/spaces/allenai/asta-bench-leaderboard

Please follow the steps below to push changes to the leaderboards on Hugging Face.

Before pushing, merge your changes to the `main` branch of this repository, following the standard GitHub workflow of creating a branch, making changes, and then merging it back to `main`.

Before pushing for the first time, add the Hugging Face remote repositories if you haven't done so already:

```bash
git remote add huggingface https://huggingface.co/spaces/allenai/asta-bench-internal-leaderboard
git remote add huggingface-public https://huggingface.co/spaces/allenai/asta-bench-leaderboard
```
You can verify that the remotes have been added by running:

```bash
git remote -v
```
Then push the changes to the Hugging Face leaderboards:

```bash
git push huggingface main:main
git push huggingface-public main:main
```
about.py
ADDED
@@ -0,0 +1,144 @@
```python
import gradio as gr


def build_page():
    with gr.Column(elem_id="about-page-content-wrapper"):
        # --- Section 1: About AstaBench ---
        gr.HTML(
            """
            <h2>About AstaBench</h2>
            <p>
            AstaBench is a novel AI agents evaluation framework, providing a challenging new test for AI agents: the first benchmark challenge that evaluates agents’ scientific abilities on a broad spectrum of research skills, including literature understanding, data analysis, planning, tool use, coding, and search. Asta’s set of standard tools makes it easy to build general-purpose science agents and to compare their performance in an apples-to-apples manner.
            </p>
            """
        )
        gr.Markdown("---", elem_classes="divider-line")

        # --- Section 2: Why AstaBench? ---
        gr.HTML(
            """
            <h2>Why AstaBench?</h2>
            <p>
            Most current benchmarks test agentic AI and isolated aspects of scientific reasoning, but rarely evaluate AI agentic behavior rigorously or capture the full skill set scientific research requires. Agents can appear effective despite inconsistent results and high compute use, often outperforming others by consuming more resources. Advancing scientific AI requires evaluations that emphasize reproducibility, efficiency, and the real complexity of research.
            </p>
            <br>
            <p>
            AstaBench fills this gap: an agents evaluation framework and suite of open benchmarks for evaluating scientific AI assistants on core scientific tasks that require novel reasoning. AstaBench helps scientists identify which agents best support their needs through task-relevant leaderboards, while giving AI developers a standard execution environment and tools to test the scientific reasoning capabilities of their agents compared to well-known baselines from the literature, including both open and closed LLM foundation models.
            </p>
            """
        )
        gr.Markdown("---", elem_classes="divider-line")

        # --- Section 3: What Does AstaBench Include? ---
        gr.HTML(
            """
            <h2>What Does AstaBench Include?</h2>
            <p>
            AstaBench includes a rigorous agents evaluation framework and a suite of benchmarks consisting of over 2,400 problems across 11 benchmarks, organized into four core categories:
            </p>
            <ul class="info-list">
                <li>Literature Understanding</li>
                <li>Code & Execution</li>
                <li>Data Analysis</li>
                <li>End-to-End Discovery</li>
            </ul>
            <p>
            Plus: a large suite of integrated agents and leaderboards with results from extensive evaluation of agents and models.
            </p>
            <p>
            🔍 Learn more in the <a href="https://allenai.org/blog/astabench" target="_blank" class="primary-link-button">AstaBench technical blog post</a>
            </p>
            """
        )
        gr.Markdown("---", elem_classes="divider-line")

        # --- Section 4: Understanding the Leaderboards ---
        gr.HTML(
            """
            <h2>Understanding the Leaderboards</h2>
            <p>
            The AstaBench Overall Leaderboard provides a high-level view of overall agent performance and efficiency:
            </p>
            <ul class="info-list">
                <li>Overall score: A macro-average of the four category-level averages (equal weighting)</li>
                <li>Overall cost: Average cost per task, aggregated only across benchmarks with reported cost</li>
            </ul>
            <p>
            Each category leaderboard provides:
            </p>
            <ul class="info-list">
                <li>Average score and cost for that category (macro-averaged across the benchmarks in the category)</li>
                <li>A breakdown by individual benchmarks</li>
            </ul>
            """
        )
        gr.Markdown("---", elem_classes="divider-line")

        # --- Section 5: Scoring & Aggregation ---
        gr.HTML(
            """
            <h2>Scoring & Aggregation</h2>
            <p>
            AstaBench encourages careful, transparent evaluation. Here's how we handle scoring, cost, and partial results:
            </p>

            <h3>Scores</h3>
            <ul class="info-list">
                <li>Each benchmark returns an average score based on per-problem scores</li>
                <li>All scores are aggregated upward using macro-averaging</li>
                <li>Partial completions are included (even with poor performance)</li>
            </ul>

            <h3>Cost</h3>
            <ul class="info-list">
                <li>Costs are reported in USD per task.</li>
                <li>Benchmarks without cost data are excluded from cost averages</li>
                <li>In scatter plots, agents without cost are plotted to the far right and clearly marked.</li>
            </ul>

            <p>
            <em>Note: Cost values reflect pricing and infrastructure conditions at a fixed point in time. We recognize that compute costs may change over time and vary by provider, and are actively working on methods to keep costs up-to-date and normalized for fair comparisons.</em>
            </p>

            <h3>Coverage</h3>
            <ul class="info-list">
                <li>Main leaderboard: category coverage (X/4)</li>
                <li>Category view: benchmark coverage (X/Y)</li>
                <li>Incomplete coverage is flagged visually</li>
            </ul>

            <p>
            These design choices ensure fair comparison while penalizing cherry-picking and omissions.
            </p>
            """
        )
        gr.Markdown("---", elem_classes="divider-line")

        # --- Section 6: Learn More ---
        gr.HTML(
            """
            <div class="learn-more-section">
                <h2>Learn More</h2>
                <div class="link-buttons-container">

                    <a href="https://allenai.org/blog/astabench" target="_blank" class="link-button">
                        <span style="color:#0fcb8c;">AstaBench technical blog post</span>
                        <span class="external-link-icon">↗</span>
                    </a>

                    <a href="/submit" target="_blank" class="link-button">
                        <span style="color:#0fcb8c;">Submit an agent for evaluation</span>
                        <span class="external-link-icon">↗</span>
                    </a>

                </div>
            </div>
            """
        )
    # Floating feedback button
    floating_feedback_button_html = """
    <div>
        <a id="feedback-button" href="https://docs.google.com/forms/d/e/1FAIpQLSfJdVkD62aPYh8XehN2FrSeHUWt488Ejc-QdtuZn5NZ3eNoxA/viewform">Have feedback?</a>
    </div>
    """
    gr.HTML(floating_feedback_button_html)
```
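The About page describes the aggregation rules in prose: category scores are macro-averaged benchmark scores, the overall score is an equal-weight macro-average of the four category averages, and cost is averaged only over benchmarks that report one. A minimal sketch of those rules (the function names are illustrative, not part of the commit):

```python
def macro_average(values):
    """Unweighted mean of a non-empty list of numbers."""
    return sum(values) / len(values)


def overall_score(categories):
    """Equal-weight macro-average of per-category averages.

    categories: dict mapping category name -> list of benchmark scores.
    """
    return macro_average([macro_average(scores) for scores in categories.values()])


def overall_cost(costs):
    """Average cost per task over benchmarks that report a cost.

    costs: list of per-benchmark costs; None means the benchmark
    reported no cost and is excluded from the average.
    """
    reported = [c for c in costs if c is not None]
    return macro_average(reported) if reported else None
```

For example, category averages of 0.6, 0.4, 0.7, and 0.2 give an overall score of 0.475, and a benchmark with no reported cost simply drops out of the cost denominator rather than counting as zero.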
aliases.py
ADDED
@@ -0,0 +1,23 @@
```python
from agenteval.config import (
    OPENNESS_OPEN_SOURCE_OPEN_WEIGHTS as CANONICAL_OPENNESS_OPEN_SOURCE_OPEN_WEIGHTS,
    OPENNESS_OPEN_SOURCE_CLOSED_WEIGHTS as CANONICAL_OPENNESS_OPEN_SOURCE_CLOSED_WEIGHTS,
    OPENNESS_CLOSED_API_AVAILABLE as CANONICAL_OPENNESS_CLOSED_API_AVAILABLE,
    OPENNESS_CLOSED_UI_ONLY as CANONICAL_OPENNESS_CLOSED_UI_ONLY,
    TOOL_USAGE_STANDARD as CANONICAL_TOOL_USAGE_STANDARD,
    TOOL_USAGE_CUSTOM_INTERFACE as CANONICAL_TOOL_USAGE_CUSTOM_INTERFACE,
    TOOL_USAGE_FULLY_CUSTOM as CANONICAL_TOOL_USAGE_FULLY_CUSTOM,
)


OPENNESS_ALIASES = {
    CANONICAL_OPENNESS_OPEN_SOURCE_OPEN_WEIGHTS: {"Open Source + Open Weights"},
    CANONICAL_OPENNESS_OPEN_SOURCE_CLOSED_WEIGHTS: {"Open Source"},
    CANONICAL_OPENNESS_CLOSED_API_AVAILABLE: {"API Available"},
    CANONICAL_OPENNESS_CLOSED_UI_ONLY: {"Closed"},
}

TOOL_USAGE_ALIASES = {
    CANONICAL_TOOL_USAGE_STANDARD: set(),  # `{}` would be an empty dict, not an empty set
    CANONICAL_TOOL_USAGE_CUSTOM_INTERFACE: {"Custom with Standard Search"},
    CANONICAL_TOOL_USAGE_FULLY_CUSTOM: {"Fully Custom"},
}
```
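The alias tables map each canonical openness/tool-usage value to the set of display labels that should resolve to it. A hypothetical lookup helper (not part of the commit; the canonical strings below are stand-ins for the `agenteval.config` constants) might use them like this:

```python
# Stand-in alias table; the real keys come from agenteval.config constants.
OPENNESS_ALIASES = {
    "open_source_open_weights": {"Open Source + Open Weights"},
    "open_source_closed_weights": {"Open Source"},
    "closed_api_available": {"API Available"},
    "closed_ui_only": {"Closed"},
}


def to_canonical(label, alias_table):
    """Resolve a display label (or an already-canonical key) to its
    canonical value; raise KeyError for anything unrecognized."""
    for canonical, aliases in alias_table.items():
        if label == canonical or label in aliases:
            return canonical
    raise KeyError(f"unknown label: {label!r}")
```

This keeps old submission files readable even after a display label is renamed, since the old spelling just becomes another member of the alias set.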
app.py
ADDED
|
@@ -0,0 +1,282 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# app.py
|
| 2 |
+
import logging
|
| 3 |
+
|
| 4 |
+
logging.basicConfig(level=logging.WARNING)
|
| 5 |
+
|
| 6 |
+
import gradio as gr
|
| 7 |
+
import urllib.parse
|
| 8 |
+
|
| 9 |
+
from apscheduler.schedulers.background import BackgroundScheduler
|
| 10 |
+
from huggingface_hub import HfApi
|
| 11 |
+
|
| 12 |
+
from config import LEADERBOARD_PATH, LOCAL_DEBUG
|
| 13 |
+
from content import css
|
| 14 |
+
from main_page import build_page as build_main_page
|
| 15 |
+
from literature_understanding import build_page as build_lit_page
|
| 16 |
+
from c_and_e import build_page as build_c_and_e_page
|
| 17 |
+
from data_analysis import build_page as build_data_analysis_page
|
| 18 |
+
from e2e import build_page as build_e2e_page
|
| 19 |
+
from submission import build_page as build_submission_page
|
| 20 |
+
from about import build_page as build_about_page
|
| 21 |
+
|
| 22 |
+
api = HfApi()
|
| 23 |
+
LOGO_PATH = "assets/logo.svg"
|
| 24 |
+
# JavaScripts
|
| 25 |
+
scroll_script = """
|
| 26 |
+
<script>
|
| 27 |
+
function scroll_to_element(id) {
|
| 28 |
+
console.log("Global scroll_to_element called for ID:", id);
|
| 29 |
+
const element = document.querySelector('#' + id);
|
| 30 |
+
if (element) {
|
| 31 |
+
console.log("Element found:", element);
|
| 32 |
+
element.scrollIntoView({ behavior: 'smooth', block: 'start' });
|
| 33 |
+
} else {
|
| 34 |
+
console.error("Error: Element with ID '" + id + "' not found in the document.");
|
| 35 |
+
}
|
| 36 |
+
}
|
| 37 |
+
</script>
|
| 38 |
+
"""
|
| 39 |
+
redirect_script = """
|
| 40 |
+
<script>
|
| 41 |
+
if (window.location.pathname === '/') { window.location.replace('/home'); }
|
| 42 |
+
</script>
|
| 43 |
+
"""
|
| 44 |
+
tooltip_script = """
|
| 45 |
+
<script>
|
| 46 |
+
function initializeSmartTooltips() {
|
| 47 |
+
// Find all tooltip trigger icons
|
| 48 |
+
const tooltipIcons = document.querySelectorAll('.tooltip-icon-legend');
|
| 49 |
+
|
| 50 |
+
tooltipIcons.forEach(icon => {
|
| 51 |
+
// Find the tooltip card associated with this icon
|
| 52 |
+
const tooltipCard = icon.querySelector('.tooltip-card');
|
| 53 |
+
if (!tooltipCard) return;
|
| 54 |
+
|
| 55 |
+
// Move the card to the end of the <body>. This is the KEY to escaping
|
| 56 |
+
// any parent containers that might clip it.
|
| 57 |
+
document.body.appendChild(tooltipCard);
|
| 58 |
+
|
| 59 |
+
// --- MOUSE HOVER EVENT ---
|
| 60 |
+
icon.addEventListener('mouseenter', () => {
|
| 61 |
+
// Get the exact position of the icon on the screen
|
| 62 |
+
const iconRect = icon.getBoundingClientRect();
|
| 63 |
+
// Get the dimensions of the tooltip card
|
| 64 |
+
const cardRect = tooltipCard.getBoundingClientRect();
|
| 65 |
+
|
| 66 |
+
// Calculate the ideal top position (above the icon with a 10px gap)
|
| 67 |
+
const top = iconRect.top - cardRect.height - 10;
|
| 68 |
+
|
| 69 |
+
// --- Smart Centering Logic ---
|
| 70 |
+
// Start by calculating the perfect center
|
| 71 |
+
let left = iconRect.left + (iconRect.width / 2) - (cardRect.width / 2);
|
| 72 |
+
|
| 73 |
+
// Check if it's going off the left edge of the screen
|
| 74 |
+
if (left < 10) {
|
| 75 |
+
left = 10; // Pin it to the left with a 10px margin
|
| 76 |
+
}
|
| 77 |
+
// Check if it's going off the right edge of the screen
|
| 78 |
+
if (left + cardRect.width > window.innerWidth) {
|
| 79 |
+
left = window.innerWidth - cardRect.width - 10; // Pin it to the right
|
| 80 |
+
}
|
| 81 |
+
|
| 82 |
+
// Apply the calculated position and show the card
|
| 83 |
+
tooltipCard.style.top = `${top}px`;
|
| 84 |
+
tooltipCard.style.left = `${left}px`;
|
| 85 |
+
tooltipCard.classList.add('visible');
|
| 86 |
+
});
|
| 87 |
+
|
| 88 |
+
// --- MOUSE LEAVE EVENT ---
|
| 89 |
+
icon.addEventListener('mouseleave', () => {
|
| 90 |
+
// Hide the card
|
| 91 |
+
tooltipCard.classList.remove('visible');
|
| 92 |
+
});
|
| 93 |
+
});
|
| 94 |
+
}
|
| 95 |
+
|
| 96 |
+
// Poll the page until the tooltips exist, then run the initialization.
|
| 97 |
+
const tooltipInterval = setInterval(() => {
|
| 98 |
+
if (document.querySelector('.tooltip-icon-legend')) {
|
| 99 |
+
clearInterval(tooltipInterval);
|
| 100 |
+
initializeSmartTooltips();
|
| 101 |
+
}
|
| 102 |
+
}, 200);
|
| 103 |
+
</script>
|
| 104 |
+
"""
|
| 105 |
+
redirect_submission_on_close_script = """
<script>
function initializeRedirectObserver() {
    const successModal = document.querySelector('#success-modal');

    if (successModal) {
        const observer = new MutationObserver((mutationsList) => {
            for (const mutation of mutationsList) {
                // We only care about changes to the 'class' attribute.
                if (mutation.type === 'attributes' && mutation.attributeName === 'class') {
                    // Check if the 'hide' class has been ADDED to the class list.
                    // This is how Gradio hides the modal.
                    if (successModal.classList.contains('hide')) {
                        console.log("Success modal was closed. Redirecting to homepage...");
                        // This is the command to redirect the browser.
                        window.location.href = '/home';
                    }
                }
            }
        });

        // Tell the observer to watch the modal for attribute changes.
        observer.observe(successModal, { attributes: true });
    }
}

// Polling mechanism to wait for Gradio to build the UI.
const redirectInterval = setInterval(() => {
    if (document.querySelector('#success-modal')) {
        clearInterval(redirectInterval);
        initializeRedirectObserver();
    }
}, 200);
</script>
"""
# --- Theme Definition ---
theme = gr.themes.Base(
    primary_hue=gr.themes.Color(c100="#CFF5E8", c200="#B7EFDD", c300="#9FEAD1", c400="#87E5C5", c50="#E7FAF3", c500="#6FE0BA", c600="#57DBAF", c700="#3FD5A3", c800="#27D09C", c900="#0FCB8C", c950="#0fcb8c"),
    secondary_hue=gr.themes.Color(c100="#FCDCEB", c200="#FBCBE1", c300="#F9BAD7", c400="#F7A8CD", c50="#FDEEF5", c500="#F697C4", c600="#F586BA", c700="#F375B0", c800="#F263A6", c900="#F0529C", c950="#F0529C"),
    neutral_hue=gr.themes.Color(c100="#FDF9F4", c200="#C9C9C3", c300="#B0B5AF", c400="#97A09C", c50="#FAF2E9", c500="#7F8C89", c600="#667876", c700="#344F4F", c800="#1C3A3C", c900="#032629", c950="#032629"),
    font=[gr.themes.GoogleFont('Manrope'), 'ui-sans-serif', 'sans-serif'],
    font_mono=[gr.themes.GoogleFont('Roboto Mono'), 'ui-monospace', 'monospace'],
).set(
    body_text_color='*neutral_950',
    body_text_color_subdued='*neutral_950',
    body_text_color_subdued_dark='*neutral_50',
    body_text_color_dark='*neutral_50',
    background_fill_primary='*neutral_50',
    background_fill_primary_dark='*neutral_900',
    background_fill_secondary='*neutral_100',
    background_fill_secondary_dark='*neutral_800',
    border_color_accent='*secondary_900',
    border_color_accent_subdued='*neutral_400',
    border_color_accent_subdued_dark='*neutral_400',
    color_accent='*primary_900',
    color_accent_soft='*neutral_200',
    color_accent_soft_dark='*neutral_800',
    link_text_color='*secondary_900',
    link_text_color_dark='*primary_900',
    link_text_color_active_dark='*primary_600',
    link_text_color_hover_dark='*primary_700',
    link_text_color_visited_dark='*primary_600',
    table_even_background_fill='*neutral_100',
    table_even_background_fill_dark='*neutral_800',
    button_primary_background_fill='*secondary_900',
    button_primary_background_fill_dark='*primary_900',
    button_primary_background_fill_hover='*secondary_600',
    button_primary_background_fill_hover_dark='*primary_600',
    button_secondary_background_fill="#9FEAD1",
    button_secondary_background_fill_dark="#9FEAD1",
    button_secondary_text_color="*neutral_900",
    button_secondary_text_color_dark="*neutral_900",
    block_title_text_color="*neutral_900",
    button_primary_text_color='*neutral_900',
    block_title_text_color_dark="#ffffff",
    button_primary_text_color_dark='*neutral_900',
    block_border_color="#032629",
    block_border_color_dark="#9fead1",
    block_background_fill_dark="#032629",
    block_background_fill="#FAF2E9",
    checkbox_label_text_color="#032629",
    checkbox_label_background_fill="#D8D6CF",
    checkbox_label_background_fill_dark="#254243",
    checkbox_background_color_selected="#F0529C",
    checkbox_background_color_selected_dark="#0FCB8C",
)
try:
    with open(LOGO_PATH, "r") as f:
        svg_content = f.read()
    encoded_svg = urllib.parse.quote(svg_content)
    home_icon_data_uri = f"data:image/svg+xml,{encoded_svg}"
except FileNotFoundError:
    print(f"Warning: Home icon file not found at {LOGO_PATH}.")
    home_icon_data_uri = "none"

# --- This is the final CSS ---
final_css = css + f"""
/* --- Find the "Home" button and replace its text with an icon --- */
.nav-holder nav a[href$="/"] {{
    display: none !important;
}}
.nav-holder nav a[href*="/home"] {{
    grid-row: 1 !important;
    grid-column: 1 !important;
    justify-self: start !important;
    display: flex !important;
    align-items: center !important;
    justify-content: center !important;

    /* 2. Hide the original "Home" text */
    font-size: 0 !important;
    text-indent: -9999px;

    /* 3. Apply the icon as the background */
    background-image: url("{home_icon_data_uri}") !important;
    background-size: contain !important;
    background-repeat: no-repeat !important;
    background-position: center !important;

    width: 240px !important;
    height: 50px !important;
    padding: 0 !important;
    border: none !important;
    outline: none !important;
}}
"""
# --- Gradio App Definition ---
demo = gr.Blocks(
    theme=theme,
    css=final_css,
    head=scroll_script + redirect_script + tooltip_script + redirect_submission_on_close_script,
    title="OpenHands Index",
)
with demo.route("Home", "/home"):
    build_main_page()

with demo.route("Literature Understanding", "/literature-understanding"):
    build_lit_page()

with demo.route("Code & Execution", "/code-execution"):
    build_c_and_e_page()

with demo.route("Data Analysis", "/data-analysis"):
    build_data_analysis_page()

with demo.route("End-to-End Discovery", "/discovery"):
    build_e2e_page()

with demo.route("About", "/about"):
    build_about_page()

with demo.route("🚀 Submit an Agent", "/submit"):
    build_submission_page()

# --- Scheduler and Launch ---
def restart_space_job():
    print("Scheduler: Attempting to restart space.")
    try:
        api.restart_space(repo_id=LEADERBOARD_PATH)
        print("Scheduler: Space restart request sent.")
    except Exception as e:
        print(f"Scheduler: Error restarting space: {e}")

scheduler = BackgroundScheduler(timezone="UTC")
scheduler.add_job(restart_space_job, "interval", hours=1)
scheduler.start()


# Launch the Gradio app
if __name__ == "__main__":
    if LOCAL_DEBUG:
        print("Launching in LOCAL_DEBUG mode.")
        demo.launch(debug=True, allowed_paths=["assets"], favicon_path="assets/favicon/favicon.ico")
    else:
        print("Launching in Space mode.")
        # For Spaces, share=False is typical unless specific tunneling is needed.
        # debug=True can be set to False for a "production" Space.
        demo.launch(server_name="0.0.0.0", server_port=7860, debug=True, share=False, allowed_paths=["assets"], favicon_path="assets/favicon/favicon.ico")
assets/api-custom.svg ADDED
assets/api-equivalent.svg ADDED
assets/api-legend.svg ADDED
assets/api-standard.svg ADDED
assets/c-custom.svg ADDED
assets/c-equivalent.svg ADDED
assets/c-legend.svg ADDED
assets/c-standard.svg ADDED
assets/code-execution.svg ADDED
assets/custom-legend.svg ADDED
assets/data-analysis.svg ADDED
assets/ellipse-coral.svg ADDED
assets/ellipse-pink.svg ADDED
assets/ellipse-white.svg ADDED
assets/ellipse-yellow.svg ADDED
assets/end-to-end-discovery.svg ADDED
assets/equivalent-legend.svg ADDED
assets/favicon/favicon.ico ADDED
assets/five-point-star.svg ADDED
assets/four-point-star.svg ADDED
assets/just-icon.svg ADDED
assets/literature-understanding.svg ADDED
assets/logo.svg ADDED
assets/openhands-logo.svg ADDED
assets/os-custom.svg ADDED
assets/os-equivalent.svg ADDED
assets/os-legend.svg ADDED
assets/os-ow-custom.svg ADDED
assets/os-ow-equivalent.svg ADDED
assets/os-ow-legend.svg ADDED
assets/os-ow-standard.svg ADDED
assets/os-standard.svg ADDED
assets/overall.svg ADDED
assets/pareto.svg ADDED
assets/standard-legend.svg ADDED
assets/three-point-star.svg ADDED
assets/trophy.svg ADDED
assets/up-arrow.svg ADDED
c_and_e.py ADDED
@@ -0,0 +1,9 @@
import gradio as gr
from content import CODE_EXECUTION_DESCRIPTION
from category_page_builder import build_category_page

# Define the category for this page
CATEGORY_NAME = "Code & Execution"

def build_page():
    build_category_page(CATEGORY_NAME, CODE_EXECUTION_DESCRIPTION)
category_page_builder.py ADDED
@@ -0,0 +1,105 @@
import gradio as gr
import pandas as pd

# Import our UI factories and the data loader
from ui_components import create_leaderboard_display, create_benchmark_details_display, get_full_leaderboard_data, create_sub_navigation_bar

CATEGORY_DIAGRAM_MAP = {
    "Literature Understanding": "assets/literature-understanding.svg",
    "Code & Execution": "assets/code-execution.svg",
    "Data Analysis": "assets/data-analysis.svg",
    "End-to-End Discovery": "assets/end-to-end-discovery.svg",
}

def build_category_page(CATEGORY_NAME, PAGE_DESCRIPTION):
    with gr.Column(elem_id="page-content-wrapper"):
        validation_df, validation_tag_map = get_full_leaderboard_data("validation")
        test_df, test_tag_map = get_full_leaderboard_data("test")
        with gr.Row(elem_id="intro-row"):

            with gr.Column(scale=1):
                gr.HTML(f'<h2>AstaBench {CATEGORY_NAME} Leaderboard <span style="font-weight: normal; color: inherit;">(Aggregate)</span></h2>', elem_id="main-header")
                with gr.Column(elem_id="validation_nav_container", visible=False) as validation_nav_container:
                    create_sub_navigation_bar(validation_tag_map, CATEGORY_NAME, validation=True)

                with gr.Column(elem_id="test_nav_container", visible=True) as test_nav_container:
                    create_sub_navigation_bar(test_tag_map, CATEGORY_NAME)

                gr.Markdown(PAGE_DESCRIPTION, elem_id="intro-category-paragraph")

            # --- The Right Column ---
            with gr.Column(scale=1):
                image_path = CATEGORY_DIAGRAM_MAP.get(CATEGORY_NAME)
                if image_path:
                    gr.Image(
                        value=image_path,
                        show_label=False,
                        show_download_button=False,
                        show_fullscreen_button=False,
                        show_share_button=False,
                        interactive=False,
                        elem_id="diagram-image"
                    )
        # --- This page now has two main sections: Validation and Test ---
        with gr.Tabs():
            with gr.Tab("Results: Test Set") as test_tab:
                # Repeat the process for the "test" split
                if not test_df.empty:
                    gr.Markdown("**Test Set** results are reserved for final assessment. This helps ensure that the agent generalizes well to unseen problems.")
                    create_leaderboard_display(
                        full_df=test_df,
                        tag_map=test_tag_map,
                        category_name=CATEGORY_NAME,
                        split_name="test"
                    )
                    create_benchmark_details_display(
                        full_df=test_df,
                        tag_map=test_tag_map,
                        category_name=CATEGORY_NAME,
                        validation=False,
                    )
                else:
                    gr.Markdown("No data available for test split.")
            with gr.Tab("Results: Validation Set") as validation_tab:
                # 1. Load all necessary data for the "validation" split ONCE.
                if not validation_df.empty:
                    gr.Markdown("**Validation Set** results are used during development to tune and compare agents before final testing.")
                    # 2. Render the main category display using the loaded data.
                    create_leaderboard_display(
                        full_df=validation_df,
                        tag_map=validation_tag_map,
                        category_name=CATEGORY_NAME,
                        split_name="validation"
                    )

                    # 3. Render the detailed breakdown for each benchmark in the category.
                    create_benchmark_details_display(
                        full_df=validation_df,
                        tag_map=validation_tag_map,
                        category_name=CATEGORY_NAME,
                        validation=True,
                    )
                else:
                    gr.Markdown("No data available for validation split.")


    show_validation_js = """
    () => {
        document.getElementById('validation_nav_container').style.display = 'block';
        document.getElementById('test_nav_container').style.display = 'none';
        setTimeout(() => { window.dispatchEvent(new Event('resize')) }, 0);
    }
    """

    # JavaScript to show the TEST nav, hide the VALIDATION nav, AND fix the plots.
    show_test_js = """
    () => {
        document.getElementById('validation_nav_container').style.display = 'none';
        document.getElementById('test_nav_container').style.display = 'block';
    }
    """

    # Assign the pure JS functions to the select events. No Python `fn` is needed.
    validation_tab.select(fn=None, inputs=None, outputs=None, js=show_validation_js)
    test_tab.select(fn=None, inputs=None, outputs=None, js=show_test_js)

    return validation_nav_container, test_nav_container
config.py ADDED
@@ -0,0 +1,22 @@
import os

LOCAL_DEBUG = not (os.environ.get("system") == "spaces")
CONFIG_NAME = os.getenv("HF_CONFIG", "1.0.0-dev1")  # This corresponds to 'config' in LeaderboardViewer
IS_INTERNAL = os.environ.get("IS_INTERNAL", "false").lower() == "true"

# OpenHands Index datasets
CONTACT_DATASET = "OpenHands/openhands-index-contact-info"

if IS_INTERNAL:
    # datasets backing the internal leaderboard
    SUBMISSION_DATASET = "OpenHands/openhands-index-internal-submissions"
    RESULTS_DATASET = "OpenHands/openhands-index-internal-results"
    LEADERBOARD_PATH = "OpenHands/openhands-index-internal-leaderboard"
else:
    # datasets backing the public leaderboard
    SUBMISSION_DATASET = "OpenHands/openhands-index-submissions"
    RESULTS_DATASET = "OpenHands/openhands-index-results"
    LEADERBOARD_PATH = "OpenHands/openhands-index"

DATA_DIR = "/tmp/oh_index/data/" + CONFIG_NAME
EXTRACTED_DATA_DIR = os.path.join(DATA_DIR, "extracted")
content.py ADDED
@@ -0,0 +1,934 @@
import re

def create_gradio_anchor_id(text: str, validation) -> str:
    """
    Replicates the ID format created by gr.Markdown(header_links=True).
    Example: "Paper Finder" -> "h-paper-finder-leaderboard"
    (validation tables get a "-leaderboard-1" suffix instead).
    """
    text = text.lower()
    text = re.sub(r'\s+', '-', text)  # Replace spaces with hyphens
    text = re.sub(r'[^\w-]', '', text)  # Remove non-word characters
    if validation:
        return f"h-{text}-leaderboard-1"
    return f"h-{text}-leaderboard"
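A quick way to sanity-check the anchor format is to run the helper standalone. The snippet below is a self-contained copy of the function above (duplicated here only so it runs on its own), showing the two suffix variants:

```python
import re

def create_gradio_anchor_id(text: str, validation) -> str:
    # Self-contained copy of the helper above: lowercase, hyphenate
    # whitespace, strip punctuation, append the leaderboard suffix.
    text = text.lower()
    text = re.sub(r'\s+', '-', text)
    text = re.sub(r'[^\w-]', '', text)
    if validation:
        return f"h-{text}-leaderboard-1"
    return f"h-{text}-leaderboard"

# Test-set tables get the plain suffix; validation tables get "-1",
# matching the duplicate-header IDs Gradio generates.
print(create_gradio_anchor_id("Paper Finder", validation=False))  # h-paper-finder-leaderboard
print(create_gradio_anchor_id("Paper Finder", validation=True))   # h-paper-finder-leaderboard-1
```

One subtlety worth knowing: punctuation is removed after hyphenation, so a name like "Code & Execution" yields a double hyphen ("h-code--execution-leaderboard"), which is what the generated header IDs actually contain.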


TITLE = """<h1 align="left" id="space-title">AstaBench Leaderboard</h1>"""

INTRO_PARAGRAPH = """
<p>
<strong>AstaBench</strong> provides an aggregated view of agent performance and efficiency across all benchmarks in all four categories. We report:
</p>

<ul class="info-list">
<li>
<strong>Overall score:</strong> A macro-average of the four category-level average scores. Each category contributes equally, regardless of how many benchmarks it includes. This ensures fair comparisons across agents with different domain strengths.
</li>
<li>
<strong>Overall cost:</strong> A macro-average of the agent’s cost per problem across all categories, in USD. Each category contributes equally.
</li>
</ul>

<p>
This view is designed for quick comparison of general-purpose scientific agents. For more details on how we calculate scores and cost, please see the <a href="/about" style="color: #0FCB8C; text-decoration: underline;">About</a> Page.
</p>
"""
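The overall score described in the intro is a plain macro-average over categories. As a minimal sketch of that computation (illustrative values, not real leaderboard data or the leaderboard's actual implementation):

```python
def overall_score(category_averages):
    # Macro-average: each category contributes equally,
    # regardless of how many benchmarks it contains.
    return sum(category_averages.values()) / len(category_averages)

# Illustrative category-level averages (not real leaderboard data).
scores = {
    "Literature Understanding": 1.0,
    "Code & Execution": 0.5,
    "Data Analysis": 0.5,
    "End-to-End Discovery": 0.0,
}
print(overall_score(scores))  # 0.5
```

Because each category counts once, an agent strong in a category with many benchmarks gets no extra weight over one strong in a single-benchmark category; overall cost is averaged the same way.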
SCATTER_DISCLAIMER = """
**Note:** Agents without cost data are displayed to the right of the vertical divider line.
"""
PARETO_DISCLAIMER = """
Agent names shown in green are Pareto optimal, meaning those agents achieve the best performance for their cost.
"""
LIT_DESCRIPTION = """
The **Literature Understanding** category evaluates how well agents comprehend and interact with scientific literature—testing their ability to find research papers, assess citation quality, extract information from text, and more.
<br><br>
The scores shown below reflect performance aggregated across five distinct benchmarks, each targeting a different aspect of literature-based reasoning.
<br><br>
For detailed results, use the links above to explore individual benchmarks.
<br>
"""
CODE_EXECUTION_DESCRIPTION = """
The **Code & Execution** category in AstaBench includes tasks that evaluate an agent’s ability to write, modify, and run code in realistic research scenarios. Unlike literature tasks—which only require read-only tools and can sometimes even be solved by a language model alone—these problems often require the agent to manipulate a machine environment with tools: reading input files, executing code, and writing outputs to specific files in the required format.
<br><br>
The scores in this category are aggregated from three distinct benchmarks, each targeting different facets of scientific coding and execution. Together, these benchmarks evaluate whether an agent can function as a hands-on scientific assistant—not just by reasoning about code, but by running it in real-world contexts.
<br><br>
For detailed results, use the links above to explore individual benchmark pages.
<br>
"""
DATA_ANALYSIS_DESCRIPTION = """
The **Data Analysis** category evaluates agents on their ability to analyze structured datasets and generate meaningful scientific hypotheses. It currently includes a single benchmark, DiscoveryBench, so the category-level scores are the same as the benchmark-level results.
<br><br>
As additional benchmarks are added in the future, this category will expand to cover a broader range of data-driven reasoning tasks across scientific domains.
<br>
"""
DISCOVERY_DESCRIPTION = """
The **End-to-End Discovery** category tests whether agents can carry out a complete scientific workflow, from task description to experiment design, code execution, results analysis, and report writing. These tasks require agents to integrate multiple capabilities, producing not just answers but full research artifacts.
<br><br>
Scores in this category are aggregated from two benchmarks, providing the first standardized way to evaluate automated scientific discovery (ASD) agents across all stages of the research process. Use the links above to explore individual benchmark pages.
<br>
"""
SUBMISSION_CONFIRMATION = """
**Your agent has been submitted to AstaBench for evaluation.**
<br><br>
🙏 Thanks for contributing!
<br><br>
You'll receive a confirmation email from our team within 2 business days with next steps. We will reach out to you directly if further information is needed.
<br><br>
We appreciate your support in advancing scientific AI.
"""

# External URLs for benchmark descriptions
SCHOLAR_QA_CS_URL = "https://www.semanticscholar.org/paper/OpenScholar%3A-Synthesizing-Scientific-Literature-LMs-Asai-He/b40df4b273f255b3cb5639e220c8ab7b1bdb313e"
LITQA2_URL = "https://www.semanticscholar.org/paper/Language-agents-achieve-superhuman-synthesis-of-Skarlinski-Cox/fa5f9aa1cb6f97654ca8e6d279ceee1427a87e68"
ARXIV_DIGESTABLES_URL = "https://www.semanticscholar.org/paper/ArxivDIGESTables%3A-Synthesizing-Scientific-into-Newman-Lee/c7face35e84f2cb04fb1600d54298799aa0ed189"
SUPER_URL = "https://www.semanticscholar.org/paper/SUPER%3A-Evaluating-Agents-on-Setting-Up-and-Tasks-Bogin-Yang/053ef8299988680d47df36224bfccffc817472f1"
CORE_BENCH_URL = "https://www.semanticscholar.org/paper/CORE-Bench%3A-Fostering-the-Credibility-of-Published-Siegel-Kapoor/4c913d59d150fe7581386b87dfd9f90448a9adee"
DS1000_URL = "https://arxiv.org/abs/2211.11501"
DISCOVERY_BENCH_URL = "https://www.semanticscholar.org/paper/DiscoveryBench%3A-Towards-Data-Driven-Discovery-with-Majumder-Surana/48c83799530dc523ee01e6c1c40ad577d5c10a16"

# Helper function to create external links
def external_link(url, text, is_s2_url=False):
    url = f"{url}?utm_source=asta_leaderboard" if is_s2_url else url
    return f"<a href='{url}' target='_blank' rel='noopener noreferrer'>{text}</a>"

def internal_leaderboard_link(text, validation):
    anchor_id = create_gradio_anchor_id(text, validation)
    return f"<a href='#{anchor_id}'>{text}</a>"

# Function to get benchmark descriptions with validation flag
def get_benchmark_description(benchmark_name, validation):
    descriptions = {
        'PaperFindingBench': (
            "PaperFindingBench assesses an agent's ability to locate sets of papers based on a natural language "
            "description that may involve both the papers' content and metadata, such as the author or publication year."
        ),
        'LitQA2-FullText-Search': (
            f"A version of {internal_leaderboard_link('LitQA2-FullText', validation)} that isolates the retrieval aspect of the task. "
            f"This benchmark features the same multiple-choice questions as {internal_leaderboard_link('LitQA2-FullText', validation)}, but the agent is evaluated not on answering "
            "the actual question but on providing a ranked list of papers in which the answer is likely to be found."
        ),
        'ScholarQA-CS2': (
            "ScholarQA-CS2 assesses long-form model responses to literature review questions in the domain of computer science. "
            "Answers are expected to be comprehensive reports, such as those produced by deep research systems. "
            f"This benchmark advances on the previously released {external_link(SCHOLAR_QA_CS_URL, 'ScholarQA-CS', is_s2_url=True)} "
            "by using queries from real-world usage, and introducing new evaluation methods for coverage and precision "
            "of both the report text and its citations."
        ),
        'LitQA2-FullText': (
            f"{external_link(LITQA2_URL, 'LitQA2', is_s2_url=True)}, a benchmark introduced by FutureHouse, gauges a model's ability to answer questions that require document retrieval from the scientific literature. "
            "It consists of multiple-choice questions that necessitate finding a unique paper and analyzing its detailed full text to spot precise information; these questions cannot be answered from a paper’s abstract. "
            "While the original version of the benchmark provided for each question the title of the paper in which the answer can be found, it did not specify the overall collection to search over. In our version, "
            "we search over the index we provide as part of the Asta standard toolset. The “-FullText” suffix indicates we consider only the subset of LitQA2 questions for which "
            "the full-text version of the answering paper is open source and available in our index."
        ),
        'ArxivDIGESTables-Clean': (
            f"{external_link(ARXIV_DIGESTABLES_URL, 'ArxivDIGESTables', is_s2_url=True)} assesses the ability of models to construct literature review tables, i.e., tables whose rows are papers and whose columns constitute a set of "
            "aspects used to compare and contrast the papers. The goal is to construct such tables given a set of related papers and a table caption describing the user's goal. Generated tables are evaluated by "
            "comparing them to actual tables published in ArXiv papers. The “-Clean” suffix indicates a curated subset of ArxivDIGESTables which drops tables that are either trivial or impossible to reconstruct from full-texts."
        ),
        'SUPER-Expert': (
            "SUPER-Expert evaluates the capability of models in setting up and executing tasks from low-resource "
            "research repositories—centralized databases containing research data and related materials. "
            f"The \"-Expert\" split is the most challenging split in the {external_link(SUPER_URL, 'original SUPER benchmark', is_s2_url=True)}; "
            "it involves solving reproduction tasks from scratch, without any intermediate hints or details "
            "about the important landmarks involved in each task."
        ),
        'CORE-Bench-Hard': (
            "CORE-Bench-Hard tests computational reproducibility, a task involving reproducing the results of a study "
            "using provided code and data. It consists of both language-only and vision-language challenges across "
            "multiple difficulty levels. "
            f"The \"-Hard\" split is the most challenging split in the original {external_link(CORE_BENCH_URL, 'CORE-Bench benchmark', is_s2_url=True)}, "
            "where only a README file is provided, with no instructions and no auxiliary Dockerfile."
        ),
        'DS-1000': (
            "DS-1000 is an established code generation benchmark containing Python data science coding questions "
            "originally sourced from StackOverflow. It's designed to reflect an array of diverse, realistic, and "
            "practical use cases and directly involves many of the Python libraries commonly used in data science "
|
| 147 |
+
f"and machine learning research. We split the original {external_link(DS1000_URL, 'dataset')} "
|
| 148 |
+
"into 100 validation and 900 test problems."
|
| 149 |
+
),
|
| 150 |
+
'DiscoveryBench': (
|
| 151 |
+
"DiscoveryBench is the first comprehensive benchmark to formalize the multi-step process of data-driven "
|
| 152 |
+
"analysis and discovery (i.e., data loading, transformation, statistical analysis, and modeling). "
|
| 153 |
+
f"Originally introduced {external_link(DISCOVERY_BENCH_URL, 'here', is_s2_url=True)}, it is designed to systematically "
|
| 154 |
+
"evaluate how well current LLMs can replicate or reproduce published scientific findings across diverse "
|
| 155 |
+
"domains, including social science, biology, history, and more."
|
| 156 |
+
),
|
| 157 |
+
'E2E-Bench': (
|
| 158 |
+
"E2E-Bench is the \"decathlon\" of AI-assisted research. It measures whether a system can run the entire "
|
| 159 |
+
"research pipeline, starting with an initial task description, to designing and performing (software) "
|
| 160 |
+
"experiments, to analyzing and writing up the results."
|
| 161 |
+
),
|
| 162 |
+
'E2E-Bench-Hard': (
|
| 163 |
+
f"E2E-Bench-Hard is a more challenging variant of {internal_leaderboard_link('E2E-Bench', validation)}. Tasks are generated using the HypER system, "
|
| 164 |
+
"which identifies research trends and proposes new, underexplored problems. Unlike the regular version, "
|
| 165 |
+
"these tasks are not simplified or curated for accessibility; they are reviewed only for feasibility. "
|
| 166 |
+
"This version is intended to test whether systems can handle more complex and less-structured research "
|
| 167 |
+
f"scenarios, following the same end-to-end process as {internal_leaderboard_link('E2E-Bench', validation)}."
|
| 168 |
+
)
|
| 169 |
+
}
|
| 170 |
+
|
| 171 |
+
return descriptions.get(benchmark_name, "")
|
| 172 |
+
|
| 173 |
+
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""@article{asta-bench,
  title={AstaBench},
  author={AstaBench folks},
  year={2025},
  eprint={TBD.TBD},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  secondaryClass={cs.CL}
}"""

LEGAL_DISCLAIMER_TEXT = """
<h2>Terms and Conditions</h2>
<p>
The Allen Institute for Artificial Intelligence (Ai2) maintains this repository for agent evaluation submissions to AstaBench. To keep AstaBench fair and auditable, all evaluation logs and associated submission files will be made publicly available. This includes your benchmark inputs, model output responses, and other data and information related to your submission as needed to verify the results.
</p>
<br>
<p>
Your submissions to AstaBench will be posted, scored, and ranked on the leaderboard at <a href="https://huggingface.co/spaces/allenai/asta-bench-leaderboard" target="_blank" rel="noopener noreferrer">https://huggingface.co/spaces/allenai/asta-bench-leaderboard</a>. You agree you have the rights to the materials you submit and that you will not share any personal, sensitive, proprietary, or confidential information.
</p>
"""

def format_error(msg):
    return f"<p style='color: red; font-size: 20px; text-align: center;'>{msg}</p>"


def format_warning(msg):
    return f"<p style='color: orange; font-size: 20px; text-align: center;'>{msg}</p>"


def format_log(msg):
    return f"<p style='color: green; font-size: 20px; text-align: center;'>{msg}</p>"


def hyperlink(link_url: str, text: str = "🔗") -> str:
    if not link_url or not isinstance(link_url, str):
        return str(text)  # Or simply "" if link_url is bad
    return f'<a target="_blank" href="{link_url}">{text}</a>'


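For reference, `hyperlink` degrades gracefully when given a missing or non-string URL. A minimal sketch of that behavior (the example URL is a placeholder, not from the leaderboard):

```python
def hyperlink(link_url: str, text: str = "🔗") -> str:
    # Same helper as above: fall back to plain text when the URL is unusable.
    if not link_url or not isinstance(link_url, str):
        return str(text)
    return f'<a target="_blank" href="{link_url}">{text}</a>'

# A valid URL yields an anchor that opens in a new tab.
print(hyperlink("https://example.com", "docs"))
# An empty or None URL degrades to the bare text instead of emitting a broken tag.
print(hyperlink("", "docs"))
print(hyperlink(None, "docs"))
```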
def hf_uri_to_web_url(uri: str) -> str:
    """
    Convert a Hugging Face-style URI like:
        hf://datasets/{namespace}/{repo}/{path...}
    into a public web URL:
        https://huggingface.co/datasets/{namespace}/{repo}/tree/main/{path...}
    """
    prefix = "hf://datasets/"
    if not uri.startswith(prefix):
        raise ValueError("URI must start with 'hf://datasets/'")

    parts = uri[len(prefix) :].split("/", 2)
    if len(parts) < 3:
        raise ValueError("Expected format: hf://datasets/{namespace}/{repo}/{path...}")

    namespace, repo, path = parts
    return f"https://huggingface.co/datasets/{namespace}/{repo}/tree/main/{path}"


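A quick sanity check of the URI conversion above; the namespace and repo names here are hypothetical, chosen only to illustrate that `split("/", 2)` keeps slashes inside the path component intact:

```python
def hf_uri_to_web_url(uri: str) -> str:
    # Same conversion as above: hf://datasets/{namespace}/{repo}/{path...}
    # -> https://huggingface.co/datasets/{namespace}/{repo}/tree/main/{path...}
    prefix = "hf://datasets/"
    if not uri.startswith(prefix):
        raise ValueError("URI must start with 'hf://datasets/'")
    parts = uri[len(prefix):].split("/", 2)
    if len(parts) < 3:
        raise ValueError("Expected format: hf://datasets/{namespace}/{repo}/{path...}")
    namespace, repo, path = parts
    return f"https://huggingface.co/datasets/{namespace}/{repo}/tree/main/{path}"

# The third component may itself contain slashes ("logs/run1"), so nested
# submission paths map to the matching subdirectory of the repo tree.
print(hf_uri_to_web_url("hf://datasets/acme/demo-repo/logs/run1"))
# https://huggingface.co/datasets/acme/demo-repo/tree/main/logs/run1
```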
css = """
/* CSS Color Variables using Gradio theme */
:root {
    --color-primary-green: var(--primary-900); /* #0FCB8C */
    --color-primary-pink: var(--secondary-900); /* #f0529c */
    --color-neutral-light: var(--neutral-200); /* #C9C9C3 */
    --color-background-light: var(--neutral-50); /* #FAF2E9 */
    --color-background-dark: var(--neutral-900); /* #032629 */
    --color-text-light: var(--neutral-50); /* #FAF2E9 */
}

/* This makes space for the Hugging Face header bar, which must be shown on HF Spaces. */
/* FIXME Media queries don't seem to survive rendering. */
/* @media (min-width: 768px) { ... } */
gradio-app {
    padding-top: 65px;
}

/* Global Styles */
h2 {
    overflow: hidden;
}

#intro-paragraph {
    font-size: 18px;
    max-width: 90%;
    padding-left: 35px;
    margin-top: 20px;
}

#intro-paragraph p,
#intro-paragraph li {
    font-size: 16px;
    line-height: 1.8;
}

#intro-paragraph ul {
    margin-top: 20px;
    margin-bottom: 20px;
}

#diagram-image {
    height: 100%;
}

#diagram-image img {
    width: 100%;
    height: 100%;
    object-fit: cover;
}
#intro-category-paragraph {
    font-size: 18px;
    max-width: 90%;
    margin-top: 20px;
}

#intro-category-paragraph p,
#intro-category-paragraph li {
    font-size: 16px;
    line-height: 1.8;
}

#intro-category-paragraph ul {
    margin-top: 20px;
    margin-bottom: 20px;
}

#about-content {
    font-size: 18px;
    max-width: 60%;
    padding-left: 25px;
}
#category-intro {
    font-size: 18px;
    max-width: 60%;
}
#logo-image {
    margin: 0;
    margin-bottom: 30px;
    justify-content: flex-start;
    max-width: 250px;
    height: auto;
}
#page-content-wrapper {
    padding-left: 25px;
}
.table-component {
    height: auto !important;
    max-height: none !important;
}
.table-wrap {
    max-height: none !important;
    height: auto !important;
    overflow-y: visible !important;
}
/* --- New Rules for Table Density --- */
table.gr-table th, table.gr-table td {
    padding: 4px 4px !important;
    width: 1%;
    white-space: nowrap;
}
table.svelte-1e98i6s td {
    vertical-align: top !important;
}
table.gr-table {
    font-size: 14px !important;
}
.html-container {
    padding-top: 0 !important;
}
#scatter-disclaimer {
    overflow: visible !important;
}
#pareto-disclaimer {
    color: #f0529c !important;
}
thead.svelte-1e98i6s th {
    background: white !important;
}
.dark thead.svelte-1e98i6s th {
    background: #091a1a !important;
}
.cell-wrap.svelte-v1pjjd {
    font-family: 'Manrope';
}
nav.svelte-ti537g.svelte-ti537g {
    justify-content: flex-start;
}
.nav-holder {
    padding-left: 20px !important;
}
#legend-markdown span {
    margin-right: 15px !important;
}
#leaderboard-accordion .label-wrap {
    font-size: 1.4rem !important;
    z-index: 10 !important;
    position: relative !important;
}
.dark #leaderboard-accordion .label-wrap {
    color: #0FCB8C !important;
}
.dark block.svelte-1svsvh2 {
    background: #032629 !important;
}
.padding.svelte-phx28p {
    padding: 0 !important;
}
.sub-nav-bar-container {
    display: flex !important;
    flex-wrap: wrap !important;
    align-items: center !important;
    gap: 10px !important;
}
.dark .primary-link-button {
    color: var(--color-primary-green);
}
.primary-link-button {
    background: none;
    border: none;
    padding: 0;
    margin: 0;
    font-family: inherit;
    font-size: 16px;
    color: var(--color-primary-pink);
    text-decoration: none;
    cursor: pointer;
    white-space: nowrap;
}
.primary-link-button:hover {
    text-decoration: underline;
}
.sub-nav-label {
    font-weight: bold;
    font-size: 16px;
    display: flex;
    align-items: center;
}
.wrap-header-df th span {
    white-space: normal !important;
    word-break: normal !important;
    overflow-wrap: break-word !important;
    line-height: 1.2 !important;
    vertical-align: top !important;
    font-size: 12px !important;
    font-family: 'Manrope';
}
.wrap-header-df th {
    height: auto !important;
}
.wrap-header-df .cell-wrap img {
    width: 16px;
    height: 16px;
    vertical-align: middle;
}
#legend-markdown img {
    width: 16px;
    height: 16px;
    vertical-align: middle;
}
/*------ Global tooltip styles ------*/
.tooltip-icon {
    display: inline-block;
    cursor: help;
    position: relative;
}
.tooltip-icon::after {
    content: attr(data-tooltip);
    position: absolute;
    bottom: 125%;
    background-color: #105257;
    color: #fff;
    padding: 10px;
    border-radius: 4px;
    font-size: 12px;
    opacity: 0;
    transition: opacity 0.2s;
    white-space: pre-line;
    width: max-content;
    text-align: left;
    pointer-events: none;
    max-width: 300px;
    left: 50%;
    transform: translateX(-50%);
    z-index: 1000;
}
@media (max-width: 768px) {
    .tooltip-icon::after {
        max-width: 250px;
    }
}
.tooltip-icon:hover::after {
    opacity: 1;
}
/*------ Openness label tooltip styles ------*/
.styler,
#openness-label-html,
#agent-tooling-label-html {
    overflow: visible !important;
}
/*------ Table cell tooltip styles ------*/
.wrap.default.full,
span.wrap[tabindex="0"][role="button"][data-editable="false"] {
    overflow: visible !important;
}

.cell-tooltip-icon::after {
    height: fit-content;
    top: 125%;
}
/*------ Table column description tooltip styles ------*/
#legend-markdown,
#leaderboard-accordion {
    overflow: visible !important;
}

/* --- inside table tooltips --- */
.native-tooltip-icon {
    cursor: help;
    text-decoration: underline dotted 1px;
}
/* Main Nav bar styling */
.nav-holder nav {
    display: grid !important;
    grid-template-columns: auto auto auto auto auto 1fr auto auto !important;
    gap: 10px 20px !important; /* Vertical and horizontal spacing */
    width: 100% !important;
    align-items: center;
}
.nav-holder nav a[href*="about"] {
    grid-row: 1 !important;
    grid-column: 7 !important;
}
.nav-holder nav a[href*="submit"] {
    grid-row: 1 !important;
    grid-column: 8 !important;
    white-space: nowrap !important;
}
/* Divider line between header and category nav */
.nav-holder nav::after {
    content: ''; /* Required for pseudo-elements to appear */
    background-color: #C9C9C3;
    height: 1px;
    grid-row: 2 !important;
    grid-column: 1 / -1 !important;
}

/* Horizontal scrolling for navigation */
.nav-holder nav {
    overflow-x: auto;
    scrollbar-width: none;
    -ms-overflow-style: none;
}
.nav-holder nav::-webkit-scrollbar {
    display: none;
}

/* Category navigation buttons in row 3 */
.nav-holder nav a[href*="literature-understanding"],
.nav-holder nav a[href*="code-execution"],
.nav-holder nav a[href*="data-analysis"],
.nav-holder nav a[href*="discovery"] {
    grid-row: 3 !important;
    justify-self: center !important;
    width: fit-content !important;
    white-space: nowrap;
    flex-shrink: 0;
}

.nav-holder nav a[href*="literature-understanding"] { grid-column: 1 !important; }
.nav-holder nav a[href*="code-execution"] { grid-column: 2 !important; }
.nav-holder nav a[href*="data-analysis"] { grid-column: 3 !important; }
.nav-holder nav a[href*="discovery"] { grid-column: 4 !important; }

/* Navigation hover styles */
.nav-holder nav a[href*="about"]:hover,
.nav-holder nav a[href*="submit"]:hover,
.nav-holder nav a[href*="literature-understanding"]:hover,
.nav-holder nav a[href*="code-execution"]:hover,
.nav-holder nav a[href*="data-analysis"]:hover,
.nav-holder nav a[href*="discovery"]:hover {
    background-color: #FDF9F4;
}

.dark .nav-holder nav a[href*="about"]:hover,
.dark .nav-holder nav a[href*="submit"]:hover,
.dark .nav-holder nav a[href*="literature-understanding"]:hover,
.dark .nav-holder nav a[href*="code-execution"]:hover,
.dark .nav-holder nav a[href*="data-analysis"]:hover,
.dark .nav-holder nav a[href*="discovery"]:hover {
    background-color: #1C3A3C;
}
.benchmark-main-subtitle {
    color: var(--color-primary-green);
    overflow: hidden;
    padding-top: 120px;
}
.benchmark-title {
    color: var(--color-primary-pink);
    margin-top: 50px;
    font-size: 20px;
}
.dark .benchmark-title {
    color: var(--color-primary-green);
}
.benchmark-description {
    margin: 20px 0;
    max-width: 800px;
}
/*------ Submission Page CSS ------*/
#submission-modal .modal-container,
#success-modal .modal-container {
    height: auto;
    max-width: 600px;
}

#submission-modal-content,
#success-modal .submission-modal-content {
    padding: 20px;
    background-color: inherit;
    border-radius: 8px;
    text-align: center;
}

#submission-modal-content p,
#success-modal .submission-modal-content p {
    font-size: 16px;
}

#legal-modal-content {
    padding: 30px;
    background-color: inherit;
    border-radius: 8px;
    text-align: left;
    font-size: 14px;
}

#legal-modal-content h2 {
    text-align: center;
}
#legal-modal-content button {
    width: fit-content;
}
.spinner-container {
    display: flex;
    flex-direction: column;
    align-items: center;
    justify-content: center;
    padding: 30px;
}

.spinner {
    width: 50px;
    height: 50px;
    border: 5px solid #dee2e6;
    border-top: 5px solid #007bff;
    border-radius: 50%;
    animation: spin 1s linear infinite;
    margin-bottom: 20px;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

#submission-page-container {
    max-width: 800px;
    margin: 0 auto;
}

#submission-file-label {
    padding: 10px;
}

#submission-button {
    max-width: fit-content;
    font-size: 14px;
}

.custom-form-group {
    border: 1px solid #000 !important;
    border-radius: 4px !important;
    padding: 24px !important;
    overflow: visible !important;
}

#openness-label-html,
#agent-tooling-label-html,
#agent-info-label-html,
#submitter-info-label-html,
#username-label-html,
#email-label-html,
#role-label-html {
    padding-left: 12px;
}

.form-label {
    margin: 4px 0px 0px 6px;
}

.form-label-fieldset {
    padding-top: 10px !important;
}

#agent-tooling-label-html {
    padding-top: 6px;
}

.custom-form-group,
.styler {
    background: none;
}

#feedback-button {
    display: inline-block;
    background-color: #345d60;
    color: white;
    border: none;
    border-radius: 4px;
    padding: 15px 20px;
    font-size: 16px;
    cursor: pointer;
    transition: all 0.3s ease;
    text-decoration: none;
}

#feedback-button:hover {
    background-color: #5d888b;
    transform: translateY(-2px);
    box-shadow: 0 6px 12px rgba(0,0,0,0.3);
}
.dark #main-header h2 {
    color: #0fcb8c;
}
#main-header h2 {
    color: #f0529c;
}

/* --- New HTML-Based Tooltip Styles --- */
.tooltip-icon-legend {
    position: relative;
    cursor: help;
    display: inline-block;
}

/* The HTML pop-up card tooltips. */
.tooltip-card {
    /* Hiding mechanism */
    opacity: 0;
    visibility: hidden;
    transition: opacity 0.2s;
    pointer-events: none;
    /* Card appearance */
    position: fixed;
    z-index: 1000;
    background-color: #083c40;
    color: #e5e7eb;
    border-radius: 12px;
    padding: 15px;
    width: max-content;
    max-width: 400px;
    text-align: left;
}
.tooltip-card.visible {
    opacity: 1;
    visibility: visible;
}
.tooltip-card h3 {
    font-size: 18px;
    color: #fff;
    margin-top: 0;
    margin-bottom: 12px;
}
.tooltip-card .tooltip-description {
    margin-bottom: 20px;
    line-height: 1.3;
}
.tooltip-card .tooltip-items-container {
    display: flex;
    flex-direction: column;
    gap: 10px;
}
.tooltip-card .tooltip-legend-item {
    display: flex;
    align-items: flex-start;
    gap: 10px;
}
.tooltip-card .tooltip-legend-item img {
    width: 20px;
    height: 20px;
    margin-top: 2px;
}
.tooltip-card .tooltip-legend-item div {
    display: flex;
    flex-direction: column;
}
.tooltip-card .tooltip-legend-item strong {
    font-weight: 600;
    color: #fff;
}
.tooltip-card .tooltip-legend-item span {
    font-size: 13px;
    line-height: 1.3;
}
.tooltip-sub-list {
    list-style-type: '• ';
    padding-left: 18px;
    font-size: 13px;
    line-height: 1.3;
    display: flex;
    flex-direction: column;
}
.table-legend-item {
    display: flex;
    align-items: center;
    white-space: nowrap;
    margin-top: 8px;
    flex-wrap: wrap;
}

/* About Page CSS */
#about-page-content-wrapper {
    margin-left: auto;
    margin-right: auto;
    max-width: 800px;
    padding: 0 24px;
    display: flex;
    flex-direction: column;
    gap: 40px;
    margin-top: 40px;
    opacity: 85%;
    margin-bottom: 60px;
}
.link-buttons-container {
    display: flex;
    flex-wrap: wrap; /* Allows buttons to stack on very narrow screens */
    gap: 16px;
    margin-top: 16px;
}
.link-button {
    display: flex;
    justify-content: space-between;
    align-items: center;
    flex-grow: 1;
    background-color: #083c40;
    padding: 16px 20px;
    font-weight: 600;
    border-radius: 12px;
    text-decoration: none;
    transition: background-color 0.2s ease-in-out;
}
.link-button:hover {
    background-color: #0a4c52;
}
.external-link-icon {
    font-size: 20px;
    line-height: 1;
    margin-left: 12px;
}

#leaderboard-accordion table {
    width: auto !important;
    margin-right: auto !important;
}
.info-list {
    padding-left: 20px;
}

/* Smooth scrolling for the entire page */
html {
    scroll-behavior: smooth;
}
/* Home Page Styling */
.diagram-placeholder {
    width: 100%;
    height: 100%;
    min-height: 250px;
    display: flex;
    align-items: center;
    justify-content: center;
    background-color: #FAF2E9;
    color: #F0529C;
    border-radius: 8px;
    font-size: 14px;
    text-align: center;
}
/* Responsive behavior for smaller screens */
@media (max-width: 900px) {
    #intro-row {
        flex-direction: column;
    }
}
/* Plot legend styles */
.plot-legend-container {
    min-height: 572px;
    background-color: #fff;
    padding: 24px 32px;
    border: 1px solid black;
    border-radius: 4px;
}

.dark .plot-legend-container {
    background: rgba(250, 242, 233, 0.1);
    border-color: rgb(159, 234, 209);
}

#plot-legend-logo {
    margin-bottom: 24px;
}

#plot-legend-logo img {
    height: 19px;
}

.plot-legend-category-heading {
    font-size: 16px;
    font-weight: 700;
}

.plot-legend-item {
    display: flex;
    margin-top: 8px;
}

.plot-legend-item-text .description {
    color: #888;
    font-size: 12px;
}

.plot-legend-item-svg {
    margin-top: 3px;
    width: 14px;
    height: 14px;
    margin-right: 8px;
}

.plot-legend-tooling-svg {
    height: 16px;
    width: 16px;
    margin-top: 2px;
}

#plot-legend-item-pareto-svg {
    width: 18px;
    height: 18px;
    margin-right: 2px;
}
h3 .header-link-icon {
    font-size: 12px;
    vertical-align: text-top;
    margin-left: 6px;
    text-decoration: none;
}

/* Targets all "overall stats" columns in the main leaderboard for each category */
#main-leaderboard td:nth-child(6) .prose,
#main-leaderboard td:nth-child(7) .prose {
    font-weight: 700 !important;
}
"""