davanstrien (HF Staff) and Claude Opus 4.6 (1M context) committed
Commit 800963f · 1 Parent(s): 2f39cf8

Add AGENTS.md for coding agent discovery

Agent-facing doc that points at live sources of truth (CLI --help,
hf jobs hardware, model card eval results API) rather than duplicating
info that goes stale. Covers script selection, dataset vs bucket I/O
patterns, common flags, and gotchas.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Files changed (1): AGENTS.md (+92, -0)
AGENTS.md ADDED
@@ -0,0 +1,92 @@
# For coding agents

This repo is a curated collection of ready-to-run OCR scripts — each one self-contained
via UV inline metadata, runnable over the network via `hf jobs uv run`. No clone, no
install, no setup.

## Don't rely on this doc — discover the current state

This file will go stale. Prefer these sources of truth:

- `hf jobs uv run --help` — job submission flags (volumes, secrets, flavors, timeouts)
- `hf jobs hardware` — current GPU flavors and pricing
- `hf auth whoami` — check that an HF token is set
- `hf jobs ps` / `hf jobs logs <id>` — monitor running jobs
- `ls` the repo to see which scripts actually exist (bucket variants especially)
- [README.md](./README.md) — the table of scripts with model sizes and notes

## Picking a script

The [README.md](./README.md) table lists every script with model size, backend, and
a short note. Axes that matter:

- **Model size** vs accuracy vs GPU cost. Smaller = cheaper per doc.
- **Backend**: vLLM scripts are usually fastest at scale. `transformers` and
  `falcon-perception` are alternatives for specific models.
- **Task support**: most scripts do plain text; some expose `--task-mode`
  (table, formula, layout, etc.) — check the script's own docstring.

For the authoritative benchmark numbers on any model in the table, query the model
card programmatically — every OCR model publishes eval results on its card:

```python
from huggingface_hub import HfApi

info = HfApi().model_info("tiiuae/Falcon-OCR", expand=["evalResults"])
for r in info.eval_results:
    print(r.dataset_id, r.value)
```

See the [leaderboard data guide](https://huggingface.co/docs/hub/en/leaderboard-data-guide)
for the full API. This is more reliable than any markdown table that might drift.

## Getting help from a specific script

Each script has a docstring at the top with a description and usage examples. To read it
without downloading:

```bash
curl -s https://huggingface.co/datasets/uv-scripts/ocr/raw/main/<script>.py | head -100
```

Or open the URL in a browser. Running `uv run <url> --help` locally may fail if the
script has GPU-only dependencies — reading the docstring is more reliable.

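If you already have the script text in hand (e.g. from the `curl` call above), the docstring can be extracted without executing anything, which sidesteps GPU-only imports entirely. A minimal sketch — the sample script text below is hypothetical, not a real script from this repo:

```python
import ast

# Hypothetical script text; in practice this comes from fetching the raw file.
script_text = '''"""Run OCR over an HF dataset.

Usage:
    uv run script.py <input-dataset-id> <output-dataset-id>
"""
import sys
'''

# ast parses the source without running it, so heavy dependencies are never imported.
docstring = ast.get_docstring(ast.parse(script_text))
print(docstring.splitlines()[0])  # Run OCR over an HF dataset.
```
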
## The main pattern: dataset → dataset

Most scripts take an input HF dataset ID and push results to an output HF dataset ID:

```bash
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/<script>.py \
    <input-dataset-id> <output-dataset-id> [--max-samples N] [--shuffle]
```

The script adds a `markdown` column to the input dataset and pushes the merged result
to the output dataset ID on the Hub.

## Alternative: directory → directory (bucket variants)

A couple of scripts have `-bucket.py` variants (currently `falcon-ocr-bucket.py` and
`glm-ocr-bucket.py`) that read from a mounted directory and write one `.md` per image
(or per PDF page). Useful with HF Buckets via `-v`:

```bash
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
    -v hf://buckets/<user>/<input>:/input:ro \
    -v hf://buckets/<user>/<output>:/output \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/<script>-bucket.py \
    /input /output
```

`ls` the repo to check whether a `-bucket.py` variant exists for the model you want
before assuming it's available.

## Common flags across dataset-mode scripts

Most scripts support: `--max-samples`, `--shuffle`, `--seed`, `--split`, `--image-column`,
`--output-column`, `--private`, `--config`, `--create-pr`, `--verbose`. Read the script's
docstring for the authoritative list — individual scripts may add model-specific options
like `--task-mode`.

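These scripts expose a standard argparse-style CLI. A hypothetical sketch of the shared subset — the flag names match the list above, but the defaults shown are assumptions, not the scripts' documented values:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Common subset shared by the dataset-mode scripts; individual scripts add more.
    p = argparse.ArgumentParser()
    p.add_argument("input_dataset")
    p.add_argument("output_dataset")
    p.add_argument("--max-samples", type=int, default=None)
    p.add_argument("--shuffle", action="store_true")
    p.add_argument("--seed", type=int, default=42)       # default is an assumption
    p.add_argument("--split", default="train")           # default is an assumption
    p.add_argument("--image-column", default="image")    # default is an assumption
    p.add_argument("--private", action="store_true")
    return p

args = build_parser().parse_args(
    ["user/docs", "user/docs-ocr", "--max-samples", "10", "--shuffle"]
)
print(args.max_samples, args.shuffle, args.split)  # 10 True train
```
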
## Gotchas

- **Secrets**: pass `-s HF_TOKEN` to forward the user's token into the job.
- **GPU required**: all scripts exit if CUDA isn't available. `l4x1` is the cheapest
  GPU flavor and works for models up to ~3B. Check `hf jobs hardware` for current options.
- **First run is slow**: model download + `torch.compile` / vLLM warmup dominates small
  runs. Cost per doc drops sharply past a few hundred images — test with `--max-samples 10`
  first, then scale.
- **Don't poll jobs**: jobs run async. Submit once, check status later with
  `hf jobs ps` or `hf jobs logs <id>`.
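The warmup economics can be made concrete with a back-of-the-envelope calculation — all numbers below are made up for illustration, not measured:

```python
# Hypothetical timings: 5 min of model download + warmup, 2 s of GPU time per image.
WARMUP_S = 300.0
PER_DOC_S = 2.0

def effective_seconds_per_doc(n_docs: int) -> float:
    # Fixed warmup cost amortized across the run.
    return (WARMUP_S + n_docs * PER_DOC_S) / n_docs

for n in (10, 100, 1000):
    print(n, round(effective_seconds_per_doc(n), 1))
# 10 32.0
# 100 5.0
# 1000 2.3
```

The shape, not the exact numbers, is the point: a 10-image test run pays mostly for warmup, so judge throughput (and cost per doc) from larger runs.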