davanstrien (HF Staff) committed
Commit 799d633 · verified · 1 parent: 94454af

Upload folder using huggingface_hub

Files changed (2):
  1. README.md +86 -0
  2. extract-entities.py +279 -0
README.md ADDED
---
license: apache-2.0
tags:
- uv-script
- ner
- zero-shot
- gliner
- hf-jobs
---

# GLiNER UV Scripts

Zero-shot named-entity recognition over Hugging Face datasets using [GLiNER](https://github.com/urchade/GLiNER). Pass a list of entity types at runtime — no fine-tuning required.

| Script | What it does | Output |
|---|---|---|
| `extract-entities.py` | Extract entities from a text column with a custom set of types | New `entities` column (list of `{start, end, text, label, score}`) |

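Each row's `entities` value is a plain list of dicts, so downstream filtering needs no special tooling. A minimal post-processing sketch (the sample row below is made up, not real output) that drops low-confidence predictions and tallies the rest by label:

```python
from collections import Counter

# Hypothetical `entities` value for one row, in the script's output schema
entities = [
    {"start": 0, "end": 6, "text": "OpenAI", "label": "Organization", "score": 0.91},
    {"start": 21, "end": 26, "text": "GPT-2", "label": "Model", "score": 0.88},
    {"start": 40, "end": 45, "text": "maybe", "label": "Person", "score": 0.31},
]

# Keep only confident predictions, then count them per label
confident = [e for e in entities if e["score"] >= 0.7]
label_counts = Counter(e["label"] for e in confident)

print(confident)     # two entities survive the 0.7 cut
print(label_counts)
```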
## Quick start

Run on any HF dataset with a text column. No setup — `uv` resolves dependencies inline.

```bash
# Local CPU (small samples)
uv run extract-entities.py \
  librarian-bots/model_cards_with_metadata \
  yourname/model-cards-entities \
  --text-column card \
  --entity-types Person Organization Dataset Model Framework \
  --max-samples 100
```

## On HF Jobs

```bash
# CPU job — fine for small/medium datasets, free or near-free
hf jobs uv run --flavor cpu-basic --secrets HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/gliner/raw/main/extract-entities.py \
  librarian-bots/model_cards_with_metadata \
  yourname/model-cards-entities \
  --text-column card \
  --entity-types Person Organization Dataset Model Framework \
  --max-samples 1000

# GPU job — worth it once you're processing >~1000 samples
hf jobs uv run --flavor t4-small --secrets HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/gliner/raw/main/extract-entities.py \
  librarian-bots/model_cards_with_metadata \
  yourname/model-cards-entities \
  --text-column card \
  --entity-types Person Organization Dataset Model Framework \
  --device cuda \
  --batch-size 32
```

## Recommended entity-type vocabularies

GLiNER is open-vocabulary, so any string works. Some starting points:

- **General news/web text**: `Person Organization Location Date Event`
- **ML/AI text (e.g. model cards)**: `Person Organization Dataset Model Framework Metric License`
- **Legal/policy**: `Person Organization Court Statute Date Jurisdiction`
- **Biomedical**: `Drug Disease Gene Protein Symptom`

Quality drops on very abstract or polysemous types — start simple, iterate.

## Models

Default: `urchade/gliner_multi-v2.1` (multilingual, ~600 MB). Override with `--gliner-model`.

Other useful checkpoints:
- `urchade/gliner_small-v2.1` — English, faster
- `urchade/gliner_large-v2.1` — English, larger / higher quality
- `knowledgator/gliner-multitask-large-v0.5` — multitask (NER + classification + relation)

See the [Knowledgator org](https://huggingface.co/knowledgator) and [urchade's models](https://huggingface.co/urchade) for the full set.

## Pairing with Label Studio

The output of this script is a Hugging Face dataset of texts plus extracted entities. To put those entities in front of human reviewers, see the `bootstrap-labels` skill (or the workflow it documents): pull this dataset's predictions into a Label Studio project for review, then export a corrected dataset back to the Hub.

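As a concrete starting point, one output row maps onto Label Studio's pre-annotation JSON roughly as below. This is a sketch, not part of the script: the helper name is ours, and the `from_name`/`to_name` values (`"label"`, `"text"`) are assumptions that must match the names in your labeling config.

```python
def to_label_studio_task(row_text: str, entities: list) -> dict:
    """Wrap one row's text + GLiNER entities as a Label Studio task with predictions."""
    results = [
        {
            # "label"/"to_name" must match your labeling config (assumption here)
            "from_name": "label",
            "to_name": "text",
            "type": "labels",
            "value": {
                "start": e["start"],
                "end": e["end"],
                "text": e["text"],
                "labels": [e["label"]],
            },
            "score": e["score"],
        }
        for e in entities
    ]
    return {"data": {"text": row_text}, "predictions": [{"result": results}]}

task = to_label_studio_task(
    "GPT-2 was released by OpenAI.",
    [{"start": 22, "end": 28, "text": "OpenAI", "label": "Organization", "score": 0.9}],
)
```

Import such tasks as a JSON list; reviewers then accept or correct the pre-filled spans instead of labeling from scratch.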
## Caveats

- GLiNER predictions are **bootstrap labels** — useful as a starting point, not as ground truth. Plan a review pass before downstream training.
- Texts longer than `--max-text-chars` (default 8000) are truncated. Long-form documents may need chunking + reassembly.
- Entity types appear in the output exactly as you pass them, including case. Pass them as you want them to appear.
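
The chunking caveat above can be sketched as an overlapping-window pass that remaps entity offsets back to document coordinates. This is a sketch, not part of the script: `predict` stands in for `model.predict_entities` bound to your entity types, and the window/overlap sizes are illustrative.

```python
def chunk_spans(n_chars: int, window: int = 8000, overlap: int = 200):
    """Yield (start, end) character windows covering the full text."""
    step = window - overlap
    start = 0
    while start < n_chars:
        yield start, min(start + window, n_chars)
        if start + window >= n_chars:
            break
        start += step

def predict_full_text(text: str, predict, window: int = 8000, overlap: int = 200):
    """Run `predict(chunk) -> [entity dict]` per window and remap offsets.

    Entities found twice in an overlap are deduplicated by (start, end, label).
    """
    seen, out = set(), []
    for lo, hi in chunk_spans(len(text), window, overlap):
        for e in predict(text[lo:hi]):
            key = (e["start"] + lo, e["end"] + lo, e["label"])
            if key not in seen:
                seen.add(key)
                out.append({**e, "start": e["start"] + lo, "end": e["end"] + lo})
    return out
```

Entities that straddle a window boundary can still be cut off, so the overlap should exceed your longest expected entity.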
extract-entities.py ADDED
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "gliner>=0.2.16",
#     "datasets>=3.0",
#     "huggingface-hub",
#     "tqdm",
# ]
# ///
"""
Extract named entities from a text column of a HuggingFace dataset using GLiNER.

GLiNER is a zero-shot NER model: you pass a list of entity types at inference
time (e.g., "Person", "Organization", "Dataset"), no fine-tuning needed.

Examples:
    # Local CPU run on a small sample
    uv run extract-entities.py \\
        librarian-bots/model_cards_with_metadata \\
        my-username/model-cards-entities \\
        --text-column card \\
        --entity-types Person Organization Dataset Model Framework \\
        --max-samples 100

    # On HF Jobs with a cheap GPU
    hf jobs uv run --flavor t4-small \\
        --secrets HF_TOKEN \\
        https://huggingface.co/datasets/uv-scripts/gliner/raw/main/extract-entities.py \\
        librarian-bots/model_cards_with_metadata \\
        my-username/model-cards-entities \\
        --text-column card \\
        --entity-types Person Organization Dataset Model Framework \\
        --max-samples 5000

Output schema: original dataset columns + new `entities` column, where each
row's `entities` is a list of dicts:
    {"start": int, "end": int, "text": str, "label": str, "score": float}
"""

import argparse
import logging
import sys
import time
from typing import Any, Dict, List

import torch
from datasets import load_dataset
from huggingface_hub import DatasetCard

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    datefmt="%H:%M:%S",
)
log = logging.getLogger(__name__)


def parse_args():
    p = argparse.ArgumentParser(
        description="Bootstrap NER labels with GLiNER over a HuggingFace dataset",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    p.add_argument("input_dataset", help="HF dataset ID (e.g., 'org/dataset')")
    p.add_argument("output_dataset", help="HF dataset ID for output (e.g., 'user/output')")
    p.add_argument(
        "--text-column",
        default="text",
        help="Name of the text column to run NER over (default: 'text')",
    )
    p.add_argument(
        "--entity-types",
        nargs="+",
        required=True,
        help="Space-separated entity types to extract (e.g., Person Organization Date)",
    )
    p.add_argument(
        "--gliner-model",
        default="urchade/gliner_multi-v2.1",
        help="GLiNER model ID (default: urchade/gliner_multi-v2.1, multilingual)",
    )
    p.add_argument(
        "--threshold",
        type=float,
        default=0.5,
        help="Confidence threshold for entity extraction (default: 0.5)",
    )
    p.add_argument(
        "--split",
        default="train",
        help="Dataset split to process (default: train)",
    )
    p.add_argument(
        "--max-samples",
        type=int,
        default=None,
        help="Process at most N samples (default: all)",
    )
    p.add_argument(
        "--batch-size",
        type=int,
        default=8,
        help="Batch size for inference (default: 8)",
    )
    p.add_argument(
        "--device",
        choices=["auto", "cpu", "cuda"],
        default="auto",
        help="Device for inference (default: auto — uses CUDA if available)",
    )
    p.add_argument(
        "--max-text-chars",
        type=int,
        default=8000,
        help="Truncate texts longer than this many characters (default: 8000)",
    )
    p.add_argument(
        "--private",
        action="store_true",
        help="Push the output dataset as private",
    )
    return p.parse_args()


def resolve_device(arg: str) -> str:
    if arg == "cpu":
        return "cpu"
    if arg == "cuda":
        if not torch.cuda.is_available():
            log.warning("--device cuda requested but CUDA not available; falling back to CPU")
            return "cpu"
        return "cuda"
    return "cuda" if torch.cuda.is_available() else "cpu"


def normalize_entity(ent: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "start": int(ent["start"]),
        "end": int(ent["end"]),
        "text": str(ent["text"]),
        "label": str(ent["label"]),
        "score": float(ent.get("score", 0.0)),
    }


def build_card(args, n_processed: int, n_entities: int, elapsed_s: float, device: str) -> str:
    return f"""---
license: apache-2.0
tags:
- ner
- gliner
- zero-shot
- bootstrap
- uv-script
size_categories:
- n<10K
---

# {args.output_dataset}

Bootstrap NER dataset produced by [`{args.gliner_model}`](https://huggingface.co/{args.gliner_model}) over [`{args.input_dataset}`](https://huggingface.co/datasets/{args.input_dataset}).

Generated using [`uv-scripts/gliner/extract-entities.py`](https://huggingface.co/datasets/uv-scripts/gliner).

## Provenance

| | |
|---|---|
| Source dataset | `{args.input_dataset}` (split `{args.split}`) |
| Text column | `{args.text_column}` |
| Bootstrap model | `{args.gliner_model}` |
| Entity types | `{', '.join(args.entity_types)}` |
| Confidence threshold | {args.threshold} |
| Samples processed | {n_processed} |
| Total entities extracted | {n_entities} |
| Inference device | `{device}` |
| Wall clock | {elapsed_s:.1f}s ({n_processed / max(elapsed_s, 1e-9):.2f} samples/s) |

## Schema

Original `{args.input_dataset}` columns plus an `entities` column:

```python
entities: list of {{
    "start": int,   # character offset, inclusive
    "end": int,     # character offset, exclusive
    "text": str,    # the matched span
    "label": str,   # one of {args.entity_types}
    "score": float, # GLiNER confidence in [0, 1]
}}
```

## Caveats

- These are **bootstrap labels**, not human-reviewed. Treat low-confidence (< 0.7) entities as candidates for review.
- GLiNER is zero-shot: changing `--entity-types` changes what it extracts, but quality varies by entity type.
- Long texts were truncated at {args.max_text_chars} characters before inference.
"""


def main():
    args = parse_args()
    device = resolve_device(args.device)
    log.info("Loading %s split=%s ...", args.input_dataset, args.split)
    ds = load_dataset(args.input_dataset, split=args.split, streaming=False)

    if args.text_column not in ds.column_names:
        sys.exit(
            f"--text-column '{args.text_column}' not in {ds.column_names}"
        )

    if args.max_samples is not None:
        n = min(args.max_samples, len(ds))
        ds = ds.select(range(n))
        log.info("Selected %d samples", n)
    else:
        log.info("Processing full split: %d samples", len(ds))

    log.info("Loading GLiNER %s on %s ...", args.gliner_model, device)
    from gliner import GLiNER

    model = GLiNER.from_pretrained(args.gliner_model)
    if device == "cuda":
        model = model.to("cuda")
    model.eval()

    n_entities = 0
    started = time.time()

    def add_entities(batch: Dict[str, List]) -> Dict[str, List]:
        nonlocal n_entities
        texts = [
            (t or "")[: args.max_text_chars] for t in batch[args.text_column]
        ]
        entities_per_row = []
        # predict_entities runs one text at a time; --batch-size only controls
        # how many rows datasets.map hands to this function per call.
        for text in texts:
            if not text.strip():
                entities_per_row.append([])
                continue
            try:
                ents = model.predict_entities(
                    text, args.entity_types, threshold=args.threshold
                )
            except Exception as e:
                log.warning("predict_entities failed: %s; returning []", e)
                ents = []
            normalized = [normalize_entity(ent) for ent in ents]
            n_entities += len(normalized)
            entities_per_row.append(normalized)
        return {"entities": entities_per_row}

    log.info("Running inference (batch_size=%d) ...", args.batch_size)
    ds = ds.map(
        add_entities,
        batched=True,
        batch_size=args.batch_size,
        desc="Extracting entities",
    )

    elapsed = time.time() - started
    log.info(
        "Done. %d entities across %d samples in %.1fs (%.2f samples/s)",
        n_entities, len(ds), elapsed, len(ds) / max(elapsed, 1e-9),
    )

    log.info("Pushing to %s ...", args.output_dataset)
    ds.push_to_hub(args.output_dataset, private=args.private)

    DatasetCard(build_card(args, len(ds), n_entities, elapsed, device)).push_to_hub(
        args.output_dataset, repo_type="dataset"
    )

    log.info("Done: https://huggingface.co/datasets/%s", args.output_dataset)


if __name__ == "__main__":
    main()