lvwerra (HF staff) committed
Commit 804f5e1 (1 parent: e0851a0)

Update Space (evaluate main: 828c6327)

Files changed (4)
  1. README.md +141 -4
  2. app.py +6 -0
  3. requirements.txt +4 -0
  4. seqeval.py +164 -0
README.md CHANGED
@@ -1,12 +1,149 @@
  ---
- title: Seqeval
- emoji: 🐠
- colorFrom: pink
  colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference

  ---
+ title: seqeval
+ emoji: 🤗
+ colorFrom: blue
  colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
  ---

+ # Metric Card for seqeval
+
+ ## Metric description
+
+ seqeval is a Python framework for sequence labeling evaluation. seqeval can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging and semantic role labeling.
+
+
+ ## How to use
+
+ seqeval produces labelling scores, along with their sufficient statistics, from a source against one or more references.
+
+ It takes two mandatory arguments:
+
+ `predictions`: a list of lists of predicted labels, i.e. estimated targets as returned by a tagger.
+
+ `references`: a list of lists of reference labels, i.e. the ground truth/target values.
+
+ It can also take several optional arguments:
+
+ `suffix` (boolean): `True` if the IOB tag is a suffix (after the type) instead of a prefix (before the type), `False` otherwise. The default value is `False`, i.e. the IOB tag is a prefix (before the type).
+
+ `scheme`: the target tagging scheme, which can be one of [`IOB1`, `IOB2`, `IOE1`, `IOE2`, `IOBES`, `BILOU`]. The default value is `None`.
+
+ `mode`: whether or not to count correct entity labels with incorrect I/B tags as true positives. If you want to count only exact matches, pass `mode="strict"` together with a specific `scheme` value (see the sketch after the basic example below). The default is `None`.
+
+ `sample_weight`: an array-like of shape (n_samples,) that provides weights for individual samples. The default is `None`.
+
+ `zero_division`: the value to substitute as a metric value when encountering zero division. Should be one of [`0`, `1`, `"warn"`]. `"warn"` acts as `0` but also raises a warning. The default is `"warn"`.
+
+
+ ```python
+ >>> import evaluate
+ >>> seqeval = evaluate.load('seqeval')
+ >>> predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> results = seqeval.compute(predictions=predictions, references=references)
+ ```
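+
+ The optional arguments are passed to the same `compute` call. For example, a strict evaluation under an explicit tagging scheme can be requested as follows; this is a minimal sketch that assumes the labels follow the IOB2 scheme:
+
+ ```python
+ >>> import evaluate
+ >>> seqeval = evaluate.load('seqeval')
+ >>> predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> strict_results = seqeval.compute(predictions=predictions, references=references, mode="strict", scheme="IOB2")
+ ```
+
+ With `mode="strict"`, only entities whose type and boundaries both match a reference entity exactly are counted as true positives, and `scheme` must then name the tagging scheme that the labels use.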
+
+ ## Output values
+
+ This metric returns a dictionary with a summary of the scores, both overall and per entity type:
+
+ Overall (returned under the keys `overall_accuracy`, `overall_precision`, `overall_recall` and `overall_f1`):
+
+ `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
+
+ `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
+
+ `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
+
+ `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
+
+ Per type (returned under the entity type name, e.g. `MISC`, `PER`, `LOC`, ...):
+
+ `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
+
+ `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
+
+ `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
+
+ `number`: the number of entities of that type in the references, i.e. the support.
+
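+ For instance, with the predictions and references from the snippet in "How to use" above (the same pair as the partial-match example below), individual scores can be read straight out of the returned dictionary:
+
+ ```python
+ >>> print(results["overall_f1"])
+ 0.5
+ >>> print(results["PER"]["f1"])
+ 1.0
+ ```
+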
+ ### Values from popular papers
+ The 1995 "Text Chunking using Transformation-Based Learning" [paper](https://aclanthology.org/W95-0107) reported a baseline recall of 81.9% and a precision of 78.2%, using non-deep-learning methods.
+
+ More recently, seqeval continues to be used for reporting performance on tasks such as [named entity detection](https://www.mdpi.com/2306-5729/6/8/84/htm) and [information extraction](https://ieeexplore.ieee.org/abstract/document/9697942/).
+
+
+ ## Examples
+
+ Maximal values (full match):
+
+ ```python
+ >>> import evaluate
+ >>> seqeval = evaluate.load('seqeval')
+ >>> predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> references = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> results = seqeval.compute(predictions=predictions, references=references)
+ >>> print(results)
+ {'MISC': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 1.0, 'overall_recall': 1.0, 'overall_f1': 1.0, 'overall_accuracy': 1.0}
+ ```
+
+ Minimal values (no match):
+
+ ```python
+ >>> import evaluate
+ >>> seqeval = evaluate.load('seqeval')
+ >>> predictions = [['O', 'B-MISC', 'I-MISC'], ['B-PER', 'I-PER', 'O']]
+ >>> references = [['B-MISC', 'O', 'O'], ['I-PER', '0', 'I-PER']]
+ >>> results = seqeval.compute(predictions=predictions, references=references)
+ >>> print(results)
+ {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}, 'PER': {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}, '_': {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}, 'overall_precision': 0.0, 'overall_recall': 0.0, 'overall_f1': 0.0, 'overall_accuracy': 0.0}
+ ```
+
+ Note that the second reference sequence in this example contains the label `'0'` (the digit zero) rather than `'O'`; seqeval cannot map it to a known entity type and reports it under the `'_'` key.
+
+ Partial match:
+
+ ```python
+ >>> import evaluate
+ >>> seqeval = evaluate.load('seqeval')
+ >>> predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+ >>> results = seqeval.compute(predictions=predictions, references=references)
+ >>> print(results)
+ {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
+ ```
+
+ Here the `PER` span matches exactly while the `MISC` span does not, so the entity-level precision, recall and F1 are all 0.5, and 8 of the 10 tokens carry the correct label, giving an accuracy of 0.8.
+
+ ## Limitations and bias
+
+ seqeval supports the following IOB formats (short for inside, outside, beginning): `IOB1`, `IOB2`, `IOE1`, `IOE2`, `IOBES` (only in strict mode) and `BILOU` (only in strict mode).
+
+ For more information about IOB formats, refer to the [Wikipedia page](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) and the description of the [CoNLL-2002 shared task](https://aclanthology.org/W02-2024).
+
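+ Passing a `scheme` name outside this list raises a `ValueError`; the snippet below is a small illustration of that behaviour (see the `_compute` implementation in `seqeval.py` further down), with `"IOB3"` standing in for an unsupported scheme:
+
+ ```python
+ >>> import evaluate
+ >>> seqeval = evaluate.load('seqeval')
+ >>> seqeval.compute(predictions=[['B-PER']], references=[['B-PER']], mode="strict", scheme="IOB3")
+ Traceback (most recent call last):
+     ...
+ ValueError: Scheme should be one of [IOB1, IOB2, IOE1, IOE2, IOBES, BILOU], got IOB3
+ ```
+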
+ ## Citation
+
+ ```bibtex
+ @inproceedings{ramshaw-marcus-1995-text,
+     title = "Text Chunking using Transformation-Based Learning",
+     author = "Ramshaw, Lance  and
+       Marcus, Mitch",
+     booktitle = "Third Workshop on Very Large Corpora",
+     year = "1995",
+     url = "https://www.aclweb.org/anthology/W95-0107",
+ }
+ ```
+
+ ```bibtex
+ @misc{seqeval,
+   title={{seqeval}: A Python framework for sequence labeling evaluation},
+   url={https://github.com/chakki-works/seqeval},
+   note={Software available from https://github.com/chakki-works/seqeval},
+   author={Hiroki Nakayama},
+   year={2018},
+ }
+ ```
+
+ ## Further References
+ - [README for seqeval at GitHub](https://github.com/chakki-works/seqeval)
+ - [CoNLL-2002 shared task evaluation script (conlleval)](https://www.clips.uantwerpen.be/conll2002/ner/bin/conlleval.txt)
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("seqeval")
+ launch_gradio_widget(module)
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ # TODO: fix github to release
+ git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+ datasets~=2.0
+ seqeval
seqeval.py ADDED
@@ -0,0 +1,164 @@
+ # Copyright 2020 The HuggingFace Evaluate Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ seqeval metric. """
+
+ import importlib
+ from typing import List, Optional, Union
+
+ import datasets
+ from seqeval.metrics import accuracy_score, classification_report
+
+ import evaluate
+
+
+ _CITATION = """\
+ @inproceedings{ramshaw-marcus-1995-text,
+     title = "Text Chunking using Transformation-Based Learning",
+     author = "Ramshaw, Lance  and
+       Marcus, Mitch",
+     booktitle = "Third Workshop on Very Large Corpora",
+     year = "1995",
+     url = "https://www.aclweb.org/anthology/W95-0107",
+ }
+ @misc{seqeval,
+   title={{seqeval}: A Python framework for sequence labeling evaluation},
+   url={https://github.com/chakki-works/seqeval},
+   note={Software available from https://github.com/chakki-works/seqeval},
+   author={Hiroki Nakayama},
+   year={2018},
+ }
+ """
+
+ _DESCRIPTION = """\
+ seqeval is a Python framework for sequence labeling evaluation.
+ seqeval can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, semantic role labeling and so on.
+
+ This is well-tested by using the Perl script conlleval, which can be used for
+ measuring the performance of a system that has processed the CoNLL-2000 shared task data.
+
+ seqeval supports following formats:
+ IOB1
+ IOB2
+ IOE1
+ IOE2
+ IOBES
+
+ See the [README.md] file at https://github.com/chakki-works/seqeval for more information.
+ """
+
+ _KWARGS_DESCRIPTION = """
+ Produces labelling scores along with its sufficient statistics
+ from a source against one or more references.
+
+ Args:
+     predictions: List of List of predicted labels (Estimated targets as returned by a tagger)
+     references: List of List of reference labels (Ground truth (correct) target values)
+     suffix: True if the IOB prefix is after type, False otherwise. default: False
+     scheme: Specify target tagging scheme. Should be one of ["IOB1", "IOB2", "IOE1", "IOE2", "IOBES", "BILOU"].
+         default: None
+     mode: Whether to count correct entity labels with incorrect I/B tags as true positives or not.
+         If you want to only count exact matches, pass mode="strict". default: None.
+     sample_weight: Array-like of shape (n_samples,), weights for individual samples. default: None
+     zero_division: Which value to substitute as a metric value when encountering zero division. Should be one of 0, 1,
+         "warn". "warn" acts as 0, but the warning is raised.
+
+ Returns:
+     'scores': dict. Summary of the scores for overall and per type
+         Overall:
+             'accuracy': accuracy,
+             'precision': precision,
+             'recall': recall,
+             'f1': F1 score, also known as balanced F-score or F-measure,
+         Per type:
+             'precision': precision,
+             'recall': recall,
+             'f1': F1 score, also known as balanced F-score or F-measure
+ Examples:
+
+     >>> predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+     >>> references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
+     >>> seqeval = evaluate.load("seqeval")
+     >>> results = seqeval.compute(predictions=predictions, references=references)
+     >>> print(list(results.keys()))
+     ['MISC', 'PER', 'overall_precision', 'overall_recall', 'overall_f1', 'overall_accuracy']
+     >>> print(results["overall_f1"])
+     0.5
+     >>> print(results["PER"]["f1"])
+     1.0
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class Seqeval(evaluate.EvaluationModule):
+     def _info(self):
+         return evaluate.EvaluationModuleInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             homepage="https://github.com/chakki-works/seqeval",
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Sequence(datasets.Value("string", id="label"), id="sequence"),
+                     "references": datasets.Sequence(datasets.Value("string", id="label"), id="sequence"),
+                 }
+             ),
+             codebase_urls=["https://github.com/chakki-works/seqeval"],
+             reference_urls=["https://github.com/chakki-works/seqeval"],
+         )
+
+     def _compute(
+         self,
+         predictions,
+         references,
+         suffix: bool = False,
+         scheme: Optional[str] = None,
+         mode: Optional[str] = None,
+         sample_weight: Optional[List[int]] = None,
+         zero_division: Union[str, int] = "warn",
+     ):
+         if scheme is not None:
+             try:
+                 scheme_module = importlib.import_module("seqeval.scheme")
+                 scheme = getattr(scheme_module, scheme)
+             except AttributeError:
+                 raise ValueError(f"Scheme should be one of [IOB1, IOB2, IOE1, IOE2, IOBES, BILOU], got {scheme}")
+         report = classification_report(
+             y_true=references,
+             y_pred=predictions,
+             suffix=suffix,
+             output_dict=True,
+             scheme=scheme,
+             mode=mode,
+             sample_weight=sample_weight,
+             zero_division=zero_division,
+         )
+ report.pop("macro avg")
147
+ report.pop("weighted avg")
148
+ overall_score = report.pop("micro avg")
149
+
150
+ scores = {
151
+ type_name: {
152
+ "precision": score["precision"],
153
+ "recall": score["recall"],
154
+ "f1": score["f1-score"],
155
+ "number": score["support"],
156
+ }
157
+ for type_name, score in report.items()
158
+ }
159
+ scores["overall_precision"] = overall_score["precision"]
160
+ scores["overall_recall"] = overall_score["recall"]
161
+ scores["overall_f1"] = overall_score["f1-score"]
162
+ scores["overall_accuracy"] = accuracy_score(y_true=references, y_pred=predictions)
163
+
164
+ return scores