julien-c (HF staff) committed
Commit 57557c5
1 Parent(s): 7f65ce0

Add description to card metadata


TER (Translation Edit Rate, also called Translation Error Rate) is a metric to quantify the edit operations that a
hypothesis requires to match a reference translation. We use the implementation that is already present in sacrebleu
(https://github.com/mjpost/sacreBLEU#ter), which is in turn inspired by the TERCOM implementation, available at
https://github.com/jhclark/tercom.
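
As a quick illustration of how this metric is typically called through the `evaluate` library (a minimal sketch; the example sentences and the exact output keys shown in the comment are assumptions, not taken from this commit):

```python
import evaluate

# Load the TER metric (backed by sacrebleu's TER implementation).
ter = evaluate.load("ter")

predictions = ["the cat sat on the mat"]
# One list of reference translations per prediction.
references = [["the cat is on the mat", "there is a cat on the mat"]]

results = ter.compute(predictions=predictions, references=references)
print(results)  # e.g. {"score": ..., "num_edits": ..., "ref_length": ...}
```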

The implementation here is slightly different from sacrebleu in terms of the required input format: the references and
hypotheses lists need to have the same length, so you may need to transpose your references compared to
sacrebleu's required input format. See https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534
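
For instance, a small sketch of the transposition this refers to (illustrative data, plain Python only):

```python
# sacrebleu-style references: one list per reference *set*,
# each aligned with the full list of hypotheses.
hypotheses = ["hypothesis for sentence 1", "hypothesis for sentence 2"]
sacrebleu_refs = [
    ["reference A for sentence 1", "reference A for sentence 2"],
    ["reference B for sentence 1", "reference B for sentence 2"],
]

# Format expected here: one list of references per hypothesis,
# so the references list has the same length as the hypotheses list.
transposed_refs = [list(refs) for refs in zip(*sacrebleu_refs)]
# -> [["reference A for sentence 1", "reference B for sentence 1"],
#     ["reference A for sentence 2", "reference B for sentence 2"]]
assert len(transposed_refs) == len(hypotheses)
```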

See the README.md file at https://github.com/mjpost/sacreBLEU#ter for more information.

Files changed (1)
  1. README.md +28 -4
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  title: TER
- emoji: 🤗
+ emoji: 🤗
  colorFrom: blue
  colorTo: red
  sdk: gradio
@@ -8,10 +8,34 @@ sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  tags:
- - evaluate
- - metric
- ---
+ - evaluate
+ - metric
+ description: >-
+   TER (Translation Edit Rate, also called Translation Error Rate) is a metric to
+   quantify the edit operations that a
+
+   hypothesis requires to match a reference translation. We use the
+   implementation that is already present in sacrebleu
+
+   (https://github.com/mjpost/sacreBLEU#ter), which in turn is inspired by the
+   TERCOM implementation, which can be found
+
+   here: https://github.com/jhclark/tercom.
+

+   The implementation here is slightly different from sacrebleu in terms of the
+   required input format. The length of
+
+   the references and hypotheses lists need to be the same, so you may need to
+   transpose your references compared to
+
+   sacrebleu's required input format. See
+   https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534
+
+
+   See the README.md file at https://github.com/mjpost/sacreBLEU#ter for more
+   information.
+ ---
  # Metric Card for TER

  ## Metric Description