lvwerra (HF staff) committed on
Commit
056532a
1 Parent(s): 02d56d6

Update Space (evaluate main: e179b5b8)

Files changed (1)
  1. README.md +4 -35
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  title: METEOR
- emoji: 🤗
+ emoji: 🤗
  colorFrom: blue
  colorTo: red
  sdk: gradio
@@ -8,41 +8,10 @@ sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  tags:
- - evaluate
- - metric
- description: >-
- METEOR, an automatic metric for machine translation evaluation
-
- that is based on a generalized concept of unigram matching between the
-
- machine-produced translation and human-produced reference translations.
-
- Unigrams can be matched based on their surface forms, stemmed forms,
-
- and meanings; furthermore, METEOR can be easily extended to include more
-
- advanced matching strategies. Once all generalized unigram matches
-
- between the two strings have been found, METEOR computes a score for
-
- this matching using a combination of unigram-precision, unigram-recall, and
-
- a measure of fragmentation that is designed to directly capture how
-
- well-ordered the matched words in the machine translation are in relation
-
- to the reference.
-
-
- METEOR gets an R correlation value of 0.347 with human evaluation on the Arabic
-
- data and 0.331 on the Chinese data. This is shown to be an improvement on
-
- using simply unigram-precision, unigram-recall and their harmonic F1
-
- combination.
+ - evaluate
+ - metric
  ---
+
  # Metric Card for METEOR

  ## Metric description
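For reference, the description text removed from the YAML front matter above summarizes how the metric works; the metric itself is unchanged and still loads through the evaluate library. A minimal usage sketch, assuming `evaluate` and `nltk` are installed; the example sentences and printed value are illustrative, not taken from this repository:

```python
import evaluate

# Load the METEOR metric that this Space wraps; the metric script is
# fetched from the Hub on first use.
meteor = evaluate.load("meteor")

# METEOR aligns unigrams between a candidate translation and a reference
# (exact, stemmed, and synonym matches) and combines unigram precision,
# unigram recall, and a fragmentation penalty into a single score.
results = meteor.compute(
    predictions=["the cat sat on the mat"],
    references=["a cat was sitting on the mat"],
)
print(round(results["meteor"], 2))  # score between 0 and 1
```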