Dataset: GEM
Sebastian Gehrmann committed on
Commit
91789cf
1 Parent(s): 7b16f35

Data Card.

Files changed (2)
  1. README.md +1 -1
  2. common_gen.json +1 -1
README.md CHANGED
@@ -416,7 +416,7 @@ The currently best performing model KFCNet (https://aclanthology.org/2021.findin
 
 <!-- info: What are the most relevant previous results for this task/dataset? -->
 <!-- scope: microscope -->
-The most relevant results can be seen on the leaderboard at https://inklab.usc.edu/CommonGen/leaderboard.html
+The most relevant results can be seen on the [leaderboard](https://inklab.usc.edu/CommonGen/leaderboard.html)
 
 
 
common_gen.json CHANGED
@@ -136,7 +136,7 @@
 "other-metrics-definitions": "- SPICE: An evaluation metric for image captioning that is defined over scene graphs\n- CIDEr: An n-gram overlap metric based on cosine similarity between the TF-IDF weighted ngram counts\n",
 "has-previous-results": "yes",
 "current-evaluation": "The currently best performing model KFCNet (https://aclanthology.org/2021.findings-emnlp.249/) uses the same automatic evaluation but does not conduct any human evaluation. ",
-"previous-results": "The most relevant results can be seen on the leaderboard at https://inklab.usc.edu/CommonGen/leaderboard.html",
+"previous-results": "The most relevant results can be seen on the [leaderboard](https://inklab.usc.edu/CommonGen/leaderboard.html)",
 "model-abilities": "Commonsense Reasoning",
 "metrics": [
 "Other: Other Metrics",
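The data card's `other-metrics-definitions` field describes CIDEr as an n-gram overlap metric based on cosine similarity between TF-IDF weighted n-gram counts. The following is a minimal sketch of that core idea only; all function names are hypothetical, the smoothed idf is an assumption, and the official CIDEr metric additionally averages over n-gram orders, clips counts, and scales scores differently.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def doc_freq(references, n):
    """Number of reference sentences containing each n-gram."""
    df = Counter()
    for ref in references:
        df.update(set(ngrams(ref, n)))
    return df

def tfidf_vec(tokens, n, df, num_refs):
    """TF-IDF weighted n-gram vector (smoothed idf; an assumption,
    not the official CIDEr weighting)."""
    counts = Counter(ngrams(tokens, n))
    total = sum(counts.values())
    return {g: (c / total) * math.log((num_refs + 1) / (1 + df[g]))
            for g, c in counts.items()}

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy usage: score a candidate against one reference (unigrams only).
refs = [["a", "cat", "sat"], ["a", "dog", "ran"]]
df = doc_freq(refs, 1)
cand = tfidf_vec(["a", "cat", "sat"], 1, df, len(refs))
print(round(cosine(cand, tfidf_vec(refs[0], 1, df, len(refs))), 2))  # → 1.0
```

Note how the TF-IDF weighting downweights n-grams that appear in many references (here, "a" gets zero weight), so overlap on common function words contributes little to the score.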