Commit 5aa89c4 by suchirsalhan (parent: a601fc0)

Update README.md

Files changed (1): README.md (+44, -0)

README.md CHANGED
@@ -1,3 +1,47 @@
 ---
 license: mit
+language:
+- zh
+tags:
+- Syntax
+pretty_name: CLiMP
+size_categories:
+- n<1K
 ---
+# CLiMP: A Benchmark for Chinese Language Model Evaluation
+
+[![arXiv](https://img.shields.io/badge/arXiv-2101.11131-b31b1b.svg)](https://arxiv.org/pdf/2101.11131v1.pdf)
+
+This is the CLiMP dataset, accompanying the EACL 2021 paper "CLiMP: A Benchmark for Chinese Language Model Evaluation" by Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, and Katharina Kann.
17
+
18
+ You can find the paper on [arxiv](https://arxiv.org/pdf/2101.11131v1.pdf).
19
+
20
+ We use this dataset for evaluation of a small-scale Chinese Language Model for the [BabyLM Challenge](https://babylm.github.io/).
21
+
+## Citation Information
+
+If you use CLiMP, please cite the **original paper** as follows:
+
+```
+@inproceedings{xiang-etal-2021-climp,
+    title = "{CL}i{MP}: A Benchmark for {C}hinese Language Model Evaluation",
+    author = "Xiang, Beilei and
+      Yang, Changbing and
+      Li, Yu and
+      Warstadt, Alex and
+      Kann, Katharina",
+    editor = "Merlo, Paola and
+      Tiedemann, Jorg and
+      Tsarfaty, Reut",
+    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
+    month = apr,
+    year = "2021",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.eacl-main.242",
+    doi = "10.18653/v1/2021.eacl-main.242",
+    pages = "2784--2790",
+    abstract = "Linguistically informed analyses of language models (LMs) contribute to the understanding and improvement of such models. Here, we introduce the corpus of Chinese linguistic minimal pairs (CLiMP) to investigate what knowledge Chinese LMs acquire. CLiMP consists of sets of 1000 minimal pairs (MPs) for 16 syntactic contrasts in Chinese, covering 9 major Chinese linguistic phenomena. The MPs are semi-automatically generated, and human agreement with the labels in CLiMP is 95.8{\%}. We evaluate 11 different LMs on CLiMP, covering n-grams, LSTMs, and Chinese BERT. We find that classifier{--}noun agreement and verb complement selection are the phenomena that models generally perform best at. However, models struggle the most with the ba construction, binding, and filler-gap dependencies. Overall, Chinese BERT achieves an 81.8{\%} average accuracy, while the performances of LSTMs and 5-grams are only moderately above chance level.",
+}
+```
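The evaluation paradigm described in the abstract (a model "passes" a minimal pair when it assigns a higher score to the acceptable sentence than to the unacceptable one) can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the scoring function below is a toy placeholder for a real language model's log-probability, and the example pairs are invented, not drawn from CLiMP.

```python
# Minimal-pair accuracy sketch for CLiMP-style evaluation.
# A pair counts as correct when the acceptable sentence outscores
# the unacceptable one under the model's scoring function.

def toy_score(sentence: str) -> float:
    # Toy stand-in for an LM score: shorter strings score higher.
    # A real evaluation would return the LM's log-probability.
    return -len(sentence)

def minimal_pair_accuracy(pairs, score=toy_score):
    """Fraction of (acceptable, unacceptable) pairs where the
    acceptable sentence receives the higher score."""
    correct = sum(1 for good, bad in pairs if score(good) > score(bad))
    return correct / len(pairs)

# Invented placeholder pairs, ordered (acceptable, unacceptable).
pairs = [
    ("ab", "abcd"),
    ("x", "xyz"),
]
print(minimal_pair_accuracy(pairs))  # 1.0 under the toy scorer
```

With a real LM, `toy_score` would be replaced by a per-sentence log-probability, and accuracy would be computed per phenomenon (e.g. classifier–noun agreement, the ba construction) as in the paper.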