Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: found
Source Datasets: extended
kiddothe2b committed
Commit 733b673
1 Parent(s): 5fb0190

Update LexGLUE README.md (#4285)


Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.

Commit from https://github.com/huggingface/datasets/commit/746ab8cc7975b75496b9eca5d9d10cc913ba9494

Files changed (1):
  1. README.md +41 -25
README.md CHANGED
@@ -135,19 +135,41 @@ The UNFAIR-ToS dataset contains 50 Terms of Service (ToS) from on-line platforms
 The CaseHOLD (Case Holdings on Legal Decisions) dataset includes multiple choice questions about holdings of US court cases from the Harvard Law Library case law corpus. Holdings are short summaries of legal rulings that accompany referenced decisions relevant to the present case. The input consists of an excerpt (or prompt) from a court decision, containing a reference to a particular case, while the holding statement is masked out. The model must identify the correct (masked) holding statement from a selection of five choices.

- The current leaderboard includes several Transformer-based (Vaswani et al., 2017) pre-trained language models, which achieve state-of-the-art performance in most NLP tasks (Bommasani et al., 2021) and NLU benchmarks (Wang et al., 2019a).

 <table>
- <tr><td>Dataset</td><td>ECtHR Task A </td><td>ECtHR Task B </td><td>SCOTUS </td><td>EUR-LEX</td><td>LEDGAR </td><td>UNFAIR-ToS </td><td>CaseHOLD</td></tr>
- <tr><td>Model</td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1</td><td>μ-F1 / m-F1 </td></tr>
- <tr><td>BERT </td><td><b>71.4</b> / 64.0 </td><td>79.6 / <b>78.3</b> </td><td>70.5 / 60.9 </td><td>71.6 / 55.6 </td><td>87.7 / 82.2 </td><td>97.3 / 80.4</td><td>70.7 </td></tr>
- <tr><td>RoBERTa </td><td>69.5 / 60.7 </td><td>78.6 / 77.0 </td><td>70.8 / 61.2 </td><td>71.8 / <b>57.5</b> </td><td>87.9 / 82.1 </td><td>97.2 / 79.6</td><td>71.7 </td></tr>
- <tr><td>DeBERTa </td><td>69.1 / 61.2 </td><td>79.9 / <b>78.3</b> </td><td>70.0 / 60.0 </td><td><b>72.3</b> / 57.2 </td><td>87.9 / 82.0 </td><td>97.2 / 80.2</td><td>72.1 </td></tr>
- <tr><td>Longformer </td><td>69.6 / 62.4 </td><td>78.8 / 75.8 </td><td>72.2 / 62.5 </td><td>71.9 / 56.7 </td><td>87.7 / 82.3 </td><td><b>97.5</b> / 81.0</td><td>72.0 </td></tr>
- <tr><td>BigBird </td><td>70.5 / 63.8 </td><td>79.9 / 76.9 </td><td>71.7 / 61.4 </td><td>71.8 / 56.6 </td><td>87.7 / 82.1 </td><td>97.4 / 81.1</td><td>70.4 </td></tr>
- <tr><td>Legal-BERT </td><td>71.2 / <b>64.6</b> </td><td><b>80.6</b> / 77.2 </td><td>76.2 / 65.8 </td><td>72.2 / 56.2 </td><td><b>88.1</b> / <b>82.7</b></td><td>97.4 / <b>83.4</b></td><td>75.1</td></tr>
- <tr><td>CaseLaw-BERT </td><td>71.2 / 64.2 </td><td>79.7 / 76.8 </td><td><b>76.4</b> / <b>66.2</b> </td><td>71.0 / 55.9 </td><td>88.0 / 82.3</td><td>97.4 / 82.4</td><td><b>75.6</b> </td></tr>
 </table>

  ### Languages
@@ -369,7 +391,7 @@ An example of 'test' looks as follows.
 *Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
 *LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
- *Arxiv Preprint. 2021*

  ### Licensing Information
@@ -380,22 +402,16 @@ An example of 'test' looks as follows.
 [*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
 *LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
- *2021. arXiv: 2110.00976.*](https://arxiv.org/abs/2110.00976)
 ```
- @article{chalkidis-etal-2021-lexglue,
-   title={{LexGLUE}: A Benchmark Dataset for Legal Language Understanding in English},
-   author={Chalkidis, Ilias and
-           Jana, Abhik and
-           Hartung, Dirk and
-           Bommarito, Michael and
-           Androutsopoulos, Ion and
-           Katz, Daniel Martin and
            Aletras, Nikolaos},
-   year={2021},
-   eprint={2110.00976},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL},
-   note={arXiv: 2110.00976},
 }
 ```
 The CaseHOLD (Case Holdings on Legal Decisions) dataset includes multiple choice questions about holdings of US court cases from the Harvard Law Library case law corpus. Holdings are short summaries of legal rulings that accompany referenced decisions relevant to the present case. The input consists of an excerpt (or prompt) from a court decision, containing a reference to a particular case, while the holding statement is masked out. The model must identify the correct (masked) holding statement from a selection of five choices.

+ The current leaderboard includes several Transformer-based (Vaswani et al., 2017) pre-trained language models, which achieve state-of-the-art performance in most NLP tasks (Bommasani et al., 2021) and NLU benchmarks (Wang et al., 2019a). Results reported by [Chalkidis et al. (2021)](https://arxiv.org/abs/2110.00976):

+ *Task-wise Test Results*

 <table>
+ <tr><td><b>Dataset</b></td><td><b>ECtHR A</b></td><td><b>ECtHR B</b></td><td><b>SCOTUS</b></td><td><b>EUR-LEX</b></td><td><b>LEDGAR</b></td><td><b>UNFAIR-ToS</b></td><td><b>CaseHOLD</b></td></tr>
+ <tr><td><b>Model</b></td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1</td><td>μ-F1 / m-F1 </td></tr>
+ <tr><td>TFIDF+SVM</td><td> 64.7 / 51.7 </td><td>74.6 / 65.1 </td><td> <b>78.2</b> / <b>69.5</b> </td><td>71.3 / 51.4 </td><td>87.2 / 82.4 </td><td>95.4 / 78.8</td><td>n/a </td></tr>
+ <tr><td colspan="8" style='text-align:center'><b>Medium-sized Models (L=12, H=768, A=12)</b></td></tr>
+ <tr><td>BERT</td> <td> 71.2 / 63.6 </td> <td> 79.7 / 73.4 </td> <td> 68.3 / 58.3 </td> <td> 71.4 / 57.2 </td> <td> 87.6 / 81.8 </td> <td> 95.6 / 81.3 </td> <td> 70.8 </td> </tr>
+ <tr><td>RoBERTa</td> <td> 69.2 / 59.0 </td> <td> 77.3 / 68.9 </td> <td> 71.6 / 62.0 </td> <td> 71.9 / <b>57.9</b> </td> <td> 87.9 / 82.3 </td> <td> 95.2 / 79.2 </td> <td> 71.4 </td> </tr>
+ <tr><td>DeBERTa</td> <td> 70.0 / 60.8 </td> <td> 78.8 / 71.0 </td> <td> 71.1 / 62.7 </td> <td> <b>72.1</b> / 57.4 </td> <td> 88.2 / 83.1 </td> <td> 95.5 / 80.3 </td> <td> 72.6 </td> </tr>
+ <tr><td>Longformer</td> <td> 69.9 / 64.7 </td> <td> 79.4 / 71.7 </td> <td> 72.9 / 64.0 </td> <td> 71.6 / 57.7 </td> <td> 88.2 / 83.0 </td> <td> 95.5 / 80.9 </td> <td> 71.9 </td> </tr>
+ <tr><td>BigBird</td> <td> 70.0 / 62.9 </td> <td> 78.8 / 70.9 </td> <td> 72.8 / 62.0 </td> <td> 71.5 / 56.8 </td> <td> 87.8 / 82.6 </td> <td> 95.7 / 81.3 </td> <td> 70.8 </td> </tr>
+ <tr><td>Legal-BERT</td> <td> 70.0 / 64.0 </td> <td> <b>80.4</b> / <b>74.7</b> </td> <td> 76.4 / 66.5 </td> <td> <b>72.1</b> / 57.4 </td> <td> 88.2 / 83.0 </td> <td> <b>96.0</b> / <b>83.0</b> </td> <td> 75.3 </td> </tr>
+ <tr><td>CaseLaw-BERT</td> <td> 69.8 / 62.9 </td> <td> 78.8 / 70.3 </td> <td> 76.6 / 65.9 </td> <td> 70.7 / 56.6 </td> <td> 88.3 / 83.0 </td> <td> <b>96.0</b> / 82.3 </td> <td> <b>75.4</b> </td> </tr>
+ <tr><td colspan="8" style='text-align:center'><b>Large-sized Models (L=24, H=1024, A=16)</b></td></tr>
+ <tr><td>RoBERTa</td> <td> <b>73.8</b> / <b>67.6</b> </td> <td> 79.8 / 71.6 </td> <td> 75.5 / 66.3 </td> <td> 67.9 / 50.3 </td> <td> <b>88.6</b> / <b>83.6</b> </td> <td> 95.8 / 81.6 </td> <td> 74.4 </td> </tr>
+ </table>
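The μ-F1 / m-F1 columns above are micro- and macro-averaged F1. As a minimal sketch of how the two differ on multi-label data (illustrative code, not part of the LexGLUE benchmark; micro-F1 pools counts over all labels, while macro-F1 averages per-label F1 and so weights rare labels equally):

```python
from collections import Counter

def f1(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the denominator is 0
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def micro_macro_f1(y_true, y_pred, labels):
    """y_true/y_pred are lists of label sets (multi-label classification)."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for true, pred in zip(y_true, y_pred):
        for lab in labels:
            if lab in pred and lab in true:
                tp[lab] += 1
            elif lab in pred:
                fp[lab] += 1
            elif lab in true:
                fn[lab] += 1
    # micro: pool TP/FP/FN across labels, then compute one F1
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    # macro: compute F1 per label, then take the unweighted mean
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro

# Toy example: the frequent label "A" is predicted well, the rare label "B"
# is always missed, so macro-F1 drops to 0.5 while micro-F1 stays high.
y_true = [{"A"}, {"A"}, {"A"}, {"A", "B"}]
y_pred = [{"A"}, {"A"}, {"A"}, {"A"}]
micro, macro = micro_macro_f1(y_true, y_pred, labels=["A", "B"])
```

This gap between the two averages is why the leaderboard reports both: frequent classes dominate μ-F1, whereas m-F1 reflects performance on rare classes.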
+
+ *Averaged (Mean over Tasks) Test Results*
+
+ <table>
+ <tr><td><b>Averaging</b></td><td><b>Arithmetic</b></td><td><b>Harmonic</b></td><td><b>Geometric</b></td></tr>
+ <tr><td><b>Model</b></td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td></tr>
+ <tr><td colspan="4" style='text-align:center'><b>Medium-sized Models (L=12, H=768, A=12)</b></td></tr>
+ <tr><td>BERT</td><td> 77.8 / 69.5 </td><td> 76.7 / 68.2 </td><td> 77.2 / 68.8 </td></tr>
+ <tr><td>RoBERTa</td><td> 77.8 / 68.7 </td><td> 76.8 / 67.5 </td><td> 77.3 / 68.1 </td></tr>
+ <tr><td>DeBERTa</td><td> 78.3 / 69.7 </td><td> 77.4 / 68.5 </td><td> 77.8 / 69.1 </td></tr>
+ <tr><td>Longformer</td><td> 78.5 / 70.5 </td><td> 77.5 / 69.5 </td><td> 78.0 / 70.0 </td></tr>
+ <tr><td>BigBird</td><td> 78.2 / 69.6 </td><td> 77.2 / 68.5 </td><td> 77.7 / 69.0 </td></tr>
+ <tr><td>Legal-BERT</td><td> <b>79.8</b> / <b>72.0</b> </td><td> <b>78.9</b> / <b>70.8</b> </td><td> <b>79.3</b> / <b>71.4</b> </td></tr>
+ <tr><td>CaseLaw-BERT</td><td> 79.4 / 70.9 </td><td> 78.5 / 69.7 </td><td> 78.9 / 70.3 </td></tr>
+ <tr><td colspan="4" style='text-align:center'><b>Large-sized Models (L=24, H=1024, A=16)</b></td></tr>
+ <tr><td>RoBERTa</td><td> 79.4 / 70.8 </td><td> 78.4 / 69.1 </td><td> 78.9 / 70.0 </td></tr>
 </table>

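The averaged table follows directly from the task-wise scores: applying the three means to BERT's per-task μ-F1 row (71.2, 79.7, 68.3, 71.4, 87.6, 95.6, 70.8) reproduces its 77.8 / 76.7 / 77.2 entries. A minimal sketch (illustrative code, not from the LexGLUE codebase):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def harmonic_mean(xs):
    # The harmonic mean penalizes a model most for its single weakest task.
    return len(xs) / sum(1 / x for x in xs)

def geometric_mean(xs):
    # Always between the harmonic and arithmetic means (AM-GM-HM inequality).
    return math.prod(xs) ** (1 / len(xs))

# BERT's per-task test μ-F1 scores from the task-wise table above
bert_micro_f1 = [71.2, 79.7, 68.3, 71.4, 87.6, 95.6, 70.8]
means = [round(m(bert_micro_f1), 1)
         for m in (arithmetic_mean, harmonic_mean, geometric_mean)]
# means == [77.8, 76.7, 77.2], i.e. the BERT row of the averaged table
```

Reporting the harmonic and geometric means alongside the arithmetic one makes it harder for a model to hide a weak task behind strong averages elsewhere.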
  ### Languages
 
391
 
392
  *Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
393
  *LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
394
+ *2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland.*
395
 
396
 
397
  ### Licensing Information
 
 [*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
 *LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
+ *2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland.*](https://arxiv.org/abs/2110.00976)
 ```
+ @inproceedings{chalkidis-etal-2021-lexglue,
+   title={LexGLUE: A Benchmark Dataset for Legal Language Understanding in English},
+   author={Chalkidis, Ilias and Jana, Abhik and Hartung, Dirk and
+           Bommarito, Michael and Androutsopoulos, Ion and Katz, Daniel Martin and
            Aletras, Nikolaos},
+   year={2022},
+   booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
+   address={Dublin, Ireland},
 }
 ```
417