MoritzLaurer (HF staff) committed
Commit 893885d
1 Parent(s): 4cb5953

Update README.md

Files changed (1):
  1. README.md +11 -12
README.md CHANGED
@@ -125,19 +125,18 @@ information on licenses, the underlying papers etc.: https://github.com/MoritzLa
 ## Citation
 If you use this model academically, please cite:
 ```
-@article{laurer_less_2023,
-  title = {Less {Annotating}, {More} {Classifying}: {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT}-{NLI}},
-  issn = {1047-1987, 1476-4989},
-  shorttitle = {Less {Annotating}, {More} {Classifying}},
-  url = {https://www.cambridge.org/core/product/identifier/S1047198723000207/type/journal_article},
-  doi = {10.1017/pan.2023.20},
-  language = {en},
-  urldate = {2023-06-20},
-  journal = {Political Analysis},
-  author = {Laurer, Moritz and Van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper},
-  month = jun,
+@misc{laurer_building_2023,
+  title = {Building {Efficient} {Universal} {Classifiers} with {Natural} {Language} {Inference}},
+  url = {http://arxiv.org/abs/2312.17543},
+  doi = {10.48550/arXiv.2312.17543},
+  abstract = {Generative Large Language Models (LLMs) have become the mainstream choice for fewshot and zeroshot learning thanks to the universality of text generation. Many users, however, do not need the broad capabilities of generative LLMs when they only want to automate a classification task. Smaller BERT-like models can also learn universal tasks, which allow them to do any text classification task without requiring fine-tuning (zeroshot classification) or to learn new tasks with only a few examples (fewshot), while being significantly more efficient than generative LLMs. This paper (1) explains how Natural Language Inference (NLI) can be used as a universal classification task that follows similar principles as instruction fine-tuning of generative LLMs, (2) provides a step-by-step guide with reusable Jupyter notebooks for building a universal classifier, and (3) shares the resulting universal classifier that is trained on 33 datasets with 389 diverse classes. Parts of the code we share has been used to train our older zeroshot classifiers that have been downloaded more than 55 million times via the Hugging Face Hub as of December 2023. Our new classifier improves zeroshot performance by 9.4\%.},
+  urldate = {2024-01-05},
+  publisher = {arXiv},
+  author = {Laurer, Moritz and van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper},
+  month = dec,
   year = {2023},
-  pages = {1--33},
+  note = {arXiv:2312.17543 [cs]},
+  keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language},
 }
 ```
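For context, the newly cited paper treats Natural Language Inference as a universal classification task, which is how this model family is used for zeroshot classification. Below is a minimal sketch with the transformers zero-shot-classification pipeline; the model id is an assumption (the 33-dataset universal classifier the paper shares), so substitute the id of the model this README actually documents:

```python
from transformers import pipeline

# Assumption: model id of the universal classifier released with the
# cited paper; swap in the id of the model this README belongs to.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33",
)

text = "The new state budget cuts funding for public schools."
labels = ["politics", "economy", "sports", "technology"]

# The pipeline rephrases each candidate label as an NLI hypothesis and
# scores whether the text entails it; no task-specific fine-tuning needed.
result = classifier(text, labels)
print(result["labels"][0], result["scores"][0])
```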