cakiki committed
Commit 57855ac
1 Parent(s): c4fa370

Fill README

Files changed (2)
  1. .gitattributes +1 -0
  2. README.md +42 -2
.gitattributes CHANGED
@@ -14,3 +14,4 @@
 *.pb filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -33,7 +33,26 @@ def foo(bar)
 
 #### Limitations and bias
 
-Provide examples of latent issues and potential remediations.
+On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 https://dl.acm.org/doi/10.1145/3442188.3445922
+
+```
+@inproceedings{10.1145/3442188.3445922,
+  author = {Bender, Emily M. and Gebru, Timnit and McMillan-Major, Angelina and Shmitchell, Shmargaret},
+  title = {On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜},
+  year = {2021},
+  isbn = {9781450383097},
+  publisher = {Association for Computing Machinery},
+  address = {New York, NY, USA},
+  url = {https://doi.org/10.1145/3442188.3445922},
+  doi = {10.1145/3442188.3445922},
+  abstract = {The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.},
+  booktitle = {Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency},
+  pages = {610--623},
+  numpages = {14},
+  location = {Virtual Event, Canada},
+  series = {FAccT '21}
+}
+```
 
 ## Training data
 
@@ -59,12 +78,33 @@ https://huggingface.co/datasets/german-nlp-group/german_common_crawl
 
 ## Training procedure
 
-TODO (See training.md)
+TODO (See [training](training.md))
 
 ## Eval results
 
+TODO: Self-BLEU, Diversity, and other metrics from https://arxiv.org/abs/1904.09751
+```
+@inproceedings{DBLP:conf/iclr/HoltzmanBDFC20,
+  author    = {Ari Holtzman and
+               Jan Buys and
+               Li Du and
+               Maxwell Forbes and
+               Yejin Choi},
+  title     = {The Curious Case of Neural Text Degeneration},
+  booktitle = {8th International Conference on Learning Representations, {ICLR} 2020,
+               Addis Ababa, Ethiopia, April 26-30, 2020},
+  publisher = {OpenReview.net},
+  year      = {2020},
+  url       = {https://openreview.net/forum?id=rygGQyrFvH},
+  timestamp = {Thu, 21 Jan 2021 17:36:46 +0100},
+  biburl    = {https://dblp.org/rec/conf/iclr/HoltzmanBDFC20.bib},
+  bibsource = {dblp computer science bibliography, https://dblp.org}
+}
+```
 ### BibTeX entry and citation info
 
+Does the Hugging Face Hub generate DOIs? Otherwise maybe use Kaggle or Zenodo to generate one.
+
 ```bibtex
 @inproceedings{...,
 year={2021}
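
The Eval results hunk above leaves Self-BLEU and Diversity as a TODO. As a rough illustration of what those metrics measure, here is a minimal Python sketch that is not part of the commit: it assumes a list of generated strings named `samples` (the name is illustrative), uses nltk's BLEU implementation, and simplifies tokenization to whitespace splitting.

```python
# Minimal sketch of Self-BLEU and distinct-n ("Diversity") over model generations.
# Assumptions: `samples` is a list of generated strings; nltk is installed.
from collections import Counter

from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu


def self_bleu(samples, n=4):
    """Average BLEU of each sample scored against all other samples (lower = more diverse)."""
    smoother = SmoothingFunction().method1
    weights = tuple(1.0 / n for _ in range(n))
    tokenized = [s.split() for s in samples]
    scores = []
    for i, hypothesis in enumerate(tokenized):
        references = tokenized[:i] + tokenized[i + 1:]
        scores.append(
            sentence_bleu(references, hypothesis, weights=weights, smoothing_function=smoother)
        )
    return sum(scores) / len(scores)


def distinct_n(samples, n=2):
    """Fraction of unique n-grams across all samples (higher = more diverse)."""
    ngrams = Counter()
    for s in samples:
        tokens = s.split()
        ngrams.update(zip(*(tokens[i:] for i in range(n))))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0


if __name__ == "__main__":
    # Toy generations for illustration only.
    samples = [
        "Das Wetter ist heute schön .",
        "Das Wetter ist heute schlecht .",
        "Ich gehe morgen ins Kino .",
    ]
    print("Self-BLEU:", round(self_bleu(samples), 4))
    print("Distinct-2:", round(distinct_n(samples, n=2), 4))
```

Lower Self-BLEU and higher distinct-n both indicate more diverse generations; a real evaluation would tokenize with the model's own tokenizer and use a much larger sample set, as in Holtzman et al. (2020).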