Files changed (1)
README.md: +12 −3
@@ -85,7 +85,7 @@ StarCoder2-15B model is a 15B parameter model trained on 600+ programming langua
 The model was trained with [NVIDIA NeMo™ Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/) using the [NVIDIA Eos Supercomputer](https://blogs.nvidia.com/blog/eos/) built with [NVIDIA DGX H100](https://www.nvidia.com/en-us/data-center/dgx-h100/) systems.
 
 - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
-- **Paper:** [Link](https://drive.google.com/file/d/17iGn3c-sYNiLyRSY-A85QOzgzGnGiVI3/view?usp=sharing)
+- **Paper:** [Link](https://huggingface.co/papers/2402.19173)
 - **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
 - **Languages:** 600+ Programming languages
 
@@ -174,7 +174,7 @@ The pretraining dataset of the model was filtered for permissive licenses and co
 
 # Limitations
 
-The model has been trained on source code from 600+ programming languages. The predominant language in source is English although other languages are also present. As such the model is capable to generate code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/17iGn3c-sYNiLyRSY-A85QOzgzGnGiVI3/view?usp=sharing) for an in-depth discussion of the model limitations.
+The model has been trained on source code from 600+ programming languages. The predominant natural language in the sources is English, although other languages are also present. As such, the model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended: it can be inefficient and may contain bugs or exploits. See [the paper](https://huggingface.co/papers/2402.19173) for an in-depth discussion of the model limitations.
 
 # Training
 
@@ -200,4 +200,13 @@ The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can
 
 # Citation
 
-_Coming soon_
+```bibtex
+@misc{lozhkov2024starcoder,
+      title={StarCoder 2 and The Stack v2: The Next Generation},
+      author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
+      year={2024},
+      eprint={2402.19173},
+      archivePrefix={arXiv},
+      primaryClass={cs.SE}
+}
+```