---
license: apache-2.0
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- angie-chen55/python-github-code
- jtatman/python-code-dataset-500k
language:
- en
base_model:
- facebook/layerskip-llama3.2-1B
library_name: transformers
tags:
- code
---
|
# Model Card for CodeDrafter-500M
|
|
|
A draft model for the Llama 3.1/3.2/3.3 series of models, specialized in Python coding. It is fine-tuned from the first 4 layers of facebook/layerskip-llama3.2-1B.
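
## Usage

A minimal sketch (not part of the original card) of how this checkpoint could serve as the draft model in transformers assisted generation; the target checkpoint name and the draft repo id below are placeholders, not values confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a Llama 3.1/3.2/3.3 target whose tokenizer matches the draft model.
target_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder target checkpoint
draft_id = "CodeDrafter-500M"                   # placeholder id for this draft model

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# Speculative (assisted) decoding: the small draft proposes tokens and the
# large target verifies them, preserving the target's output distribution.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```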
|
|
|
|
|
## Citation
|
|
|
```bibtex
@article{chen2024sequoia,
  title={Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding},
  author={Chen, Zhuoming and May, Avner and Svirschevski, Ruslan and Huang, Yuhsun and Ryabinin, Max and Jia, Zhihao and Chen, Beidi},
  journal={arXiv preprint arXiv:2402.12374},
  year={2024}
}
```
|
|
|
|
|
|
|