---
license: bigcode-openrail-m
datasets:
- bigcode/guanaco-commits
metrics:
- code_eval
library_name: peft
tags:
- code
---

# Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models
# Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Training](#training)
4. [Citation](#citation)

# Model Summary

> Astraios-3B-AdapterH is an instruction tuned model with 3B parameters created by finetuning StarCoderBase-3B on CommitPackFT & OASST as described in the Astraios paper.

- **Repository:** [bigcode-project/astraios](https://github.com/bigcode-project/astraios)
- **Paper:** [Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models]()
- **Languages:** 80+ Programming languages
- **✨Astraios:**

| Data | CommitPackFT+OASST | Filtered version of CommitPack and OASST for high-quality commit messages that resemble instructions |
|---|---|---|
| Model | Astraios-1B | Collection of StarCoderBase-1B (1B parameters) models instruction tuned on CommitPackFT + OASST with different tuning methods |
| | Astraios-3B | Collection of StarCoderBase-3B (3B parameters) models instruction tuned on CommitPackFT + OASST with different tuning methods |
| | Astraios-7B | Collection of StarCoderBase-7B (7B parameters) models instruction tuned on CommitPackFT + OASST with different tuning methods |
| | Astraios-16B | Collection of StarCoderBase-16B (16B parameters) models instruction tuned on CommitPackFT + OASST with different tuning methods |
| Evaluation | BigCloneBench | Dataset for clone detection; we use 2,000 samples for evaluation |
| | Devign | Dataset for defect detection; we use 2,000 samples for evaluation |
| | HumanEvalPack | Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages |
| | ReCode | Dataset for the robustness of code generation, covering 4 variants |
| | Asleep At The Keyboard | Dataset for security of code generation; we use DoW for evaluation |
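Because the card declares `library_name: peft`, the adapter weights are meant to be attached to the StarCoderBase-3B base model at load time. The snippet below is only a minimal loading sketch, not the official recipe: the adapter repository ID `bigcode/astraios-3b-adapterh` and the prompt template are assumptions, and the Astraios codebase may provide its own loading utilities for the AdapterH variant; see the [Use](#use) section for the supported workflow.

```python
# Minimal loading sketch; repo ID and prompt format below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "bigcode/starcoderbase-3b"          # base model the adapter was tuned from
ADAPTER_ID = "bigcode/astraios-3b-adapterh"   # hypothetical adapter repo ID

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the parameter-efficient (AdapterH) weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

# Prompt template is assumed; check the Use section / paper for the exact format.
prompt = "Question: Write a Python function that reverses a string.\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```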