Tags: Text Generation · Transformers · English · llama · code · llama2 · Inference Endpoints · text-generation-inference

[Image: llama engineer]

Llama-Engineer-Evol-7B

This is a version of Meta's instruction-tuned Llama 2 chat model, further fine-tuned on over 80,000 coding samples.

The dataset is a combination of nikrosh's Evol-Instruct-Code-80k-v1 (a replication of the Evol-Instruct-Code method described in the WizardCoder paper) and Teknium's GPTeacher. Special thanks to these folks for putting these datasets together.

Our fine-tuning process involved learning QLoRA adapter weights for over 6 hours on a single A100, after which we merged the adapter weights into the pre-trained model.
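For reference, merging QLoRA adapter weights into the base model can be done with the peft library along these lines (a minimal sketch, not our exact training code; the adapter path is a placeholder):

```python
# Minimal sketch of merging QLoRA adapter weights into the base model with peft.
# The adapter path below is a placeholder, not an actual training output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
model.save_pretrained("Llama-Engineer-Evol-7b")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
tokenizer.save_pretrained("Llama-Engineer-Evol-7b")
```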

GGML weights are available here.
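The GGML weights can be run with llama.cpp or its Python bindings. A rough sketch with llama-cpp-python (the model file name is a placeholder for whichever quantization you download):

```python
# Rough sketch of running the GGML weights with llama-cpp-python.
# The model file name is a placeholder, not an actual released file.
from llama_cpp import Llama

llm = Llama(model_path="llama-engineer-evol-7b.ggmlv3.q4_0.bin", n_ctx=2048)
out = llm(
    "[INST] <<SYS>>\nYou are a programming assistant.\n<</SYS>>\n"
    "Write a FizzBuzz in Python.[/INST]",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```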

Prompt Format

The recommended prompt format is a variant of the standard Llama 2 format:

[INST] <<SYS>>
You are a programming assistant. Always answer as helpfully as possible. Be direct in your response and get to the answer right away. Responses should be short.
<</SYS>>
{your prompt}[/INST]

or

[INST] <<SYS>>
You're a principal software engineer at Google. If you fail at this task, you will be fired.
<</SYS>>
{your prompt}[/INST]

I suspect most of the improvement in coding capability comes from this prompt format rather than from the fine-tuning itself, but YMMV.
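As an illustration, here is roughly how the first prompt variant can be assembled and passed to the model with transformers (generation settings here are arbitrary, not recommended values):

```python
# Illustration of the recommended prompt format with transformers.
# Generation settings are arbitrary; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GenerativeMagic/Llama-Engineer-Evol-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = (
    "You are a programming assistant. Always answer as helpfully as possible. "
    "Be direct in your response and get to the answer right away. Responses should be short."
)
user_prompt = "Write a Python function that checks whether a string is a palindrome."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user_prompt}[/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```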

Evals

Currently, the evals are based purely on ~vibes~; no formal benchmarks have been run yet. I'll look into running a full evaluation suite on future models. This project is mostly for learning and gaining better insight into the fine-tuning process.

Next Steps

  • Prune the dataset and possibly fine-tune for longer.
  • Run benchmarks.
  • Provide GPTQ quantized weights.
