What prompt template(s) do your models use?

#1 opened by TheBloke

Is there a suggested way to prompt this model, and your Mistral ones?

Or is it purely code completion?

Thanks

There is no additional special prompt template. The model was fine-tuned on instruction data from the 30K WizardLM dataset, 40K of additional Python Evol-Instruct data, and inference instruction data from Airoboros and Orca. The HumanEval score improved from 37.8 for the original model to 51.8. For code generation prompts, you can refer to HumanEval.
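
Since no special template is used, plain completion-style prompting (a function signature plus docstring, as in HumanEval) should work. Below is a minimal sketch using the transformers library; the model id is a hypothetical placeholder, not the actual repository name.

```python
# Minimal sketch of plain code-completion prompting (no special template).
# The model id below is a placeholder assumption, not the real checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-code-llama-finetune"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# HumanEval-style prompt: a function signature and docstring to be completed.
prompt = '''def fib(n: int) -> int:
    """Return the n-th Fibonacci number."""
'''

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```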
