
Open-Assistant CodeLlama 13B SFT v10

This model is an Open-Assistant fine-tuning of Meta's CodeLlama 13B LLM.

Note: Due to the new RoPE theta value (1e6 instead of 1e4), you must either load this model with trust_remote_code=True or use the latest main branch of Hugging Face transformers (until version 4.33 is released) to get correct results.
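A minimal loading sketch under the note above. The needs_trust_remote_code helper is our own illustration (not part of transformers); it assumes transformers and torch are installed and that enough memory is available for a 13B model.

```python
# The packaging library ships as a transformers dependency.
from packaging import version


def needs_trust_remote_code(transformers_version):
    """Before transformers 4.33, the custom rope_theta=1e6 only takes
    effect when the model's remote code is trusted."""
    return version.parse(transformers_version) < version.parse("4.33.0")


if __name__ == "__main__":
    import transformers
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "OpenAssistant/codellama-13b-oasst-sft-v10"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name,
        trust_remote_code=needs_trust_remote_code(transformers.__version__),
    )
```

On transformers >= 4.33 the rope_theta value is read natively from the model config, so trust_remote_code is no longer required.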

Model Details

Prompting / Prompt Template

Due to public demand (see survey) we changed the prompt template for this model from custom prompter/assistant tokens to OpenAI's chatml standard prompt format. We hope that this leads to greater compatibility with chat inference/frontend applications.

Prompt dialogue template:

"""
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
The model input can contain multiple conversation turns between user and assistant, e.g.

<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)

The model was partly trained with orca system messages.
For inference we recommend using the official Llama2 system message:

You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
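The chatml turns described above can be assembled with a small helper. build_chatml_prompt is a hypothetical function written for illustration; it is not part of the model repository or the transformers API.

```python
def build_chatml_prompt(system_message, turns):
    """Assemble a chatml prompt string.

    turns: list of (user_prompt, assistant_reply) pairs; pass None as the
    reply of the final pair to leave the prompt open for generation.
    """
    parts = ["<|im_start|>system\n%s<|im_end|>\n" % system_message]
    for user_msg, reply in turns:
        parts.append("<|im_start|>user\n%s<|im_end|>\n" % user_msg)
        # The prompt always ends with an opened assistant turn.
        parts.append("<|im_start|>assistant\n")
        if reply is not None:
            parts.append("%s<|im_end|>\n" % reply)
    return "".join(parts)


prompt = build_chatml_prompt(
    "You are a helpful, respectful and honest assistant.",
    [("Write a function that reverses a string.", None)],
)
```

The resulting string ends with an opened `<|im_start|>assistant` turn, so the model's generation continues the assistant reply and stops at `<|im_end|>`.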

Credits & Special Thanks

Ethical Considerations and Limitations

Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, the potential outputs of codellama-13b-oasst-sft-v10 cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of codellama-13b-oasst-sft-v10, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's Responsible Use Guide.

Configuration Details

The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant GitHub repository and can be found here: model/pretokenizer.

Pretokenizer Configuration

datasets:
  - orca-chat:
      val_split: 0.01
      max_val_set: 1000
  - bestofmegacode:
      val_split: 0.01
      max_val_set: 1000
  - oasst_export:
      lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
      #hf_dataset_name: OpenAssistant/oasst1
      input_file_path: 2023-08-25_oasst_ready.jsonl.gz
      top_k: 1
      val_split: 0.025
output_dir: "output/orca_megacode_oasst_best"
filename_prefix: "orca_megacode_oasst_best"
min_assistant_tokens: 1
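A sketch of how the val_split and max_val_set fields above plausibly interact; the exact semantics live in the pretokenizer code, and this helper only illustrates the common convention of capping a fractional validation split at an absolute maximum.

```python
def val_set_size(n_examples, val_split, max_val_set=None):
    """Assumed semantics: take val_split of the dataset for validation,
    capped at max_val_set examples when a cap is given."""
    n_val = int(n_examples * val_split)
    if max_val_set is not None:
        n_val = min(n_val, max_val_set)
    return n_val
```

Under this reading, a 200k-example dataset with val_split 0.01 and max_val_set 1000 would yield a 1000-example validation set rather than 2000.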