---
license: apache-2.0
tags:
  - merge
  - mergekit
  - lazymergekit
  - fblgit/UNA-TheBeagle-7b-v1
  - argilla/distilabeled-Marcoro14-7B-slerp
  - dpo
  - rlhf
---

🐢 NeuralBeagle14-7B

Update 01/16/24: NeuralBeagle14-7B is (probably) the best 7B model you can find! πŸŽ‰

NeuralBeagle14-7B is a DPO fine-tune of mlabonne/Beagle14-7B using the argilla/distilabel-intel-orca-dpo-pairs preference dataset and my DPO notebook from this article.
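For intuition, DPO nudges the policy to assign a larger log-probability margin to the chosen answer over the rejected one than the frozen reference model does. The following is a minimal, illustrative sketch of the per-example DPO loss (not the actual training code from the notebook; the function name and the example log-probabilities are made up):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    logits = beta * (policy_margin - ref_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# If the policy prefers the chosen answer more strongly than the reference,
# the loss drops below log(2); if less strongly, it rises above log(2).
low = dpo_loss(-10.0, -30.0, -20.0, -25.0)   # policy margin 20 > reference margin 5
high = dpo_loss(-25.0, -20.0, -20.0, -25.0)  # policy margin -5 < reference margin 5
```

In practice this objective is handled by a trainer such as TRL's `DPOTrainer`; the sketch only shows the quantity being minimized.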

It is based on a merge of the following models using LazyMergekit:

- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
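For illustration, a mergekit SLERP configuration for this kind of merge might look like the sketch below. The layer ranges and interpolation parameters here are assumptions for demonstration, not the actual recipe used for this model:

```yaml
slices:
  - sources:
      - model: fblgit/UNA-TheBeagle-7b-v1
        layer_range: [0, 32]
      - model: argilla/distilabeled-Marcoro14-7B-slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
  t:
    - value: 0.5   # illustrative: equal interpolation between the two models
dtype: bfloat16
```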

Thanks to Argilla for providing the dataset and the training recipe. πŸ’ͺ

You can try it out in this Space (GGUF Q4_K_M).

⚑ Quantized models

πŸ† Evaluation

Open LLM Leaderboard

NeuralBeagle14-7B ranks first on the Open LLM Leaderboard in the ~7B category.

It has the same average score as Beagle14-7B ("Show merges"), which might be due to an unlucky run. I suspect I'm overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point, since this dataset or its original version appears in multiple models. I need to find more high-quality preference data for the next DPO merge.

Note that some models like udkai/Turdus and nfaheem/Marcoroni-7b-DPO-Merge are unfortunately contaminated on purpose (see the very high Winogrande score).

Nous

The evaluation was performed using LLM AutoEval on the Nous benchmark suite. NeuralBeagle14-7B is the best 7B model on this suite to date.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| mlabonne/NeuralBeagle14-7B πŸ“„ | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| mlabonne/Beagle14-7B πŸ“„ | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| mlabonne/NeuralDaredevil-7B πŸ“„ | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| argilla/distilabeled-Marcoro14-7B-slerp πŸ“„ | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| mlabonne/NeuralMarcoro14-7B πŸ“„ | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| openchat/openchat-3.5-0106 πŸ“„ | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| teknium/OpenHermes-2.5-Mistral-7B πŸ“„ | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |

You can find the complete benchmark on YALL - Yet Another LLM Leaderboard.

πŸ’» Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Built with Distilabel