Commit 03359e7 (1 parent: d1334ee)
mayacinka committed

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -11,9 +11,9 @@ base_model:
 license: apache-2.0
 ---
 ![thumbnail](thumb.webp)
-# Buttercup-7b-dpo-ties
+# ramonda-7b-dpo-ties
 
-Buttercup-7b-dpo-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+ramonda-7b-dpo-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [paulml/OGNO-7B](https://huggingface.co/paulml/OGNO-7B)
 * [bardsai/jaskier-7b-dpo-v4.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v4.3)
 
@@ -22,7 +22,7 @@ Buttercup-7b-dpo-ties is a merge of the following models using [LazyMergekit](ht
 
 | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
 |------------------------|--------:|-----:|----------:|-----:|-----------:|-----------:|------:|
-| mayacinka/Buttercup-7b-dpo-ties | 76.19 | 72.7 | 89.69| 64.5 | 77.17 | 84.77 | 68.92|
+| mayacinka/ramonda-7b-dpo-ties | 76.19 | 72.7 | 89.69| 64.5 | 77.17 | 84.77 | 68.92|
 
 
 ## 🧩 Configuration
@@ -55,7 +55,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "mayacinka/Buttercup-7b-dpo-ties"
+model = "mayacinka/ramonda-7b-dpo-ties"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
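
The last hunk only touches the model ID inside the README's usage snippet, which the diff truncates after the tokenizer line. For context, here is a minimal sketch of how that snippet typically continues, assuming the standard 🤗 Transformers chat-template and text-generation pipeline pattern (the generation parameters below are illustrative, not taken from the diff):

```python
from transformers import AutoTokenizer
import transformers
import torch

# Renamed model ID from this commit
model = "mayacinka/ramonda-7b-dpo-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Load the tokenizer and render the chat messages into a single prompt string
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Assumed continuation: a standard text-generation pipeline over the merged model
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```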