---
language:
- fr
- en
license: apache-2.0
library_name: transformers
tags:
- chocolatine
datasets:
- jpacifico/french-orca-dpo-pairs-revised
pipeline_tag: text-generation
---

### Chocolatine-78B-Instruct-DPO-v1.3

A DPO fine-tune of [dfurman/CalmeRys-78B-Orpo-v0.1](https://huggingface.co/dfurman/CalmeRys-78B-Orpo-v0.1), itself the result of multiple fine-tunings and initially based on the foundation model [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct),  
trained with the [jpacifico/french-orca-dpo-pairs-revised](https://huggingface.co/datasets/jpacifico/french-orca-dpo-pairs-revised) RLHF dataset.  

My goal here is to verify whether the French DPO fine-tuning I developed for the Chocolatine model series can be applied with equal performance to model sizes above 70B parameters,  
and in particular whether it can be combined with several previous fine-tunings. A minimal training sketch follows.  
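
For reference, here is a minimal sketch of how a DPO pass like this one could be reproduced with TRL's `DPOTrainer`. This is an assumption, not the author's exact training recipe: the hyperparameters are illustrative, and the dataset may need its columns remapped to the `prompt`/`chosen`/`rejected` format that `DPOTrainer` expects.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base model that this card fine-tunes
base_model = "dfurman/CalmeRys-78B-Orpo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto"
)

# French preference pairs used for this fine-tuning
dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

# Illustrative hyperparameters, not the author's actual settings
training_args = DPOConfig(
    output_dir="chocolatine-78b-dpo",
    beta=0.1,  # DPO temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()
```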


### OpenLLM Leaderboard

Coming soon.  

### Usage

You can run Chocolatine using the following code:

```python
import transformers
from transformers import AutoTokenizer

model_id = "jpacifico/Chocolatine-78B-Instruct-DPO-v1.3"

# Format the prompt with the model's chat template
messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Create the generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
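
Note that a 78B-parameter model in bf16 requires several high-memory GPUs. As a hedged alternative (not part of the original card), the model can be loaded in 4-bit with `bitsandbytes` to reduce memory use, at some cost in output quality:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "jpacifico/Chocolatine-78B-Instruct-DPO-v1.3"

# 4-bit NF4 quantization config (requires the `bitsandbytes` package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs
)
```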

### Limitations

The Chocolatine model series is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.  
It does not include any moderation mechanism.  

- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM 
- **Language(s) (NLP):** French, English
- **License:** Apache 2.0

Made with ❤️ in France