---
language:
- en
- fr
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.3
datasets:
- jpacifico/French-Alpaca-dataset-Instruct-110K
---

# Uploaded model

- **Developed by:** AdrienB134
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
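
For context, fine-tunes like this one are typically produced with Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer`. The sketch below shows that setup; the LoRA rank, hyperparameters, and the `dataset_text_field` column name are illustrative assumptions rather than the actual training recipe, and the exact `SFTTrainer` signature varies across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-v0.3",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters (rank and target modules are illustrative guesses).
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("jpacifico/French-Alpaca-dataset-Instruct-110K", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",  # assumed column name; check the dataset schema
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        output_dir = "outputs",
    ),
)
trainer.train()
```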

# How to use

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch

max_seq_length = 32_768 # Choose any; Unsloth supports RoPE scaling internally.
dtype = None # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+.
load_in_4bit = False # Set to True to use 4-bit quantization and reduce memory usage.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "AdrienB134/French-Alpaca-Mistral-7B-v0.3",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # needed only for gated models like meta-llama/Llama-2-7b-hf
)

# Prompt template the model was fine-tuned with (French Alpaca format).
# English: "Below you will find an instruction that describes a task, paired with
# a context that gives more information. Write an appropriate response to the instruction."
alpaca_prompt = """Ci-dessous tu trouveras une instruction qui décrit une tâche, accompagnée d'un contexte qui donne plus d'informations. Ecrit une réponse appropriée à l'instruction.
### Instruction:
{}

### Contexte:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model) # Enable Unsloth's native 2x faster inference.
inputs = tokenizer(
[
    alpaca_prompt.format(
        "Continue la série de fibonacci.", # instruction
        "1, 1, 2, 3, 5, 8", # context
        "", # output - leave blank for generation!
    )
], return_tensors = "pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```
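
If Unsloth is not installed, the checkpoint should also load with plain Transformers. Below is a minimal sketch reusing the `alpaca_prompt` template defined above; it is untested against this exact checkpoint, and the dtype choice is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained(
    "AdrienB134/French-Alpaca-Mistral-7B-v0.3",
    torch_dtype = torch.bfloat16,  # use torch.float16 on pre-Ampere GPUs
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained("AdrienB134/French-Alpaca-Mistral-7B-v0.3")

# Reuse the alpaca_prompt template from the snippet above.
prompt = alpaca_prompt.format("Continue la série de fibonacci.", "1, 1, 2, 3, 5, 8", "")
inputs = tokenizer([prompt], return_tensors = "pt").to(model.device)
_ = model.generate(**inputs, streamer = TextStreamer(tokenizer), max_new_tokens = 128)
```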