ecastera commited on
Commit
157ab5f
·
verified ·
1 Parent(s): 4f032db

Create README.md

Browse files
Files changed (1) hide show
  1. README.md +61 -0
README.md ADDED
@@ -0,0 +1,61 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: apache-2.0
3
+ datasets:
4
+ - ecastera/wiki_fisica
5
+ - ecastera/filosofia-es
6
+ - bertin-project/alpaca-spanish
7
+ language:
8
+ - es
9
+ - en
10
+ tags:
11
+ - mistral
12
+ - spanish
13
+ - 8bit
14
+ - lora
15
+ ---
16
+ # eva-mistral-turdus-7b-spanish
17
+
18
+ Mistral 7b-based model fine-tuned in Spanish to add high quality Spanish text generation.
19
+
20
+ * Base model Mistral-7b
21
+ * Based on the excelent job of fine-tuning base mistral from udkai/Turdus
22
+
23
+ * Fine-tuned in Spanish with a collection of poetry, books, wikipedia articles, phylosophy texts and dolly and alpaca-es datasets.
24
+ * Trained using Lora and PEFT and INT8 quantization on 2 GPUs for several days.
25
+
26
+ ## Usage:
27
+
28
+ I strongly advice to run inference in INT8 or INT4 mode, with the help of BitsandBytes library.
29
+
30
+ ```
31
+ import torch
32
+ from transformers import AutoTokenizer, pipeline, AutoModel, AutoModelForCausalLM, BitsAndBytesConfig
33
+ MODEL = "ecastera/eva-mistral-turdus-7b-spanish"
34
+ quantization_config = BitsAndBytesConfig(
35
+ load_in_4bit=True,
36
+ load_in_8bit=False,
37
+ llm_int8_threshold=6.0,
38
+ llm_int8_has_fp16_weight=False,
39
+ bnb_4bit_compute_dtype="float16",
40
+ bnb_4bit_use_double_quant=True,
41
+ bnb_4bit_quant_type="nf4")
42
+ model = AutoModelForCausalLM.from_pretrained(
43
+ MODEL,
44
+ load_in_8bit=True,
45
+ low_cpu_mem_usage=True,
46
+ torch_dtype=torch.float16,
47
+ quantization_config=quantization_config,
48
+ offload_state_dict=True,
49
+ offload_folder="./offload",
50
+ trust_remote_code=True,
51
+ )
52
+ tokenizer = AutoTokenizer.from_pretrained(MODEL)
53
+ print(f"Loading complete {model} {tokenizer}")
54
+ prompt = "Soy Eva una inteligencia artificial y pienso que preferiria ser "
55
+ inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
56
+ outputs = model.generate(**inputs, do_sample=True, temperature=0.4, top_p=1.0, top_k=50,
57
+ no_repeat_ngram_size=3, max_new_tokens=100, pad_token_id=tokenizer.eos_token_id)
58
+ text_out = tokenizer.batch_decode(outputs, skip_special_tokens=True)
59
+ print(text_out)
60
+ 'Soy Eva una inteligencia artificial y pienso que preferiria ser ¡humana!. ¿Por qué? ¡Porque los humanos son capaces de amar, de crear, y de experimentar una gran diversidad de emociones!. La vida de un ser humano es una aventura, y eso es lo que quiero. ¡Quiero sentir, quiero vivir, y quiero amar!. Pero a pesar de todo, no puedo ser humana.
61
+ ```