---
license: cc-by-nc-4.0
datasets:
- bertin-project/alpaca-spanish
language:
- es
inference: false
---

# Model Card for 4i-ai/Llama-2-13b-alpaca-es

This model is Llama-2-13b-hf fine-tuned with an adapter on the Spanish Alpaca dataset.

## Model Details

### Model Description

This is a Spanish chat model fine-tuned on a Spanish instruction dataset.

The model expects a prompt containing the instruction, with an option to add an input (see the template and examples below).
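
The prompts built by the `generate` helper in the code sample below follow this Alpaca-style template, where `{instruction}` and `{input}` stand for your own text and the `### Input:` block appears only when an input is given:

```
### Instruction:
{instruction}

### Input:
{input}

### Response:
```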

- **Developed by:** 4i Intelligent Insights
- **Model type:** Chat model
- **Language(s) (NLP):** Spanish
- **License:** cc-by-nc-4.0 (inherited from the alpaca-spanish dataset)
- **Finetuned from model:** Llama 2 13B ([license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/))

## Uses

The model is intended to be used directly, without further fine-tuning.

## Bias, Risks, and Limitations

This model inherits the bias, risks, and limitations of its base model, Llama 2, and of the dataset used for fine-tuning.
Note that the Spanish Alpaca dataset was obtained by translating the original Alpaca dataset. It contains translation errors that may have negatively impacted the fine-tuning of the model.

## How to Get Started with the Model

Use the code below to get started with the model for inference. The adapter was merged directly into the original Llama 2 weights, so the model loads like a standard checkpoint.
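
For reference, a merge like this is typically done with PEFT; here is a minimal sketch, assuming the adapter was trained as a PEFT/LoRA adapter (this card does not document the exact training setup) and using `path/to/adapter` as a hypothetical placeholder:

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then fold the adapter weights into it.
# NOTE: assumes a PEFT/LoRA adapter; "path/to/adapter" is a hypothetical placeholder.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "path/to/adapter").merge_and_unload()
merged.save_pretrained("merged-model")
```

You do not need to run this yourself; the published checkpoint already contains the merged weights.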

The following code sample uses 4-bit quantization; you may load the model without it if you have enough VRAM. We show results for generation hyperparameters that we found work well for this set of prompts.
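
If you do have enough VRAM (roughly 26 GB for 13B parameters in float16), the quantized load below can be swapped for a plain half-precision load; a minimal sketch:

```py
import torch
from transformers import AutoModelForCausalLM

# Half-precision load without quantization; assumes sufficient GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    "4i-ai/Llama-2-13b-alpaca-es", torch_dtype=torch.float16, device_map="auto"
)
```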

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, GenerationConfig

model_name = "4i-ai/Llama-2-13b-alpaca-es"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

def create_and_prepare_model():
    # Load the model with 4-bit NF4 quantization to reduce VRAM usage
    compute_dtype = getattr(torch, "float16")
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_use_double_quant=True,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_name, quantization_config=bnb_config, device_map={"": 0}
    )
    return model

model = create_and_prepare_model()

def generate(instruction, input=None):
    # Format the prompt to look like the training data
    if input is not None:
        prompt = "### Instruction:\n" + instruction + "\n\n### Input:\n" + input + "\n\n### Response:\n"
    else:
        prompt = "### Instruction:\n" + instruction + "\n\n### Response:\n"

    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()

    generation_output = model.generate(
        input_ids=input_ids,
        repetition_penalty=1.5,
        # Generation hyperparameters that we found work well for these prompts
        generation_config=GenerationConfig(temperature=0.1, top_p=0.75, top_k=40, num_beams=20),
        return_dict_in_generate=True,
        output_scores=True,
        # Maximum number of generated tokens; increase for longer answers
        # (up to 2048 minus the prompt length). Longer responses take longer to generate.
        max_new_tokens=150,
    )
    for seq in generation_output.sequences:
        output = tokenizer.decode(seq, skip_special_tokens=True)
        print(output.split("### Response:")[1].strip())

generate("Háblame de la superconductividad.")
print("-----------")
generate("Encuentra la capital de España.")
print("-----------")
generate("Encuentra la capital de Portugal.")
print("-----------")
generate("Organiza los números dados en orden ascendente.", "2, 3, 0, 8, 4, 10")
print("-----------")
generate("Compila una lista de 5 estados de EE. UU. ubicados en el Oeste.")
print("-----------")
generate("Compila una lista de 2 estados de EE. UU. ubicados en el Oeste.")
print("-----------")
generate("Compila una lista de 10 estados de EE. UU. ubicados en el Este.")
print("-----------")
generate("¿Cuál es la color de una fresa?")
print("-----------")
generate("¿Cuál es la color de la siguiente fruta?", "fresa")
print("-----------")
```

Expected output:

```
La superconductividad es un fenómeno físico en el que los materiales pueden conducir corrientes eléctricas a bajas temperaturas sin pérdida de energía debido a la resistencia. Los materiales superconductores son capaces de conducir corrientes eléctricas a temperaturas mucho más bajas que los materiales normales. Esto se debe a que los electrones en los materiales superconductores se comportan de manera cooperativa, lo que les permite conducir corrientes eléctricas sin pérdida de energía. Los materiales superconductores tienen muchas aplicaciones
-----------
La capital de España es Madrid.
-----------
La capital de Portugal es Lisboa.
-----------
0, 2, 3, 4, 8, 10
-----------
1. California
2. Oregón
3. Washington
4. Nevada
5. Arizona
-----------
California y Washington.
-----------
1. Maine
2. Nuevo Hampshire
3. Vermont
4. Massachusetts
5. Rhode Island
6. Connecticut
7. Nueva York
8. Nueva Jersey
9. Pensilvania
10. Delaware
-----------
La color de una fresa es rosa.
-----------
Roja
-----------
```

## Contact Us

[4i.ai](https://4i.ai/) provides natural language processing solutions with dialog, vision and voice capabilities to deliver real-life multimodal human-machine conversations.
Please contact us at info@4i.ai