mrm8488 committed on
Commit ce5864b
1 Parent(s): 5e0bc56

Create README.md

Files changed (1)
  1. README.md +121 -0
README.md ADDED
@@ -0,0 +1,121 @@
---
license: bigscience-bloom-rail-1.0
language:
- es
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- alpaca
- bloom
- LLM
datasets:
- tatsu-lab/alpaca
inference: false
widget:
- text: "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n### Instruction:\nTell me about alpacas"
---

<div style="text-align:center;width:250px;height:250px;">
<img src="here_our_logo">
</div>

# Chivoom: Spanish Alpaca (Chiva) 🐐 + BLOOM 💮

## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library; it allows the base model **BigScience/BLOOM 7B1** to be fine-tuned on **Stanford's Alpaca dataset** (translated to Spanish) using the **LoRA** method.

## Model Description
BigScience Large Open-science Open-access Multilingual Language Model

[BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)

## Training data

We translated the Alpaca dataset to Spanish.

Alpaca is a dataset of **52,000** instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to instruction-tune language models and make them follow instructions better.

The authors built on the data-generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:

- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly stated the instruction-generation requirements for `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data-generation pipeline was simplified by discarding the distinction between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of the 2 to 3 instances in Self-Instruct.

This produced an instruction-following dataset with 52K examples at a much lower cost (less than $500).
In a preliminary study, the authors also found the 52K generated examples to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
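
Each Alpaca record pairs an `instruction` (optionally with an `input`) with an `output`, and the Spanish translation keeps the same schema. As a minimal sketch of how a record becomes a training prompt (the helper name `render_example` is ours; the template follows the original tatsu-lab/stanford_alpaca release):

```python
def render_example(record):
    """Render one Alpaca-style record into the instruction-tuning prompt."""
    if record.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n"
            f"### Instruction:\n{record['instruction']}\n"
            f"### Input:\n{record['input']}\n"
            f"### Response:\n{record['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n"
        f"### Instruction:\n{record['instruction']}\n"
        f"### Response:\n{record['output']}"
    )

example = {
    "instruction": "Dime qué es una alpaca",
    "input": "",
    "output": "La alpaca es un mamífero doméstico de la familia de los camélidos.",
}
print(render_example(example))
```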

### Training procedure

TBA
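
While the exact hyperparameters are TBA, a LoRA fine-tune of BLOOM 7B1 with PEFT is typically set up as below. This is an illustrative configuration sketch only: the rank, alpha, dropout, and target modules shown are assumptions, not the values used to train this adapter.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative LoRA setup; these hyperparameters are assumptions,
# not the actual training configuration of this adapter.
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1", load_in_8bit=True, device_map="auto"
)
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # low-rank dimension
    lora_alpha=16,                       # scaling factor
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```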

## How to use
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

peft_model_id = "platzi/chivoom"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()

# Based on the inference code by `tloen/alpaca-lora`
def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""

def generate(
    instruction,
    input=None,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs,
):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=256,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    # Keep only the model's answer: drop the echoed prompt and any repeated template
    return output.split("### Response:")[1].strip().split("Below")[0]

instruction = "¿Qué es un chivo?"

print("Instruction:", instruction)
print("Response:", generate(instruction))
```
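
The last line of `generate` does the response parsing: everything after the first `### Response:` marker is kept, and anything from a repeated `Below is an instruction...` template onward is discarded. That step can be checked in isolation (the helper name `extract_response` is ours):

```python
def extract_response(decoded):
    # Same parsing as the return line of `generate` above
    return decoded.split("### Response:")[1].strip().split("Below")[0]

decoded = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n"
    "### Instruction:\n¿Qué es un chivo?\n"
    "### Response:\nUn chivo es la cría de la cabra.\n"
    "Below is an instruction that describes a task."
)
print(extract_response(decoded))
```

Note that this truncates at the first literal `Below`, so a genuine answer containing that word would be cut short.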