---
license: openrail
language:
- it
datasets:
- teelinsan/camoscio
---

# ExtremITA Camoscio 7 billion parameters

This is the base model trained on Italian instructions, a sibling of Alpaca.
It is based on the [teelinsan/camoscio-7b-llama](https://huggingface.co/teelinsan/camoscio-7b-llama) adapters and the original LLaMA model, and it adds nothing new to [teelinsan/camoscio-7b-llama](https://huggingface.co/teelinsan/camoscio-7b-llama). Our version merges those adapters into the base weights to obtain a more stable model that can be further fine-tuned, which we did for the [EVALITA 2023](https://www.evalita.it/campaigns/evalita-2023/) challenge.
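
For reference, the following is a minimal sketch of how a LoRA adapter can be merged into the base weights with the `peft` library. The repository names come from this model card, while the output path is hypothetical and the exact merge script used for this model may differ:

```python
# Sketch only: folding the Camoscio LoRA adapters into the base LLaMA weights.
from peft import PeftModel
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("yahma/llama-7b-hf")
model = PeftModel.from_pretrained(base, "teelinsan/camoscio-7b-llama")
model = model.merge_and_unload()  # merge adapter weights into the base model
model.save_pretrained("extremITA-Camoscio-7b")  # hypothetical output path
```

Merging removes the runtime dependency on the adapter weights, which is what makes the resulting checkpoint easier to fine-tune further as a single standalone model.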

# Usage

Check out the GitHub repository for more insights and code: https://github.com/crux82/ExtremITA

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")
# The LLaMA tokenizer has no pad token by default; set one so that
# batched prompts can be padded.
tokenizer.pad_token_id = 0

model = LlamaForCausalLM.from_pretrained(
    "sag-uniroma2/extremITA-Camoscio-7b",
    load_in_8bit=True,  # requires the bitsandbytes package
    device_map="auto",
)

generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.75,
    top_k=40,
    num_beams=4,
)

# Italian prompts: "Summarize the story of Pinocchio" and a FizzBuzz request.
prompts = [
    "Riassumi la storia di Pinocchio",
    "Scrivi un programma che stampa i numeri da 1 a 100. Ma per i multipli di tre stampa 'Fizz' \
al posto del numero e per i multipli di cinque stampa 'Buzz'. Per i numeri che sono multipli \
sia di tre che di cinque stampa 'FizzBuzz'."
]

inputs = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True).to(model.device)

with torch.no_grad():
    gen_outputs = model.generate(
        **inputs,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
    )

# With return_dict_in_generate=True, generate() returns an output object;
# the generated token ids live in gen_outputs.sequences.
for sequence in gen_outputs.sequences:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```