Summarization
PEFT
Safetensors
Ukrainian
dpo
SGaleshchuk committed on
Commit
6712069
1 Parent(s): 7dc3342

Update README.md

Files changed (1)
  1. README.md +29 -6
README.md CHANGED
@@ -35,12 +35,35 @@ This model is a fine-tuned version of [SGaleshchuk/Llama-2-13b-hf_uk_rank-32_ft]
 
 
 ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ ```python
+ import torch
+ from tqdm import tqdm
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ peft_model_id = "SGaleshchuk/Llama-2-13b-summarization_uk_dpo"
+
+ # load base LLM model with the DPO-tuned LoRA adapter, and its tokenizer
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     peft_model_id,
+     low_cpu_mem_usage=True,
+     torch_dtype=torch.float16,
+     load_in_4bit=True)
+
+ tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
+
+ # val_instructions: list of summarization prompts prepared beforehand
+ for instruct in tqdm(val_instructions):
+     input_ids = tokenizer(
+         instruct, return_tensors="pt", truncation=True).input_ids.cuda()
+     with torch.inference_mode():
+         outputs = model.generate(
+             input_ids=input_ids,
+             max_new_tokens=128,
+             do_sample=True,
+             top_p=0.9,
+             temperature=1e-2,
+         )
+     result = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
+     # strip the echoed prompt, keeping only the generated summary
+     result = result[len(instruct):]
+     print(result)
+ ```
 
 ## Training procedure
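Note that `temperature=1e-2` combined with `do_sample=True` makes generation close to greedy decoding, a reasonable choice for summarization where deterministic output is usually preferred.

The snippet above assumes `val_instructions` already exists. As a minimal sketch, such prompts might be built with an instruction-style template like the one below; the exact template used during fine-tuning is not shown in this commit, so the wording here is only an assumption:

```python
# hypothetical prompt construction; the actual fine-tuning template
# is not part of this commit
val_texts = ["..."]  # Ukrainian articles to summarize
val_instructions = [
    f"Підсумуй наступний текст: {text}\n"  # "Summarize the following text:"
    for text in val_texts
]
```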