phatvo committed on
Commit
7d6e7dd
1 Parent(s): c72f155

Update README.md

Files changed (1)
  1. README.md +35 -1
README.md CHANGED
@@ -31,4 +31,38 @@ Evaluated on FULL validation set of HotpotQA.
  | pretrained | 0.2980 | 0.3979 | 0.4116 | 0.5263 |
  | finetuned | 0.3606 | **0.4857** | 0.4989 | 0.5318 |

- Finetuned version increases **22% on F1 and 15% on average**
+ Finetuned version increases **22% on F1 and 15% on average**
+
+ ### Usage
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+ model_id = "phatvo/Meta-Llama3.1-8B-Instruct-RAFT"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, device_map="auto", revision="main", trust_remote_code=True)
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+
+ # System instruction asking for chain-of-thought reasoning plus a final answer.
+ inst = "Given the question and context below, thinking in logical reasoning way for your answer. \
+ Please provide only your answer in this format: CoT Answer: {reason} <ANSWER>: {answer}."
+ context = ""   # retrieved context passages go here
+ question = ""  # the question to answer
+ prompt = f"{context}\n{question}"
+
+ chat = [
+     {"role": "system", "content": inst},
+     {"role": "user", "content": prompt},
+ ]
+ # add_generation_prompt=True appends the assistant header so the model starts a new turn.
+ prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+
+ output = pipe(prompt,
+               temperature=0.001,  # near-greedy decoding
+               max_new_tokens=1024,  # recommended to set this above 800
+               return_full_text=False,
+               do_sample=True)
+
+ print(output[0]["generated_text"])
+ ```
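
The system prompt above asks the model to respond as `CoT Answer: {reason} <ANSWER>: {answer}`, so the final answer can be pulled out of the generated text with a small post-processing helper. This is a minimal sketch based only on that stated format; the `extract_answer` function and the split on the `<ANSWER>:` marker are illustrative, not part of the model card:

```python
def extract_answer(generated_text: str) -> str:
    """Return the text after the last <ANSWER>: marker, or the whole text if it is missing."""
    marker = "<ANSWER>:"
    if marker in generated_text:
        return generated_text.rsplit(marker, 1)[-1].strip()
    return generated_text.strip()

# e.g. answer = extract_answer(output[0]["generated_text"])
```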
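
As a quick sanity check, the reported gains can be reproduced from the two table rows above, assuming the bolded column is F1 and "on average" means the mean of the four reported metrics (both are assumptions, since the column headers are outside this hunk):

```python
# Relative improvement of the finetuned model over the pretrained one,
# using the four metric values from the table above.
pretrained = [0.2980, 0.3979, 0.4116, 0.5263]
finetuned = [0.3606, 0.4857, 0.4989, 0.5318]

f1_gain = finetuned[1] / pretrained[1] - 1   # assumes the bolded (second) column is F1
avg_gain = sum(finetuned) / sum(pretrained) - 1  # ratio of the means over the four metrics

print(f"F1 gain: {f1_gain:.1%}")        # -> 22.1%
print(f"Average gain: {avg_gain:.1%}")  # -> 14.9%
```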