ravithejads committed · Commit 6b0efd7 · verified · 1 Parent(s): 427929a

Update README.md

Files changed (1): README.md +66 -2
README.md CHANGED
@@ -38,9 +38,73 @@ BRAG-Llama-3.1-8b-v0.1 is part of the BRAG series of SLMs (Small Language Models
  | BRAG SLM | BRAG-Llama-3-8b-v0.1 | 8b | 8k | 51.70 |
  | BRAG Ultra SLM | BRAG-Qwen2-1.5b-v0.1 | 1.5b | 32k | 46.43 |

- ## How to Use
-
- [Include information on how to use the model]
+ ## Usage
+
+ #### Prompt Format
+
+ The model expects the message format below: a system instruction, followed by a single user turn that contains the retrieved context and the query together.
+
+ ```
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant to answer the queries based on the given context."},
+     {"role": "user", "content": """Context: <CONTEXT INFORMATION> \n\n <USER QUERY>"""},
+ ]
+ ```
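In a RAG setup, the user turn is usually assembled from the retrieved chunks and the question at request time. A minimal sketch of that assembly, using sentences from the example passage further down as stand-in retrieved chunks (the `context_chunks` and `query` names are illustrative, not part of the model card):

```python
# Illustrative only: build the user turn from retrieved chunks and a question.
context_chunks = [
    "Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection.",
    "It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858.",
]
query = "What is the Grotto a replica of?"

context = "\n".join(context_chunks)
messages = [
    {"role": "system", "content": "You are a helpful assistant to answer the queries based on the given context."},
    {"role": "user", "content": f"Context: {context} \n\n {query}"},
]
```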
+
+ #### Running with the `pipeline` API
+
+ ```python
+ import transformers
+ import torch
+
+ model_id = "maximalists/BRAG-Llama-3.1-8b-v0.1"
+
+ # Load the model in bfloat16 and let accelerate place it on the available device(s).
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant to answer the queries based on the given context."},
+     {"role": "user", "content": """Context:\nArchitecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.\n\nTo whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"""},
+ ]
+
+ outputs = pipeline(
+     messages,
+     max_new_tokens=256,
+ )
+
+ # With chat-style input, the last message in the generated conversation is the assistant's answer.
+ print(outputs[0]["generated_text"][-1])
+ ```
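The printed element above is the assistant message itself; in recent `transformers` releases it is a dict, so the reply text sits under its `"content"` key. A small follow-up, assuming that output shape:

```python
# Assumes the chat-style pipeline output from the snippet above.
reply = outputs[0]["generated_text"][-1]["content"]
print(reply)
```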
+
+ #### Running the model on a single / multi GPU
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "maximalists/BRAG-Llama-3.1-8b-v0.1"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ # device_map="auto" spreads the weights across whatever GPUs are visible.
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant to answer the queries based on the given context."},
+     {"role": "user", "content": """Context:\nArchitecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.\n\nTo whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"""},
+ ]
+
+ # Apply the chat template and append the assistant generation prompt before generating.
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
+ ).to("cuda")
+
+ outputs = model.generate(**input_ids, max_new_tokens=256)
+ # Note: the decoded text includes the prompt followed by the model's answer.
+ print(tokenizer.decode(outputs[0]))
+ ```
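If a single GPU cannot hold the full-precision weights, 4-bit quantization through `bitsandbytes` is a common way to shrink the footprint. A minimal sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available (this is a generic `transformers` loading option, not a recommendation from the model card):

```python
# pip install accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "maximalists/BRAG-Llama-3.1-8b-v0.1"

# Quantize weights to 4-bit NF4 at load time; compute still runs in bfloat16.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
# From here, apply_chat_template and generate work exactly as in the snippet above.
```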

  ## Use Cases