Tulpar-7b is a Llama-2-7b-based model trained by Hyperbee.ai. Training is done on …
# Example Usage
Loading the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HyperbeeAI/Tulpar-7b-v0")
model = AutoModelForCausalLM.from_pretrained("HyperbeeAI/Tulpar-7b-v0")
```
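By default `from_pretrained` loads the weights in full precision, which needs on the order of 28 GB of memory for a 7B model. As a minimal sketch (our suggestion, not part of the original card), you can load in half precision and let 🤗 Transformers place the weights on available GPUs; `device_map="auto"` additionally requires the `accelerate` package:

```python
import torch
from transformers import AutoModelForCausalLM

# Half-precision loading roughly halves memory use (~14 GB for a 7B model).
# device_map="auto" spreads weights across available devices; needs `accelerate`.
# This variant is a suggestion, not part of the original model card.
model = AutoModelForCausalLM.from_pretrained(
    "HyperbeeAI/Tulpar-7b-v0",
    torch_dtype=torch.float16,
    device_map="auto",
)
```

When the model sits on a GPU, move the tokenized inputs there too, e.g. `inputs = inputs.to(model.device)`, before calling `generate`.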
You can run inference with either of the following prompt formats:
```python
# `input_text` is your query string; `generation_config` holds your
# generation parameters (see the full example below).
prompt = f"### User: {input_text}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, **generation_config)
```

```python
prompt = f"Question: {input_text}\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, **generation_config)
```
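Neither snippet defines `generation_config`; it is left to the user. As a minimal end-to-end sketch, assuming illustrative sampling settings (not values prescribed by the model card):

```python
# Illustrative generation settings; assumptions, not the authors' recommendations.
generation_config = dict(
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

input_text = "What is the capital of Turkey?"  # hypothetical example query
prompt = f"### User: {input_text}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, **generation_config)

# `output` contains the prompt followed by the completion; decode and print it.
print(tokenizer.decode(output[0], skip_special_tokens=True))
```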
# Evaluation