Update README.md

README.md (CHANGED)

@@ -34,13 +34,26 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E
* **Model type:** **llama-2-7b-hf_open-platypus** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English

- ### Prompt template:
- ```
- ### Instruction:
-
- <prompt> (without the <>)
-
- ### Response:
- ```
+ ### How to use:
+
+ ```python
+ # Use a pipeline as a high-level helper
+ >>> from transformers import pipeline
+ >>> pipe = pipeline("text-generation", model="lgaalves/llama-2-7b-hf_open-platypus")
+ >>> question = "What is a large language model?"
+ >>> answer = pipe(question)
+ >>> print(answer[0]['generated_text'])
+ ```
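+
+ You can also pass generation arguments in the call; the pipeline forwards them to the underlying `generate` method. A minimal sketch (the parameter values below are illustrative, not defaults from this card):
+
+ ```python
+ # Sample a longer answer; these settings are illustrative
+ >>> answer = pipe(question, max_new_tokens=128, do_sample=True, temperature=0.7)
+ >>> print(answer[0]['generated_text'])
+ ```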
+
+ Or you can load the model directly using:
+
+ ```python
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("lgaalves/llama-2-7b-hf_open-platypus")
+ model = AutoModelForCausalLM.from_pretrained("lgaalves/llama-2-7b-hf_open-platypus")
+ ```
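+
+ With the model loaded this way, generation runs through `model.generate`. A minimal sketch continuing from the block above (the prompt and output length are illustrative):
+
+ ```python
+ # Tokenize a prompt, generate, and decode (illustrative values)
+ inputs = tokenizer("What is a large language model?", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```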

### Training Dataset