Update README.md
README.md
* **Model type:** **GPT-2-dolly** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English

### How to use

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/gpt2-dolly")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
What is a large language model?
A large language model aims for understanding a large group of phenomena through computational methods which allow more precise models.
A model also encourages the use of empirical concepts such as equations, models, natural numbers, natural language
```
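
The `text-generation` pipeline forwards standard generation keyword arguments to `model.generate`, so you can control output length and sampling. The values below are illustrative, not settings from the original card:

```python
# Illustrative only: tune output length and sampling via generation kwargs
>>> answer = pipe(question, max_new_tokens=64, do_sample=True, temperature=0.7)
>>> print(answer[0]['generated_text'])
```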

Or, you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2-dolly")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2-dolly")
```
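
With the model and tokenizer loaded this way, generation follows the usual Transformers pattern; a minimal sketch (the prompt and generation settings are illustrative, not part of the original card):

```python
# Minimal sketch: tokenize a prompt, generate, and decode the result
inputs = tokenizer("What is a large language model?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS to avoid a warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```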

### Training Dataset