Commit 6488729 · Update README.md
Parent(s): 1838184

README.md CHANGED
@@ -6,6 +6,7 @@ tags:
 - gpt
 - llm
 - large language model
+- thor service
 inference: false
 ---
 # Model Card
@@ -56,12 +57,6 @@ You can print a sample prompt after the preprocessing step to see how it is feed
 print(generate_text.preprocess("What is thor service?")["prompt_text"])
 ```
 
-```bash
-<|prompt|>Why is drinking water so healthy?</s><|answer|>
-```
-
-Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
-
 ```python
 import torch
 from h2oai_pipeline import H2OTextGenerationPipeline
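The snippet removed by this commit showed the model's prompt template, `<|prompt|>...</s><|answer|>`. A minimal sketch of how such a prompt string could be assembled by hand (the `build_prompt` helper is hypothetical, not part of `h2oai_pipeline`; the real pipeline's `preprocess` step does this internally):

```python
def build_prompt(question: str) -> str:
    # Hypothetical helper reproducing the template shown in the
    # removed README example: <|prompt|>...</s><|answer|>
    return f"<|prompt|>{question}</s><|answer|>"

print(build_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?</s><|answer|>
```

This is only an illustration of the string format; in practice the README's `generate_text.preprocess(...)` call is the supported way to inspect the exact prompt fed to the model.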