alea-institute committed • Commit cc3f8cb • 1 Parent(s): 27b2af9
Update README.md
README.md CHANGED
@@ -120,7 +120,7 @@ data curation, model training, and model evaluation.
 
 ## Generation
 
-You can generate your own examples as follows
+You can generate your own examples as follows. For a "complete" patent, you'll want to extend the `max_new_tokens` value to the biggest number you can fit in your available VRAM.
 
 ```python
 import json
@@ -130,23 +130,22 @@ from transformers import pipeline
 p = pipeline('text-generation', 'alea-institute/kl3m-002-170m-patent', device='cpu')
 
 # Example usage on CPU
-text = "#
+text = "# Patent\n\n## Title"
 print(
     json.dumps(
         [
             r.get("generated_text")
-            for r in p(text, do_sample=True, temperature=0.5, num_return_sequences=3, max_new_tokens=
+            for r in p(text, do_sample=True, temperature=0.5, num_return_sequences=3, max_new_tokens=32)
         ],
         indent=2
     )
 )
 ```
 
-```json
 [
-  "#
-  "# Title\
-  "# Title\
+  "# Patent\n\n## Title\nMethod for manufacturing a temperature-controllable polyurethane composition and method",
+  "# Patent\n\n## Title\nElectronic device\n\n## Abstract\nAn electronic device includes a display panel and a",
+  "# Patent\n\n## Title\nMethods and devices for tissue repair using a neural network\n\n## Abstract"
 ]
 ```
 
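For readers following the updated advice about `max_new_tokens`, here is a minimal sketch of the same pipeline call pushed toward a longer, more complete draft. It assumes a CUDA-capable GPU is available; the `device=0` placement, the single returned sequence, and the 1024-token budget are illustrative choices, not values from the README, so scale `max_new_tokens` to whatever fits in your VRAM.

```python
import json

from transformers import pipeline

# Illustrative sketch: same model as the README example, moved to the first
# CUDA device so a larger max_new_tokens budget is practical.
# The 1024-token budget is an assumption; raise or lower it to fit your VRAM.
p = pipeline("text-generation", "alea-institute/kl3m-002-170m-patent", device=0)

text = "# Patent\n\n## Title"
print(
    json.dumps(
        [
            r.get("generated_text")
            for r in p(
                text,
                do_sample=True,
                temperature=0.5,
                num_return_sequences=1,
                max_new_tokens=1024,
            )
        ],
        indent=2,
    )
)
```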