ARahul2003 committed 2946e2f (parent: 0185e3a)

Update README.md

Update the license

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
+license: cc
 tags:
 - trl
 - transformers
@@ -9,8 +9,6 @@ datasets:
 - ProlificAI/social-reasoning-rlhf
 language:
 - en
-metrics:
-- accuracy
 pipeline_tag: conversational
 ---
 
@@ -83,7 +81,7 @@ outputs = model(**inputs, labels=inputs["input_ids"])
 
 If you want to use the model for inference in a gradio app, consider the following code:
 
-
+```python
 from trl import AutoModelForSeq2SeqLMWithValueHead
 from transformers import pipeline, AutoTokenizer
 import torch
@@ -123,6 +121,6 @@ gr.ChatInterface(
 examples=examples
 ).launch()
 
-
+```
 
 Make sure to keep all the tensors on the same device (CPU/GPU).
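For reference, the README front matter after this commit, pieced together from the two metadata hunks (lines 6–7 of the file fall outside the diff context and stay elided):

```yaml
---
license: cc
tags:
- trl
- transformers
# … lines 6–7 not shown in the diff …
datasets:
- ProlificAI/social-reasoning-rlhf
language:
- en
pipeline_tag: conversational
---
```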
|