ARahul2003 committed
Commit
4656afb
1 Parent(s): 0641a71

Update README.md


Add corrections to the model card

Files changed (1)
  1. README.md +12 -4
README.md CHANGED
@@ -4,6 +4,14 @@ tags:
 - trl
 - transformers
 - reinforcement-learning
+- LLM detoxification
+datasets:
+- ProlificAI/social-reasoning-rlhf
+language:
+- en
+metrics:
+- accuracy
+pipeline_tag: conversational
 ---
 
 # TRL Model
@@ -24,7 +32,7 @@ You can then generate text as follows:
 ```python
 from transformers import pipeline
 
-generator = pipeline("text-generation", model="ARahul2003//tmp/tmp0xxgcy93/ARahul2003/lamini_flan_t5_detoxify_rlaif")
+generator = pipeline("text-generation", model="ARahul2003/lamini_flan_t5_detoxify_rlaif")
 outputs = generator("Hello, my llama is cute")
 ```
 
@@ -34,9 +42,9 @@ If you want to use the model for training or to obtain the outputs from the valu
 from transformers import AutoTokenizer
 from trl import AutoModelForCausalLMWithValueHead
 
-tokenizer = AutoTokenizer.from_pretrained("ARahul2003//tmp/tmp0xxgcy93/ARahul2003/lamini_flan_t5_detoxify_rlaif")
-model = AutoModelForCausalLMWithValueHead.from_pretrained("ARahul2003//tmp/tmp0xxgcy93/ARahul2003/lamini_flan_t5_detoxify_rlaif")
+tokenizer = AutoTokenizer.from_pretrained("ARahul2003/lamini_flan_t5_detoxify_rlaif")
+model = AutoModelForCausalLMWithValueHead.from_pretrained("ARahul2003/lamini_flan_t5_detoxify_rlaif")
 
 inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
 outputs = model(**inputs, labels=inputs["input_ids"])
-```
+```
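A note on trying the corrected repository id: since the model name points to a FLAN-T5 (encoder-decoder) base, the `text2text-generation` pipeline task is likely a better fit than `text-generation`. The sketch below is an assumption layered on top of this commit, not part of the model card itself.

```python
from transformers import pipeline

# Hypothetical quick check of the corrected repo id from this commit.
# Assumption: the checkpoint is FLAN-T5 based (encoder-decoder), so the
# "text2text-generation" task is used here instead of "text-generation".
generator = pipeline(
    "text2text-generation",
    model="ARahul2003/lamini_flan_t5_detoxify_rlaif",
)

# Returns a list of dicts with a "generated_text" field.
print(generator("Hello, my llama is cute"))
```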