Tags: Text Classification · Transformers · PyTorch · English · electra · reward-model · reward_model · RLHF · Inference Endpoints
theblackcat102 committed
Commit 0bde796 · 1 parent: cfc0dba

Update README.md

Files changed (1):
  README.md (+1, −1)
README.md CHANGED
@@ -36,7 +36,7 @@ All models are train on these dataset with a same split seed across datasets (if
 
 ```
 from transformer import AutoModelForSequenceClassification, AutoTokenizer
-reward_name = "OpenAssistant/reward-model-deberta-v3-large"
+reward_name = "OpenAssistant/reward-model-electra-large-discriminator"
 rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
 question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
 inputs = tokenizer(question, answer, return_tensors='pt')
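As a side note, the snippet in the diff stops before producing a score, and its import line has a typo (the package is `transformers`, plural). A minimal sketch of the complete flow, assuming `transformers` and `torch` are installed and that the model named in the `+` line downloads from the Hub:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Model name taken from the "+" line of this commit's diff.
reward_name = "OpenAssistant/reward-model-electra-large-discriminator"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

question = "Explain nuclear fusion like I am five"
answer = (
    "Nuclear fusion is the process by which two or more protons and neutrons "
    "combine to form a single nucleus. It is a very important process in the "
    "universe, as it is the source of energy for stars and galaxies."
)

# Encode the (question, answer) pair and read the single regression logit
# as the scalar reward; a higher score means a more preferred answer.
inputs = tokenizer(question, answer, return_tensors="pt")
with torch.no_grad():
    score = rank_model(**inputs).logits[0].item()
print(score)
```

Scoring two candidate answers to the same question and comparing the scalars is the typical way this kind of reward model is used for ranking in RLHF.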