Text Generation
Transformers
PyTorch
Safetensors
English
gpt_neox
causal-lm
pythia
text-generation-inference
Inference Endpoints
Iskan123 committed on
Commit eae42f7
1 Parent(s): 7199d8f

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -123,9 +123,7 @@ Learning from Human Feedback (RLHF) to better “follow” human instructions.
 
 The core functionality of a large language model is to take a string of text
 and predict the next token. The token used by the model need not produce the
-most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
-output.
-
+most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output.
 This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
 known to contain profanity and texts that are lewd or otherwise offensive.
 See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
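
For context on the caveat being edited here: next-token prediction is all the model does, and nothing in the decoding loop checks outputs against facts. A minimal sketch with the Hugging Face `transformers` API, using the model ID `EleutherAI/pythia-1b-deduped` named in the card (the prompt and decoding settings are illustrative, not from the card):

```python
# Sketch: next-token generation with Pythia-1B-deduped.
# Assumes `transformers` and `torch` are installed.
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model_id = "EleutherAI/pythia-1b-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
# Greedy decoding: at each step the model scores every vocabulary token
# and appends the highest-scoring one -- plausible continuation, with no
# guarantee of factual accuracy.
tokens = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(tokens[0]))
```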