natolambert committed on
Commit c1ae476
1 Parent(s): d79d388

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -156,7 +156,7 @@ Compared to Tulu 2, DPO hyperparameters are the same. SFT is lower LR and 3 epoc
 
 ## Bias, Risks, and Limitations
 
-This adapted OLMo model is a research artifact, not a consumer product.
+This adapted OLMo model is a research artifact.
 It is intended to benefit the research community interested in understanding the safety properties of LLMs and developers building safety tools for LLMs.
 For this reason, the model does not include a specific safety filter or safety training data.
 While our model scores well relative to its peers on ToxiGen, it is possible for the model to generate harmful and sensitive content from some user prompts.