Document Analysis

#3 by parthmm - opened
Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -10,7 +10,6 @@ widget:
 
  Note: This model was previously known as PubMedGPT 2.7B, but we have changed it due to a request from the NIH which holds the trademark for "PubMed".
 
- Paper: [BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text](https://arxiv.org/abs/2403.18421)
 
  BioMedLM 2.7B is new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state of the art performance of 50.3% accuracy on the MedQA biomedical question answering task.
 
@@ -80,7 +79,7 @@ We do not recommend using this model for natural language generation in a produc
  # Bias, Risks, and Limitations
 
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
- Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Weidinger et al. (2021)](https://arxiv.org/pdf/2112.04359.pdf)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
 
  ## Recommendations
 
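For readers arriving at this discussion, a minimal sketch of loading the renamed model with Hugging Face transformers. This assumes the model is published on the Hub as stanford-crfm/BioMedLM and follows the standard causal-LM interface; the repo id, prompt, and generation settings below are illustrative assumptions, not part of this diff.

```python
# Minimal sketch (assumption: the renamed model is hosted as "stanford-crfm/BioMedLM"
# and loads through the standard transformers causal-LM classes).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stanford-crfm/BioMedLM"  # assumed repo id, not stated in this diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with a biomedical question, in the spirit of the MedQA task mentioned in the card;
# generation settings here are placeholders, not recommended values.
prompt = "Question: What is the mechanism of action of metformin?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```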