Commit ebd1370 (parent d3f5bcd), committed by J38: Update README.md

Files changed (1): README.md (+9 -6)
README.md CHANGED

@@ -6,17 +6,20 @@ widget:
   - text: 'Photosynthesis is'
 ---
 
-# Model Card for PubmedGPT 2.7B
+# Model Card for BioMedLM 2.7B
 
-PubMedGPT 2.7B is new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state of the art performance of 50.3% accuracy on the MedQA biomedical question answering task.
+Note: This model was previously known as PubMedGPT 2.7B, but we have renamed it at the request of the NIH, which holds the trademark for "PubMed".
 
-As an autoregressive language model, PubMedGPT 2.7B is also capable of natural language generation. However, we have only begun to explore the generation capabilities and limitations of this model, and we emphasize that this model’s generation capabilities are for research purposes only and not suitable for production. In releasing this model, we hope to advance both the development of biomedical NLP applications and best practices for responsibly training and utilizing domain-specific language models; issues of reliability, truthfulness, and explainability are top of mind for us.
+
+BioMedLM 2.7B is a new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state-of-the-art performance of 50.3% accuracy on the MedQA biomedical question answering task.
+
+As an autoregressive language model, BioMedLM 2.7B is also capable of natural language generation. However, we have only begun to explore the generation capabilities and limitations of this model, and we emphasize that this model’s generation capabilities are for research purposes only and not suitable for production. In releasing this model, we hope to advance both the development of biomedical NLP applications and best practices for responsibly training and utilizing domain-specific language models; issues of reliability, truthfulness, and explainability are top of mind for us.
 
 This model was a joint collaboration of [Stanford CRFM](https://crfm.stanford.edu/) and [MosaicML](https://www.mosaicml.com/).
 
 # Table of Contents
 
-- [Model Card for PubmedGPT 2.7B](#model-card-for--model_id-)
+- [Model Card for BioMedLM 2.7B](#model-card-for--model_id-)
 - [Table of Contents](#table-of-contents)
 - [Model Details](#model-details)
 - [Model Description](#model-description)
@@ -39,9 +42,9 @@ This model was a joint collaboration of [Stanford CRFM](https://crfm.stanford.ed
 ## Model Description
 
 <!-- Provide a longer summary of what this model is/does. -->
-PubMedGPT 2.7B is new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state of the art performance of 50.3% accuracy on the MedQA biomedical question answering task.
+BioMedLM 2.7B is a new language model trained exclusively on biomedical abstracts and papers from [The Pile](https://pile.eleuther.ai/). This GPT-style model can achieve strong results on a variety of biomedical NLP tasks, including a new state-of-the-art performance of 50.3% accuracy on the MedQA biomedical question answering task.
 
-As an autoregressive language model, PubMedGPT 2.7B is also capable of natural language generation. However, we have only begun to explore the generation capabilities and limitations of this model, and we emphasize that this model’s generation capabilities are for research purposes only and not suitable for production. In releasing this model, we hope to advance both the development of biomedical NLP applications and best practices for responsibly training and utilizing domain-specific language models; issues of reliability, truthfulness, and explainability are top of mind for us.
+As an autoregressive language model, BioMedLM 2.7B is also capable of natural language generation. However, we have only begun to explore the generation capabilities and limitations of this model, and we emphasize that this model’s generation capabilities are for research purposes only and not suitable for production. In releasing this model, we hope to advance both the development of biomedical NLP applications and best practices for responsibly training and utilizing domain-specific language models; issues of reliability, truthfulness, and explainability are top of mind for us.
 
 This model was a joint collaboration of [Stanford CRFM](https://crfm.stanford.edu/) and [MosaicML](https://www.mosaicml.com/).