Manually set `pipeline_tag` in model metadata

#1
by mattupson - opened
Files changed (1)
  1. README.md
README.md CHANGED
@@ -1,44 +1,45 @@
  ---
  license: apache-2.0
+ pipeline_tag: text-classification
  ---

  # WellcomeBertMesh

  WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([MeSH](https://www.nlm.nih.gov/mesh/meshhome.html)). Although it was developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, namely abstracts from biomedical publications.

  # Model description

  The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBERT as its pretrained model.

  WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which allows the model to attend to different tokens per label when deciding whether each label applies.

  We train the model on data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 data for testing, which gives ~2.5M publications for training and 220K for testing, out of a total of 14M publications. Training WellcomeBertMesh takes 4 days on 8 Nvidia P100 GPUs.

  The model achieves 63% micro F1 with a 0.5 threshold for all labels.

  The code for developing the model is open source and can be found at https://github.com/wellcometrust/grants_tagger

  # How to use

  ⚠️ You need transformers 4.17+ for the example to work, due to its recent support for custom models.

  You can use the model straight from the Hub, but because it contains a custom forward function (due to the multilabel attention head) you have to pass `trust_remote_code=True`. You can get the probabilities for all labels by omitting `return_labels=True`.

  ```python
  from transformers import AutoModel, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained(
      "Wellcome/WellcomeBertMesh"
  )
  model = AutoModel.from_pretrained(
      "Wellcome/WellcomeBertMesh",
      trust_remote_code=True
  )

  text = "This grant is about malaria and not about HIV."
  inputs = tokenizer([text], padding="max_length")
  labels = model(**inputs, return_labels=True)
  print(labels)
  ```

  You can inspect the model code by navigating to the files and opening `model.py`.
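
The change itself is a single line of YAML front matter in the model card. For reference, the same kind of metadata update can also be made programmatically; the sketch below is only an illustration and assumes the `huggingface_hub` library's `metadata_update` helper plus write access to the repository.

```python
# Sketch only: updating model card metadata without editing README.md by hand.
# Assumes huggingface_hub is installed and you are authenticated with write
# access to the repo; helper names and signatures can vary between versions.
from huggingface_hub import metadata_update

metadata_update(
    "Wellcome/WellcomeBertMesh",
    {"pipeline_tag": "text-classification"},
    overwrite=True,  # replace the field if it already exists
)
```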
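The model description above mentions a multilabel attention head on top of PubMedBERT. The actual implementation lives in `model.py` in the repository; the snippet below is only a generic sketch of the idea (one attention distribution over tokens per label, with each pooled representation classified independently), and the class name, shapes, and label count are illustrative rather than taken from the Wellcome code.

```python
import torch
import torch.nn as nn


class MultilabelAttentionHead(nn.Module):
    """Generic sketch of a multilabel attention head: one attention
    distribution over tokens per label, classified independently."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        # One scoring vector per label, applied to every token.
        self.label_queries = nn.Linear(hidden_size, num_labels, bias=False)
        # One binary classifier per label for its pooled representation.
        self.classifiers = nn.Linear(hidden_size, num_labels)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_size) from the encoder.
        scores = self.label_queries(token_embeddings)      # (batch, seq_len, num_labels)
        attn = torch.softmax(scores, dim=1)                 # attend over tokens, per label
        # Label-specific pooled representations: (batch, num_labels, hidden_size).
        pooled = torch.einsum("bsl,bsh->blh", attn, token_embeddings)
        # One logit per label: dot each pooled vector with that label's weights.
        logits = (pooled * self.classifiers.weight.unsqueeze(0)).sum(-1)
        logits = logits + self.classifiers.bias
        return torch.sigmoid(logits)  # independent probability per label


# Illustrative shapes only: 768 hidden units (BERT-base) and 5 labels.
head = MultilabelAttentionHead(hidden_size=768, num_labels=5)
dummy = torch.randn(2, 16, 768)   # (batch, seq_len, hidden_size)
print(head(dummy).shape)          # torch.Size([2, 5])
```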
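The usage section notes that omitting `return_labels=True` returns probabilities for all labels, and the reported 63% micro F1 uses a 0.5 threshold. A rough sketch of applying that threshold yourself is shown below; it assumes the forward pass returns per-label probabilities and that label names live in `model.config.id2label`, both of which should be checked against `model.py`.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Wellcome/WellcomeBertMesh")
model = AutoModel.from_pretrained(
    "Wellcome/WellcomeBertMesh", trust_remote_code=True
)

inputs = tokenizer(
    ["This grant is about malaria and not about HIV."], padding="max_length"
)

# Omitting return_labels=True should yield per-label probabilities
# (assumption - confirm the output format in model.py).
probs = model(**inputs)

threshold = 0.5  # threshold behind the reported 63% micro F1
predicted = [
    model.config.id2label[i]   # assumes id2label maps index -> MeSH term
    for i, p in enumerate(probs[0])
    if float(p) > threshold
]
print(predicted)
```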