PetBERT is a masked language model based on the BERT architecture, further trained on first-opinion veterinary electronic health records from across the UK.

## Paper Abstract
Effective public health surveillance requires consistent monitoring of disease signals such that researchers and decision-makers can react dynamically to changes in disease occurrence. However, whilst surveillance initiatives exist in production animal veterinary medicine, comparable frameworks for companion animals are lacking. First-opinion veterinary electronic health records (EHRs) have the potential to reveal disease signals and often represent the initial reporting of clinical syndromes in animals presenting for medical attention, highlighting their possible significance in early disease detection. Yet despite their availability, there are limitations surrounding their free text-based nature, inhibiting the ability for national-level mortality and morbidity statistics to occur. This paper presents PetBERT, a large language model trained on over 500 million words from 5.1 million EHRs across the UK. PetBERT-ICD is the additional training of PetBERT as a multi-label classifier for the automated coding of veterinary clinical EHRs with the International Classification of Disease 11 framework, achieving F1 scores exceeding 83% across 20 disease codings with minimal annotations. PetBERT-ICD effectively identifies disease outbreaks, outperforming current clinician-assigned point-of-care labelling strategies up to 3 weeks earlier. The potential for PetBERT-ICD to enhance disease surveillance in veterinary medicine represents a promising avenue for advancing animal health and improving public health outcomes.

- **Developed by:** [Small Animal Veterinary Surveillance Network (SAVSNET)](https://www.liverpool.ac.uk/savsnet/)
- **Model type:** Masked Language Model
- **Language(s) (NLP):** English
- **License:** openrail

## How to Get Started with the Model

```
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("SAVSNET/PetBERT")
model = AutoModelForMaskedLM.from_pretrained("SAVSNET/PetBERT")

# Build a fill-mask pipeline and predict the masked token in a clinical narrative
PetBERT_masked = pipeline("fill-mask", model=model, tokenizer=tokenizer)
PetBERT_masked("Suspected pneumonia, will require an [MASK] but in the meantime will prescribe antibiotics")
```
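
The fill-mask pipeline returns a ranked list of candidate tokens for the `[MASK]` position, each with a confidence score and the completed sentence.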

### Training Data

Electronic health records have been collected since March 2014 by SAVSNET, the Small Animal Veterinary Surveillance Network, a sentinel network of 253 volunteer veterinary practices across the United Kingdom. A complete description of SAVSNET has been presented elsewhere [5]. In summary, practices whose practice management software is compatible with the SAVSNET data exchange are recruited on a convenience basis. Within these participating practices, data is collected from each booked consultation (where an appointment has been made to see a veterinary practitioner or nurse); owners can opt out at the time of consultation, in which case their data are excluded. Data is collected on a consultation-by-consultation basis and includes species, breed, sex, neuter status, age, owner’s postcode, insurance and microchipping status, and, crucially for this study, a free-text clinical narrative outlining the events of that consultation. At the end of each consultation, veterinary practitioners choose from 10 ‘main presenting complaint’ (MPC) groups to categorise the main reason the animal presented: gastrointestinal, respiratory, pruritus, tumour, renal, trauma, post-operative check-up, vaccination, other healthy and other unwell. Sensitive information such as personal identifiers is removed from the data, with no further preprocessing applied. SAVSNET has ethical approval from the University of Liverpool Research Ethics Committee (RETH001081). The table below summarises the cleaned SAVSNET dataset for cats and dogs only. EHRs were segregated into training and testing sets by source practice, so that the clinical notes used for testing came from practices whose clinicians had not contributed to the training set, strengthening the robustness of the results and mitigating potential bias.

| Variable | Level                | Dogs              | Cats              |
|----------|----------------------|-------------------|-------------------|
| Species  | Dogs                 | 5,275,843         | –                 |
|          | Cats                 | –                 | 2,062,074         |
| Sex      | Male                 | 2,710,641 (51.2%) | 1,009,388 (48.1%) |
|          | Female               | 2,565,202 (48.8%) | 1,052,686 (51.9%) |
| Country  | England              | 4,715,276 (90.4%) | 1,871,536 (91.6%) |
|          | Scotland             | 252,024 (4.8%)    | 81,883 (4.2%)     |
|          | Wales                | 216,799 (4.2%)    | 77,774 (3.9%)     |
|          | Northern Ireland     | 34,129 (0.6%)     | 6,204 (0.3%)      |
| Age      | Infant (0 to 1 year) | 501,339 (11.9%)   | 190,534 (9.8%)    |
|          | Adult (1–10 years)   | 2,830,739 (64.5%) | 887,640 (51.0%)   |
|          | Senior (>10 years)   | 1,036,075 (23.6%) | 691,738 (39.2%)   |
| Neutered | Yes                  | 3,587,028 (68.0%) | 1,670,280 (81.4%) |
|          | No                   | 1,688,093 (32.0%) | 391,794 (19.6%)   |
| MPC      | Gastroenteric        | 174,688 (3.3%)    | 45,368 (2.4%)     |
|          | Kidney disease       | 14,046 (0.2%)     | 18,169 (0.9%)     |
|          | Other healthy        | 1,333,760 (25.4%) | 494,170 (23.7%)   |
|          | Other unwell         | 1,006,031 (19.6%) | 418,107 (21.3%)   |
|          | Post op              | 414,764 (7.8%)    | 135,865 (6.6%)    |
|          | Pruritus             | 283,880 (5.3%)    | 53,869 (2.7%)     |
|          | Respiratory          | 52,625 (0.9%)     | 27,123 (1.3%)     |
|          | Trauma               | 249,039 (4.7%)    | 102,646 (4.9%)    |
|          | Tumour               | 100,080 (1.8%)    | 23,865 (1.1%)     |
|          | Vaccination          | 1,639,268 (31.0%) | 739,890 (35.1%)   |
### Dataset availability statement:
The datasets analysed during the current study are not publicly available due to issues surrounding owner confidentiality. Reasonable requests can be made to the SAVSNET Data Access and Publication Panel (savsnet@liverpool.ac.uk) for researchers who meet the criteria for access to confidential data.

### Training Procedure

Adaptation of the ULMFiT framework was used to produce PetBERT, with minimal modifications to the BERT architecture. First, the pre-trained BERT-base model, previously exposed to the general-purpose language of Wikipedia and BooksCorpus, was further fine-tuned on the 500-million-token dataset of first-opinion clinical free-text narratives using a simultaneous training task of Masked Language Modelling (MLM) and Next Sentence Prediction (NSP), mimicking the tasks used in the initial pre-training of BERT. For MLM, 15% of the words within a given clinical narrative were masked at random across the entire training dataset, and the model was tasked with substituting each masked word with a suitable word, requiring a deep bidirectional understanding of the text. For NSP, sentences were randomly split and rejoined either to their original following sentence or to a random sentence, with a [SEP] token in between; the model had to determine whether the new sentence pairs made sense, encouraging cross-sentence understanding of the text. A randomly selected 10% evaluation set was used to calculate a validation loss and determine the number of training epochs required; training ended when the evaluation loss began to increase, which occurred beyond epoch 8, and that model was selected for downstream tasks. Training took 450 hours on a single NVIDIA A100 GPU.
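
As a rough illustration of the continued pre-training described above, the sketch below runs the MLM half of the objective with the Hugging Face `Trainer`; the NSP objective, the 10% evaluation split and the early-stopping criterion are omitted for brevity, and the toy corpus, base checkpoint and output directory are placeholders rather than the actual training setup.

```
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder corpus; the real model continued pre-training on ~500M tokens of clinical narratives.
corpus = Dataset.from_dict({"text": [
    "Suspected pneumonia, will require an x-ray but in the meantime will prescribe antibiotics.",
    "Booster vaccination given, no concerns raised by owner.",
]})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenise the free-text narratives.
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

# Mask 15% of tokens at random, as in the paper.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="petbert-mlm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```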

## Environmental Impact

- **Hardware Type:** 1 x NVIDIA A100
- **Hours used:** ~450
- **Cloud Provider:** https://www.dur.ac.uk/arc/nvidiacuda/

## Citation

**BibTeX:**
```
@article{Farrell2023PetBERT:Records,
  title = {{PetBERT: automated ICD-11 syndromic disease coding for outbreak detection in first opinion veterinary electronic health records}},
  author = {Farrell, Sean and Appleton, Charlotte and Noble, Peter John Mäntylä and Al Moubayed, Noura},
  year = {2023},
  journal = {Scientific Reports},
  volume = {13},
  number = {1},
  month = {10},
  pages = {1--14},
  publisher = {Nature Publishing Group},
  url = {https://www.nature.com/articles/s41598-023-45155-7},
  doi = {10.1038/s41598-023-45155-7},
  issn = {2045-2322},
  pmid = {37865683},
  keywords = {Data mining, Machine learning}
}
```