---
license: mit
task_categories:
- text-classification
tags:
- medical
size_categories:
- 10K<n<100K
---
# CTMatch Dataset
This is a combined set of two labelled datasets of (topic, doc, label) triples in JSONL format, where topic is a patient description, doc is a set of selected fields from a clinical trial document, and label is in {0, 1, 2}.
(These partially duplicate some of the ir_dataset collections also available on HF.)
These have been processed using ctproc, and in this form can be passed to various tokenizers for fine-tuning (see ctmatch for examples).
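As a minimal sketch of how the labelled triples might be loaded and tokenized for fine-tuning (the file name and tokenizer checkpoint below are placeholders, not fixed by this card; the topic/doc/label field names come from the description above):

```python
# Sketch: load a labelled (topic, doc, label) JSONL file and tokenize the pairs.
# The file name and tokenizer checkpoint are placeholders, not part of this card.
from datasets import load_dataset
from transformers import AutoTokenizer

data = load_dataset("json", data_files="labelled_triples.jsonl")["train"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(example):
    # Encode the patient description (topic) and trial text (doc) as a pair;
    # the integer label (0, 1, or 2) stays as-is for sequence classification.
    return tokenizer(example["topic"], example["doc"], truncation=True, max_length=512)

tokenized = data.map(tokenize)
print(tokenized[0].keys())
```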
These two datasets contain no patient-identifying information and are openly available in raw form:
TREC: http://www.trec-cds.org/2021.html
CSIRO: https://data.csiro.au/collection/csiro:17152
Additionally, for the IR task, other feature representations of the unlabelled documents have been created.
Each of the following files has exactly 374,648 lines, in corresponding order:
doc_texts.csv
- texts extracted from processed documents using several fields, including eligibility minimum and maximum age and the eligibility criteria, structured as in this example from NCT00000102: "Inclusion Criteria: diagnosed with Congenital Adrenal Hyperplasia (CAH) normal ECG during baseline evaluation, Exclusion Criteria: history of liver disease, or elevated liver function tests history of cardiovascular disease"
doc_categories.csv
- 1 x 15 vectors of somewhat arbitrarily chosen topic probabilities (softmax output) generated by the zero-shot classification model, CTMatch.category_model(doc['condition']), lexically ordered as: cancer, cardiac, endocrine, gastrointestinal, genetic, healthy, infection, neurological, other, pediatric, psychological, pulmonary, renal, reproductive
doc_embeddings.csv
- 1 x 384 vectors of embeddings taken from the last hidden state of CTMatch.embedding_model.encode(doc_text), using SentenceTransformers
index2docid.csv
- simple mapping of index to NCT IDs for filtering/reference throughout the IR program, corresponding to the order of the vector and text representations
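As a rough illustration (not the official ctmatch API) of how these feature files can be combined for dense retrieval: the sketch below loads the precomputed embeddings and the index-to-NCTID mapping, encodes a patient query, and ranks trials by cosine similarity. The file paths, query text, header assumptions, and the choice of a 384-dimensional SentenceTransformers checkpoint are all assumptions for illustration only.

```python
# Rough sketch (not the official ctmatch API): dense retrieval over the
# precomputed document embeddings. Paths, query, header conventions, and the
# 384-dim encoder checkpoint are assumptions, not specified by this card.
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

# Assumes the CSVs have no header rows.
doc_embeddings = pd.read_csv("doc_embeddings.csv", header=None).to_numpy()  # (374648, 384)
index2docid = pd.read_csv("index2docid.csv", header=None)                   # index -> NCT ID

# Any 384-dim SentenceTransformers model matches the stored vector size.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query = "58-year-old woman with congenital adrenal hyperplasia and a normal ECG"
q = model.encode(query)

# Cosine similarity between the query vector and every document vector.
scores = doc_embeddings @ q / (np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q))
top10 = np.argsort(-scores)[:10]
print(index2docid.iloc[top10])
```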
See the repo for more information: https://github.com/semajyllek/ctmatch