
Model Card for OpenDeid-12B (ISLabResearch/opendeid-12b-ft-full)

The OpenDeid AICUP Suite is a collection of models developed to facilitate deidentification and temporal normalization research (see paper). It comprises eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B, based on the Pythia Scaling Suite. The 12B model presented here was trained on the original OpenDeid-AICUP corpus.

Model Details

Model Description

This model is trained on the full OpenDeid-AICUP corpus released in the AICUP 2023 competition.

  • Developed by: ISLab
  • Model type: Transformer-based Language Model
  • Language: English
  • License: Apache 2.0
  • Finetuned from model: EleutherAI/pythia-12b

Model Sources

  • Repository: ISLab-git
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Uses

The primary intended use of the OpenDeid AICUP Suite is research on the behavior, functionality, and limitations of large language models for the deidentification and normalization tasks proposed in the AICUP 2023 competition. The suite is intended to provide a controlled setting for performing scientific experiments.

The models in the suite work with the Hugging Face Transformers library. You may also further fine-tune and adapt the model for deployment, as long as your use is in accordance with the Apache 2.0 license and you conduct your own risk and bias assessment.
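The checkpoint also loads with the standard Transformers causal-LM classes, since Pythia models use the GPT-NeoX architecture. Below is a minimal sketch, assuming the usual Auto classes and enough GPU or CPU memory for a 12B-parameter model; it is not an official fine-tuning recipe.

# Minimal sketch: load the checkpoint with the standard Transformers classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ISLabResearch/opendeid-12b-ft-full"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# From here the model can be used for inference or passed to the Trainer API
# for further fine-tuning, subject to the Apache 2.0 license.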

Direct Use

[More Information Needed]

Out-of-Scope Use

Similar to the original Pythia Suite, the OpenDeid AICUP Suite is not intended for deployment. It is not in itself a product and cannot be used for human-facing interactions; for example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case.

The OpenDeid models are English-language only, and are not suitable for translation or generating text in other languages.

OpenDeid-12B has been fine-tuned for the sensitive health information recognition and normalization tasks based on a pre-defined format. This means the OpenDeid AICUP Suite will not respond to a given prompt the way a product like ChatGPT does, as ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions.

Bias, Risks, and Limitations

The OpenDeid AICUP models are based on the Pythia models, which were pre-trained on the Pile, and were further fine-tuned on the OpenDeid AICUP corpus, a dataset compiled for the sensitive health information recognition and normalization tasks. The fine-tuned models tend to generate outputs following a pre-defined output layout, which may not be suitable for downstream tasks such as text summarization or translation.

How to Get Started with the Model

Use the vLLM-based code below, or conventional Transformers code, to get started with the model.

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "ISLabResearch/opendeid-12b-ft-full"

# Load the model with vLLM and its tokenizer with Transformers.
model = LLM(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
eos = tokenizer.eos_token

# Deterministic greedy decoding (temperature 0, fixed seed) that stops at the
# EOS token and keeps the stop string in the output.
seed = 309
params = SamplingParams(max_tokens=50, include_stop_str_in_output=True,
                        temperature=0, ignore_eos=False, stop=[eos], seed=seed)
preds = model.generate("Hello", params, use_tqdm=False)
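
model.generate returns a list of vLLM RequestOutput objects, one per prompt; the text generated for the first prompt can be read as follows:

print(preds[0].outputs[0].text)

Greedy decoding with a fixed seed and an explicit EOS stop string keeps the output deterministic, which fits the pre-defined output layout the model was fine-tuned to produce.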

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

Sensitive Health Information Recognition Results

| Coding Type | Precision | Recall | F-measure | Support |
|---|---|---|---|---|
| IDNUM | 0.7411591 | 0.7117925 | 0.726179 | 2120 |
| DATE | 0.8955588 | 0.7462383 | 0.8141083 | 2459 |
| DOCTOR | 0.7399783 | 0.4105801 | 0.5281268 | 3327 |
| CITY | 0.5793651 | 0.3914209 | 0.4672 | 373 |
| STATE | 1 | 0.6385542 | 0.7794118 | 332 |
| ZIP | 0.9358108 | 0.7847025 | 0.8536209 | 353 |
| DEPARTMENT | 0.846473 | 0.4868735 | 0.6181818 | 419 |
| HOSPITAL | 0.7649083 | 0.5567613 | 0.6444445 | 1198 |
| PATIENT | 0.1577381 | 0.07402235 | 0.1007605 | 716 |
| MEDICALRECORD | 0.6014235 | 0.2262383 | 0.3287938 | 747 |
| ORGANIZATION | 0.08536585 | 0.0945946 | 0.08974359 | 74 |
| TIME | 0.2985075 | 0.04255319 | 0.07448789 | 470 |
| STREET | 0.3888889 | 0.02034884 | 0.03867403 | 344 |
| AGE | 1 | 0.03921569 | 0.0754717 | 51 |
| DURATION | 0 | 0 | 0 | 12 |
| SET | 0 | 0 | 0 | 5 |
| LOCATION-OTHER | 0 | 0 | 0 | 6 |
| PHONE | 0 | 0 | 0 | 1 |
| Micro-avg. | 0.7536671 | 0.497732 | 0.5995277 | 13007 |
| Macro-avg. | 0.5019543 | 0.2902165 | 0.3677879 | 13007 |

Temporal Information Normalization Results

| Temporal Type | Precision | Recall | F-measure | Support |
|---|---|---|---|---|
| DATE | 0.8107961 | 0.6047174 | 0.6927556 | 2459 |
| TIME | 0.05263158 | 0.00212766 | 0.004089979 | 470 |
| DURATION | 0 | 0 | 0 | 12 |
| SET | 0 | 0 | 0 | 5 |
| Micro-avg. | 0.8030221 | 0.5050917 | 0.6201292 | 2946 |
| Macro-avg. | 0.2158569 | 0.1517113 | 0.1781869 | 2946 |
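
For reference, the averages in both tables follow a consistent convention: macro precision and recall are unweighted means over the listed types, and each F-measure (micro and macro) is the harmonic mean of the corresponding precision and recall. The sketch below illustrates that computation; the per-type (tp, fp, fn) counts are hypothetical inputs, and the official AICUP evaluation script may differ in detail.

# Hedged sketch of the micro/macro averaging convention in the tables above.
# `counts` maps each type to hypothetical (true positives, false positives,
# false negatives); the official evaluation script may differ.
def f_measure(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro(counts):
    precisions, recalls = [], []
    tp_sum = fp_sum = fn_sum = 0
    for tp, fp, fn in counts.values():
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
        tp_sum, fp_sum, fn_sum = tp_sum + tp, fp_sum + fp, fn_sum + fn
    micro_p = tp_sum / (tp_sum + fp_sum) if tp_sum + fp_sum else 0.0
    micro_r = tp_sum / (tp_sum + fn_sum) if tp_sum + fn_sum else 0.0
    macro_p = sum(precisions) / len(precisions)
    macro_r = sum(recalls) / len(recalls)
    return ((micro_p, micro_r, f_measure(micro_p, micro_r)),
            (macro_p, macro_r, f_measure(macro_p, macro_r)))

For example, in the temporal table the macro precision is (0.8107961 + 0.05263158 + 0 + 0) / 4 = 0.2158569, and the macro F-measure 0.1781869 is the harmonic mean of 0.2158569 and 0.1517113.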

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
