
Model Card for OpenDeid-2.8B (opendeid-2.8b-ft-full)

The OpenDeid AICUP Suite is a collection of models developed to facilitate deidentification and temporal normalization research (see paper). It contains eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B, based on the Pythia Scaling Suite. The 2.8B model was trained on the original OpenDeid-AICUP corpus.

Model Details

Model Description

This model is trained on the full OpenDeid-AICUP corpus released in the AICUP 2023 competition.

  • Developed by: ISLab
  • Model type: Transformer-based Language Model
  • Language: English
  • License: Apache 2.0
  • Finetuned from model: EleutherAI/pythia-2.8b

Model Sources

  • Repository: ISLab-git
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Uses

The primary intended use of the OpenDeid AICUP Suite is research on the behavior, functionality, and limitations of large language models for the deidentification and normalization tasks proposed in the AICUP 2023 competition. This suite is intended to provide a controlled setting for performing scientific experiments.

The models in the suite work with the Hugging Face Transformers library. You may also further fine-tune and adapt the model for deployment, provided your use complies with the Apache 2.0 license and you conduct your own risk and bias assessment.
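For illustration, a minimal further fine-tuning sketch with the Transformers Trainer is shown below; the in-memory toy dataset, the output directory, and all hyperparameters are placeholder assumptions, not the training setup used for this model.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "ISLabResearch/opendeid-2.8b-ft-full"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy placeholder corpus; substitute your own licensed, risk-assessed data.
ds = Dataset.from_dict({"text": ["example clinical note ..."]}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="opendeid-ft", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()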

Direct Use

[More Information Needed]

Out-of-Scope Use

Similar to the original Pythia Suite, the OpenDeid AICUP Suite is not intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case.

The OpenDeid models are English-language only, and are not suitable for translation or generating text in other languages.

OpenDeid-2.8B has been fine-tuned for the sensitive health information recognition and normalization tasks using a pre-defined output format. This means the OpenDeid AICUP Suite will not respond to a given prompt the way a product like ChatGPT does; ChatGPT was fine-tuned with methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions.

Bias, Risks, and Limitations

The OpenDeid AICUP models are based on the Pythia models, which were pre-trained on the Pile, and were further fine-tuned on the OpenDeid AICUP corpus, a dataset compiled for the sensitive health information recognition and normalization tasks. The fine-tuned models tend to generate outputs in a pre-defined layout, which may not be suitable for downstream tasks such as text summarization or translation.
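As a purely hypothetical illustration (the actual AICUP output layout is not reproduced here), consuming such fixed-format generations downstream typically amounts to parsing line-oriented label/value pairs:

# Hypothetical sketch only: the real OpenDeid output layout is defined by
# the AICUP 2023 task and is NOT shown here. This merely illustrates that a
# fixed "LABEL<TAB>value" layout needs task-specific post-processing.
def parse_predictions(raw: str) -> list[tuple[str, str]]:
    entities = []
    for line in raw.strip().splitlines():
        if "\t" in line:
            label, value = line.split("\t", 1)
            entities.append((label.strip(), value.strip()))
    return entities

print(parse_predictions("DOCTOR\tJohn Smith\nDATE\t2023-10-05"))
# [('DOCTOR', 'John Smith'), ('DATE', '2023-10-05')]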

How to Get Started with the Model

Use the code below (based on vLLM) or standard Transformers code to get started with the model.

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = 'ISLabResearch/opendeid-2.8b-ft-full'
model = LLM(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
eos = tokenizer.eos_token  # stop generation at the end-of-sequence token
seed = 309

params = SamplingParams(max_tokens=50, include_stop_str_in_output=True,
                        temperature=0, ignore_eos=False, stop=[eos], seed=seed)
preds = model.generate("Hello", params, use_tqdm=False)
print(preds[0].outputs[0].text)
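If vLLM is unavailable, the same checkpoint loads with the standard Transformers generation API; a minimal equivalent sketch (greedy decoding mirrors temperature = 0 above):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'ISLabResearch/opendeid-2.8b-ft-full'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello", return_tensors="pt")
# do_sample=False gives greedy decoding, matching temperature = 0 in vLLM.
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False,
                         eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))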

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

Sensitive Health Information Recognition Results

Coding Type      Precision   Recall       F-measure    Support
MEDICALRECORD    0.7651331   0.8460509    0.8035601        747
PATIENT          0.8121547   0.6159218    0.700556         716
IDNUM            0.8933273   0.9283019    0.9104788       2120
DATE             0.9843959   0.897926     0.9391748       2459
DOCTOR           0.8278772   0.7373009    0.7799682       3327
CITY             0.9522388   0.8552279    0.90113          373
STATE            0.973384    0.7710843    0.8605042        332
ZIP              0.9937888   0.9065156    0.9481481        353
TIME             0.9425287   0.8723404    0.9060773        470
DEPARTMENT       0.7521866   0.6157518    0.6771653        419
HOSPITAL         0.7908788   0.5934892    0.6781116       1198
AGE              0.9565217   0.4313726    0.5945946         51
STREET           0.8181818   0.2093023    0.3333333        344
ORGANIZATION     0.75        0.04054054   0.07692308        74
DURATION         0           0            0                 12
SET              0           0            0                  5
LOCATION-OTHER   0           0            0                  6
PHONE            0           0            0                  1
Micro-avg. F     0.8766754   0.7744291    0.8223864      13007
Macro-avg. F     0.6784776   0.5178403    0.587374       13007
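The micro average pools true positives, false positives, and false negatives across all coding types before computing precision/recall/F-measure, while the macro average is the unweighted mean of the per-type scores, which is why zero-scoring low-support types such as DURATION or PHONE pull the macro figure down. A minimal sketch of both computations (the per-type counts are illustrative, not the actual evaluation counts):

# Illustrative per-type (tp, fp, fn) counts; NOT the real evaluation data.
counts = {"DATE": (2208, 35, 251), "TIME": (410, 25, 60), "PHONE": (0, 0, 1)}

def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Micro: pool the counts over all types, then compute P/R/F once.
tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
print("micro P/R/F:", prf(tp, fp, fn))

# Macro: unweighted mean of the per-type F-measures.
print("macro F:", sum(prf(*c)[2] for c in counts.values()) / len(counts))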

Temporal Information Normalization Results

Temporal Type   Precision   Recall      F-measure   Support
DATE            0.7993659   0.7177715   0.7563745      2459
TIME            0.6951219   0.606383    0.6477273       470
DURATION        0           0           0                12
SET             0           0           0                 5
Micro-avg.      0.7830405   0.6958588   0.7368799      2946
Macro-avg.      0.373622    0.3310386   0.3510436      2946

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
