Model Card for DMX-QWEN-2-7B-AVOCADO

Model Details

Model Description

DMX-QWEN-2-7B-AVOCADO is a specialized model based on Qwen2-7B, fine-tuned with LoRA (Low-Rank Adaptation) and merged back into the base model. The model has been trained specifically to map Chinese medicine concepts to evidence-based medicine.

  • Developed by: 2billionbeats Limited
  • Model type: LoRA fine-tuned transformer model
  • Language(s) (NLP): Chinese, English
  • License: MIT
  • Finetuned from model: Qwen2-7B

Uses

Direct Use

This model can be used directly for tasks that involve mapping Chinese medicine concepts to evidence-based medicine terminologies and practices. It can be employed in applications such as medical text analysis, clinical decision support, and educational tools for traditional Chinese medicine.
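For example, with the model and tokenizer loaded as in the "How to Get Started with the Model" snippet below, a concept-mapping query might look like this (the prompt wording is illustrative, not a format the model is guaranteed to expect):

# Assumes `model` and `tokenizer` are loaded as shown in
# "How to Get Started with the Model" below.
prompt = (
    "Map the following Chinese medicine concept to "
    "evidence-based medicine terminology: 气虚 (qi deficiency)"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))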

Out-of-Scope Use

This model is not designed for general-purpose language tasks outside the specified domain of Chinese medicine and evidence-based medicine. It should not be used for critical medical decision-making without proper human oversight.

Bias, Risks, and Limitations

This model may carry biases present in its training data, particularly those rooted in cultural perspectives on medicine. It should not be used as the sole source of medical advice or decision-making, and users should recognize that it may not accurately represent every concept in either Chinese medicine or evidence-based medicine.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. It is recommended to use this model in conjunction with other medical resources and professional expertise.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("2billionbeats/DM-QWEN-2-7B-AVOCADO")
model = AutoModelForCausalLM.from_pretrained(
    "2billionbeats/DM-QWEN-2-7B-AVOCADO",
    torch_dtype="auto",   # load weights in the checkpoint's stored precision
    device_map="auto",    # place the model on GPU when one is available
)

input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Training Details

Training Data

The model was trained on a dataset specifically curated to include mappings between Chinese medicine and evidence-based medicine. [Link to the Dataset Card]

Training Procedure

Preprocessing

The training data underwent preprocessing to ensure the accurate representation of both Chinese medicine and evidence-based medicine terminologies.
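The exact pipeline is not published. Purely as a sketch, assuming the data consists of (concept, mapping) pairs, each record might be rendered into a single training string along these lines (the field names and example values are hypothetical):

# Hypothetical record; the real dataset schema is not published.
record = {
    "tcm_concept": "气虚 (qi deficiency)",
    "ebm_mapping": "fatigue and asthenia-related presentations",
}

def to_training_text(rec: dict) -> str:
    # Render one concept-mapping pair as one training example.
    return (
        f"Chinese medicine concept: {rec['tcm_concept']}\n"
        f"Evidence-based medicine mapping: {rec['ebm_mapping']}"
    )

print(to_training_text(record))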

Training Hyperparameters

  • Training regime: fp16 mixed precision
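
Only the mixed-precision regime is documented. As a minimal sketch, fp16 mixed precision is typically enabled via the Hugging Face TrainingArguments; every value below other than fp16=True is an illustrative assumption, not a documented setting:

from transformers import TrainingArguments

# Illustrative settings; only fp16=True reflects the documented regime.
args = TrainingArguments(
    output_dir="dmx-qwen2-7b-avocado-lora",  # hypothetical output path
    fp16=True,                               # fp16 mixed-precision training
    per_device_train_batch_size=4,           # assumed, not documented
    learning_rate=2e-4,                      # assumed, not documented
    num_train_epochs=3,                      # assumed, not documented
)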

Evaluation

Testing Data, Factors & Metrics

Testing Data

The model was evaluated using a separate test set containing mappings between Chinese and evidence-based medicine. [Link to Dataset Card]

Factors

The evaluation considered various subpopulations and domains within the medical texts to ensure broad applicability.

Metrics

The evaluation metrics included accuracy, precision, recall, and F1 score, chosen for their relevance in assessing the model's performance in text classification tasks.
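
Given gold and predicted mapping labels, these four metrics can be computed with scikit-learn; the snippet below is a generic sketch with toy labels, not the card's actual evaluation script:

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy labels standing in for gold vs. predicted mapping categories.
y_true = ["qi_deficiency", "blood_stasis", "qi_deficiency"]
y_pred = ["qi_deficiency", "qi_deficiency", "qi_deficiency"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")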

Summary

The model demonstrates strong performance in mapping Chinese medicine concepts to evidence-based medicine, with high accuracy and balanced precision and recall.

Model Examination

Further interpretability work is needed to better understand the model's decision-making process.

Model Architecture and Objective

The model is based on the Qwen2-7B architecture, fine-tuned with LoRA to adapt it to the task of mapping Chinese medicine concepts to evidence-based medicine.
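
A minimal sketch of this adapt-then-merge workflow using the peft library; the rank, scaling factor, and target modules are illustrative assumptions, as the card does not publish the adapter configuration:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B")

# Attach low-rank adapters to the attention projections.
lora = LoraConfig(
    r=16,                                 # rank: assumed, not documented
    lora_alpha=32,                        # scaling: assumed
    target_modules=["q_proj", "v_proj"],  # assumed target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)

# ... fine-tune on the concept-mapping data ...

# Fold the adapter weights back into the base model so the merged
# checkpoint can be loaded with plain transformers.
merged = model.merge_and_unload()
merged.save_pretrained("DMX-QWEN-2-7B-AVOCADO")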

Compute Infrastructure

Hardware

The training was conducted on NVIDIA A100 GPUs.

Software

The training utilized PyTorch and the Hugging Face Transformers library.
