---
license: apache-2.0
library_name: transformers
tags:
- trl
- sft
datasets:
- pubmed
- bigbio/czi_drsm
- bigbio/bc5cdr
- bigbio/distemist
- pubmed_qa
- medmcqa
---
# Model Card for med_mistral_4bit
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A 4-bit Mistral-7B-Instruct-v0.2 model fine-tuned with QLoRA on multiple medical datasets.
- **License:** apache-2.0
- **Finetuned from model:** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
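
As context for the summary above, the sketch below shows what a comparable 4-bit QLoRA setup looks like with `bitsandbytes` and `peft`. The hyperparameters (quantization settings, LoRA rank, target modules) are illustrative assumptions rather than the values actually used; the actual training code is in the linked repository.

```python
# Illustrative 4-bit QLoRA setup (hyperparameters are assumptions, not the actual training config).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 quantization as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=16,                                   # illustrative LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```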
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/atadria/med_llm](https://github.com/atadria/med_llm)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. In particular, the model's outputs should not be treated as a substitute for professional medical advice. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
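A minimal loading and inference sketch with 🤗 Transformers follows. The repository ID `adriata/med_mistral_4bit` and the example prompt are assumptions for illustration; adjust them to your setup. If the checkpoint was saved with its quantization config (as the model name suggests), `from_pretrained` picks it up automatically, provided `bitsandbytes` is installed and a GPU is available.

```python
# Minimal loading/inference sketch. The repository ID is assumed from this card's title.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adriata/med_mistral_4bit"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the weights on the available GPU(s)
)

# Mistral-Instruct uses a chat template; apply it rather than raw prompting.
messages = [{"role": "user", "content": "What is the first-line treatment for type 2 diabetes?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```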
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Training data included 15k examples randomly selected from the following datasets (a sampling sketch follows the list):
- pubmed
- bigbio/czi_drsm
- bigbio/bc5cdr
- bigbio/distemist
- pubmed_qa
- medmcqa
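
The exact selection procedure is not documented here; a rough sketch of drawing random samples with the 🤗 `datasets` library, under assumed configuration names, split names, and sample sizes, would look like this:

```python
# Illustrative random sampling (configuration names, split names, and sizes are assumptions).
from datasets import load_dataset

# medmcqa loads without a configuration name.
medmcqa_sample = (
    load_dataset("medmcqa", split="train")
    .shuffle(seed=42)
    .select(range(2_500))
)

# pubmed_qa requires a configuration name; "pqa_labeled" is one of its standard configs.
pubmedqa_sample = (
    load_dataset("pubmed_qa", "pqa_labeled", split="train")
    .shuffle(seed=42)
    .select(range(500))
)

# The bigbio/* datasets follow the same pattern but also take a configuration name,
# e.g. load_dataset("bigbio/bc5cdr", "bc5cdr_bigbio_kb", split="train");
# depending on your datasets version they may additionally need trust_remote_code=True.
```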