---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
license: mit
datasets:
- keivalya/MedQuad-MedicalQnADataset
language:
- en
metrics:
- bertscore
tags:
- medical
---
# Model Card for Tonic/mistralmed

This is a medicine-focused Mistral fine-tune trained on keivalya/MedQuad-MedicalQnADataset.
## Model Details

### Model Description

A fine-tune of Mistral-7B intended to improve performance on medical question answering.

- Developed by: Tonic
- Shared by: Tonic
- Model type: Mistral fine-tune (PEFT/LoRA adapter)
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: mistralai/Mistral-7B-v0.1
### Model Sources

- Repository: Tonic/mistralmed
- Paper: [More Information Needed]
- Demo: Tonic/MistralMed_Chat
## Uses

This model can be used in the same way you would normally use Mistral-7B.

### Direct Use

This model is intended to perform better than the base model in medical question-and-answer scenarios.
### Downstream Use

This model is intended to be fine-tuned further, for example as sketched below.
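As a rough illustration (not the author's exact training recipe), the adapter can be reloaded with trainable weights and training continued on another dataset; the Hub ID `Tonic/mistralmed` is assumed to point at this repository:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Reload the published LoRA adapter in trainable mode for further fine-tuning
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "Tonic/mistralmed", is_trainable=True)
model.print_trainable_parameters()
# ...continue training with your own Trainer / SFT loop on a downstream medical dataset
```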
## Recommendations

- Do not use as-is
- Fine-tune this model further
- For educational purposes only
- Benchmark your model usage
- Evaluate the model before use

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model

Use the code below to get started with the model.
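The following is a minimal sketch, assuming the adapter weights are published under the Hub ID `Tonic/mistralmed` and that `transformers`, `peft`, and `bitsandbytes` are installed; adjust the adapter ID and the prompt format to match your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "Tonic/mistralmed"  # assumed Hub ID for this adapter

# Load the base model in 4-bit NF4, matching the training-time quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "What are the treatments for Lymphocytic Choriomeningitis (LCM)?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```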
## Training Details

### Training Data

The model was fine-tuned on keivalya/MedQuad-MedicalQnADataset, which contains 16,407 medical question-answer pairs:

```
Dataset({
    features: ['qtype', 'Question', 'Answer'],
    num_rows: 16407
})
```
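For reference, the dataset summary above can be reproduced with the `datasets` library (assuming the public Hub dataset and its default train split):

```python
from datasets import load_dataset

# Load the MedQuad medical Q&A dataset used for fine-tuning
dataset = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")
print(dataset)  # Dataset({ features: ['qtype', 'Question', 'Answer'], num_rows: 16407 })
print(dataset[0]["Question"], dataset[0]["Answer"][:200])
```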
### Training Procedure

The base model was loaded in 4-bit precision (bitsandbytes Linear4bit layers) with the following architecture:

```
MistralForCausalLM(
  (model): MistralModel(
    (embed_tokens): Embedding(32000, 4096)
    (layers): ModuleList(
      (0-31): 32 x MistralDecoderLayer(
        (self_attn): MistralAttention(
          (q_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear4bit(in_features=4096, out_features=1024, bias=False)
          (v_proj): Linear4bit(in_features=4096, out_features=1024, bias=False)
          (o_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): MistralRotaryEmbedding()
        )
        (mlp): MistralMLP(
          (gate_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
          (up_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
          (down_proj): Linear4bit(in_features=14336, out_features=4096, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): MistralRMSNorm()
        (post_attention_layernorm): MistralRMSNorm()
      )
    )
    (norm): MistralRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
#### Preprocessing

[More Information Needed]
#### Training Hyperparameters

- Training regime: LoRA fine-tuning on top of the 4-bit (NF4, bfloat16 compute) quantized base model, with the following adapter configuration (applied as sketched below):

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj",
        "lm_head",
    ],
    bias="none",
    lora_dropout=0.05,  # Conventional
    task_type="CAUSAL_LM",
)
```
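A minimal sketch of applying this configuration with `peft`, assuming the 4-bit quantized base model (see the bitsandbytes config at the bottom of this card) has already been loaded as `model`:

```python
from peft import get_peft_model, prepare_model_for_kbit_training

# Prepare the 4-bit base model for k-bit training, then attach the LoRA adapters
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# -> trainable params: 21,260,288 || all params: 3,773,331,456 || trainable%: ~0.56 (see below)
```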
#### Speeds, Sizes, Times

```
trainable params: 21260288 || all params: 3773331456 || trainable%: 0.5634354746703705
TrainOutput(global_step=1000, training_loss=0.47226515007019043, metrics={'train_runtime': 3143.4141, 'train_samples_per_second': 2.545, 'train_steps_per_second': 0.318, 'total_flos': 1.75274075357184e+17, 'train_loss': 0.47226515007019043, 'epoch': 0.49})
```
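For context, the reported throughput figures are mutually consistent; the quick arithmetic check below (plain Python, no dependencies) derives the implied effective batch size of about 8 examples per optimizer step:

```python
# Sanity-check the reported TrainOutput metrics
train_runtime = 3143.4141            # seconds
global_step = 1000
samples_per_second = 2.545
dataset_size = 16407

steps_per_second = global_step / train_runtime                  # ~0.318, as reported
samples_seen = samples_per_second * train_runtime               # ~8000 examples
effective_batch_size = samples_per_second / steps_per_second    # ~8 examples per step
epoch_fraction = samples_seen / dataset_size                    # ~0.49, as reported

print(round(steps_per_second, 3), round(effective_batch_size), round(epoch_fraction, 2))
```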
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
## Training Results

1000/1000 steps in 52:20, epoch 0/1.

| Step | Training Loss |
|------|---------------|
| 50   | 0.474200 |
| 100  | 0.523300 |
| 150  | 0.484500 |
| 200  | 0.482800 |
| 250  | 0.498800 |
| 300  | 0.451800 |
| 350  | 0.491800 |
| 400  | 0.488000 |
| 450  | 0.472800 |
| 500  | 0.460400 |
| 550  | 0.464700 |
| 600  | 0.484800 |
| 650  | 0.474600 |
| 700  | 0.477900 |
| 750  | 0.445300 |
| 800  | 0.431300 |
| 850  | 0.461500 |
| 900  | 0.451200 |
| 950  | 0.470800 |
| 1000 | 0.454900 |
## Model Architecture and Objective

```
PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralForCausalLM(
      (model): MistralModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=4096, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (k_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=1024, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
              )
              (v_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=1024, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
              )
              (o_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=4096, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=14336, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=14336, bias=False)
              )
              (up_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=14336, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=14336, bias=False)
              )
              (down_proj): Linear4bit(
                (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
                (lora_A): ModuleDict( (default): Linear(in_features=14336, out_features=8, bias=False) )
                (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=4096, bias=False) )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=14336, out_features=4096, bias=False)
              )
              (act_fn): SiLUActivation()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
      )
      (lm_head): Linear(
        in_features=4096, out_features=32000, bias=False
        (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) )
        (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=False) )
        (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=32000, bias=False) )
        (lora_embedding_A): ParameterDict()
        (lora_embedding_B): ParameterDict()
      )
    )
  )
)
```
## Hardware

NVIDIA A100
## Model Card Authors

Tonic

## Model Card Contact

[More Information Needed]
## Training procedure

The following bitsandbytes quantization config was used during training (a transformers BitsAndBytesConfig equivalent is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
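Expressed as code, that list corresponds to roughly the following transformers BitsAndBytesConfig (the same config used in the getting-started example above); this is a sketch, not the exact training script:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```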
### Framework versions
- PEFT 0.6.0.dev0