---
license: mit
datasets:
- keivalya/MedQuad-MedicalQnADataset
language:
- en
library_name: peft
metrics:
- accuracy
- bertscore
- bleu
pipeline_tag: text-generation
tags:
- medical
---

# Model Card for K23 MiniMed

K23 MiniMed is a Mistral 7B-Beta medical fine-tune trained for a small number of steps, developed under the mentorship of [Wonhyeong Seo](https://www.huggingface.co/wseo) during the Krew x Hugging Face 2023 hackathon.

## Model Details

### Model Description

- **Developed by:** [Tonic](https://huggingface.co/Tonic)
- **Funded by:** [Tonic](https://huggingface.co/Tonic)
- **Shared by:** K23-Krew-Hackathon
- **Model type:** Mistral 7B-Beta medical fine-tune
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [Zephyr 7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

### Model Sources

- **Repository:** [GitHub](https://github.com/Josephrp/AI-challenge-hackathon/blob/master/mistral7b-beta_finetune.ipynb)
- **Demo:** [pseudolab/K23MiniMed](https://huggingface.co/spaces/pseudolab/K23MiniMed)

## Uses

Use this model in conversational applications for medical question answering, **for educational purposes only**.

### Direct Use

Build a Gradio chatbot app to ask medical questions and receive answers conversationally.

### Downstream Use

This model is **for educational use only**. Further fine-tunes and downstream uses could include:

- public health & sanitation
- personal health & sanitation
- medical Q & A

### Recommendations

- Always evaluate this model before use.
- Always benchmark this model before use.
- Always evaluate bias before use.
- Do not use as is; fine-tune it further.

## How to Get Started with the Model

Use the code below to get started with the model.
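It loads the Zephyr 7B-Beta base weights, applies the `pseudolab/K23_MiniMed` PEFT adapter on top of them, and exposes the result through a small Gradio interface that accepts a medical question and a system prompt as separate inputs.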
```python
from transformers import AutoTokenizer, MistralForCausalLM
from peft import PeftModel
import torch
import gradio as gr
import textwrap

# Wrap long lines so that answers print readably
def wrap_text(text, width=90):
    lines = text.split('\n')
    wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
    return '\n'.join(wrapped_lines)

def multimodal_prompt(user_input, system_prompt="You are an expert medical analyst:", max_length=512):
    # Combine user input and system prompt
    formatted_input = f"[INST]{system_prompt} {user_input}[/INST]"

    # Encode the input text
    encodeds = tokenizer(formatted_input, return_tensors="pt", add_special_tokens=False)
    model_inputs = encodeds.to(device)

    # Generate a response using the model
    output = peft_model.generate(
        **model_inputs,
        max_length=max_length,
        use_cache=True,
        early_stopping=True,
        bos_token_id=peft_model.config.bos_token_id,
        eos_token_id=peft_model.config.eos_token_id,
        pad_token_id=peft_model.config.eos_token_id,
        temperature=0.1,
        do_sample=True,
    )

    # Decode the response
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Define the device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Base model and adapter IDs
base_model_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "pseudolab/K23_MiniMed"

# Instantiate the tokenizer; left padding because decoder-only models
# append generated tokens to the right of the prompt
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", trust_remote_code=True, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token

# Load the base model, then apply the PEFT adapter on top of it
base_model = MistralForCausalLM.from_pretrained(base_model_id, trust_remote_code=True)
peft_model = PeftModel.from_pretrained(base_model, adapter_id).to(device)

class ChatBot:
    def __init__(self):
        # Initialize the ChatBot class with an empty history
        self.history = []

    def predict(self, user_input, system_prompt="You are an expert medical analyst:"):
        # Combine the user's input with the system prompt
        formatted_input = f"[INST]{system_prompt} {user_input}[/INST]"

        # Encode the formatted input using the tokenizer
        user_input_ids = tokenizer.encode(formatted_input, return_tensors="pt").to(device)

        # Generate a response using the PEFT model
        response = peft_model.generate(input_ids=user_input_ids, max_length=512, pad_token_id=tokenizer.eos_token_id)

        # Decode the generated response to text
        return tokenizer.decode(response[0], skip_special_tokens=True)

bot = ChatBot()

title = "👋🏻토닉의 미스트랄메드 채팅에 오신 것을 환영합니다🚀👋🏻Welcome to Tonic's MistralMed Chat🚀"
description = (
    "이 공간을 사용하여 현재 모델을 테스트할 수 있습니다. "
    "[(Tonic/MistralMed)](https://huggingface.co/Tonic/MistralMed) "
    "또는 이 공간을 복제하고 로컬 또는 🤗HuggingFace에서 사용할 수 있습니다. "
    "[Discord에서 함께 만들기 위해 Discord에 가입하십시오](https://discord.gg/VqTxc76K3u). "
    "You can use this Space to test out the current model "
    "[(Tonic/MistralMed)](https://huggingface.co/Tonic/MistralMed) "
    "or duplicate this Space and use it locally or on 🤗HuggingFace. "
    "[Join me on Discord to build together](https://discord.gg/VqTxc76K3u)."
)
examples = [[
    "[Question:] What is the proper treatment for buccal herpes?",
    "You are a medicine and public health expert, you will receive a question, answer the question, and provide a complete answer",
]]

iface = gr.Interface(
    fn=bot.predict,
    title=title,
    description=description,
    examples=examples,
    inputs=["text", "text"],  # Take the user input and system prompt separately
    outputs="text",
    theme="ParityError/Anime",
)

iface.launch()
```
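A few notes on the script: padding is set to the left because decoder-only models like Mistral append generated tokens after the prompt; `pad_token` is mapped to `eos_token` since the Mistral tokenizer ships without a dedicated padding token; and the `[INST] ... [/INST]` wrapper follows the Mistral-instruct prompt convention.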
## Training Details

| Step | Training Loss |
|------|---------------|
| 50   | 0.993800      |
| 100  | 0.620600      |
| 150  | 0.547100      |
| 200  | 0.524100      |
| 250  | 0.520500      |
| 300  | 0.559800      |
| 350  | 0.535500      |
| 400  | 0.505400      |

### Training Data

The model was fine-tuned on the [keivalya/MedQuad-MedicalQnADataset](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset) medical Q&A dataset. With the LoRA adapter attached, only a small fraction of the weights were trainable:

```text
trainable params: 21260288 || all params: 3773331456 || trainable%: 0.5634354746703705
```

### Training Procedure

#### Preprocessing

LoRA fine-tuning over a 4-bit quantized base model (the `Linear4bit` modules in the architecture dump below), recorded in the notebook as `Lora32bits`.
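For reference, here is a minimal sketch of a 4-bit quantization plus LoRA setup consistent with this card. The rank (8), dropout (0.05), and adapted modules are read directly off the architecture dump in the Technical Specifications section; the `BitsAndBytesConfig` flags and `lora_alpha` are assumptions, not the notebook's verbatim configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization; NF4 and bfloat16 compute are assumed, the card
# only shows Linear4bit base layers.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Rank, dropout, and target modules match the architecture dump below;
# lora_alpha is an assumption (not reported in this card).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj", "lm_head",
    ],
)

model = get_peft_model(base, lora_config)
# Should reproduce the counts reported above:
# trainable params: 21,260,288 (~0.56% of all parameters)
model.print_trainable_parameters()
```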
#### Speeds, Sizes, Times

```text
metrics={'train_runtime': 1700.1608, 'train_samples_per_second': 1.882, 'train_steps_per_second': 0.235, 'total_flos': 9.585300996096e+16, 'train_loss': 0.6008514881134033, 'epoch': 0.2}
```

The 400 training steps took roughly 28 minutes and covered about 0.2 of an epoch.

### Results

```text
TrainOutput(global_step=400, training_loss=0.6008514881134033)
```

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** A100
- **Hours used:** approximately 0.5 (derived from the reported `train_runtime` of ~1,700 seconds)
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

```python
PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralForCausalLM(
      (model): MistralModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (k_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
              )
              (v_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
              )
              (o_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=14336, bias=False)
              )
              (up_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=14336, bias=False)
              )
              (down_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=14336, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=14336, out_features=4096, bias=False)
              )
              (act_fn): SiLUActivation()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
      )
      (lm_head): Linear(
        in_features=4096, out_features=32000, bias=False
        (lora_dropout): ModuleDict(
          (default): Dropout(p=0.05, inplace=False)
        )
        (lora_A): ModuleDict(
          (default): Linear(in_features=4096, out_features=8, bias=False)
        )
        (lora_B): ModuleDict(
          (default): Linear(in_features=8, out_features=32000, bias=False)
        )
        (lora_embedding_A): ParameterDict()
        (lora_embedding_B): ParameterDict()
      )
    )
  )
)
```

With rank r = 8, each adapted projection contributes 8 × (in_features + out_features) LoRA parameters. That is 655,360 per decoder layer across the seven adapted projections, or 20,971,520 over 32 layers, plus 288,768 for the `lm_head` adapter, for a total of 21,260,288, which matches the trainable-parameter count reported in the Training Data section.

### Compute Infrastructure

#### Hardware

A100

#### Software

`peft`, `torch`, `bitsandbytes`, Python, and the Hugging Face libraries.

## Model Card Authors

[Tonic](https://huggingface.co/Tonic)

## Model Card Contact

[Tonic](https://huggingface.co/Tonic)