---
language:
- en
license: mit
library_name: transformers
tags:
- medical
datasets:
- Mohammed-Altaf/medical-instruction-120k
model-index:
- name: Medical-ChatBot
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 30.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mohammed-Altaf/Medical-ChatBot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 38.63
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mohammed-Altaf/Medical-ChatBot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mohammed-Altaf/Medical-ChatBot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 41.25
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mohammed-Altaf/Medical-ChatBot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 55.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mohammed-Altaf/Medical-ChatBot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mohammed-Altaf/Medical-ChatBot
      name: Open LLM Leaderboard
---

Please note that this chatbot is designed for research purposes only and is not intended for use in real medical settings. While it has been trained to provide accurate and helpful responses, it is not a substitute for professional medical advice, diagnosis, or treatment. The information it provides should not be used to make medical decisions, and any health concerns should be addressed by a licensed healthcare provider.
## Quickstart

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

path = "Mohammed-Altaf/Medical-ChatBot"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = GPT2Tokenizer.from_pretrained(path)
model = GPT2LMHeadModel.from_pretrained(path).to(device)

# Prompt template: a single human turn followed by the AI turn to be completed
prompt_input = (
    "The conversation between human and AI assistant.\n"
    "[|Human|] {input}\n"
    "[|AI|]"
)
sentence = prompt_input.format_map({"input": "what is parkinson's disease?"})
inputs = tokenizer(sentence, return_tensors="pt").to(device)

with torch.no_grad():
    beam_output = model.generate(
        **inputs,
        min_new_tokens=1,
        max_length=512,
        num_beams=3,
        repetition_penalty=1.2,
        early_stopping=True,
        eos_token_id=198,  # GPT-2 newline token, used to stop after the AI turn
    )
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
```

## Example Outputs

```
The conversation between human and AI assistant.
[|Human|] what is parkinson's disease?
[|AI|] Parkinson's disease is a neurodegenerative disorder that affects movement. It is caused by the loss of dopamine-producing cells in the brain.
```

```
The conversation between human and AI assistant.
[|Human|] what type of honey is best for a bad covid cough?
[|AI|] Manuka honey has been shown to have anti-inflammatory and antibacterial properties that can help alleviate symptoms of a bad covid cough.
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Mohammed-Altaf__Medical-ChatBot).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 32.13 |
| AI2 Reasoning Challenge (25-Shot) | 30.55 |
| HellaSwag (10-Shot)               | 38.63 |
| MMLU (5-Shot)                     | 25.98 |
| TruthfulQA (0-shot)               | 41.25 |
| Winogrande (5-shot)               | 55.41 |
| GSM8k (5-shot)                    |  0.99 |
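
As shown in the example outputs above, the decoded text echoes the full prompt along with the model's answer. A minimal post-processing sketch for keeping only the assistant's turn is below; the `extract_reply` helper is illustrative and not part of the model or the `transformers` API.

```python
def extract_reply(decoded: str) -> str:
    """Return only the text after the last [|AI|] marker."""
    reply = decoded.split("[|AI|]")[-1]
    # Drop anything after a follow-up human turn, if the model produced one
    return reply.split("[|Human|]")[0].strip()

# Example, reusing `tokenizer` and `beam_output` from the Quickstart snippet:
# print(extract_reply(tokenizer.decode(beam_output[0], skip_special_tokens=True)))
```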