---
license: llama3.2
datasets:
- XeTute/Medic-Thoughts-16k
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- medical
---

# 🚀 A tiny assistant any med-student can self-host
Medic-3B is a (well, obviously) 3.2B-parameter language model trained on XeTute/Medic-Thoughts-16k, a dataset covering both day-to-day and advanced medical questions. We instruction-tuned LLaMA-3.2-3B, a base text (not instruct) model, to use the ChatML format and to think before answering (given an appropriate system prompt). An example system prompt might look like:
> You are a helpful AI assistant. Before you answer any user query, you reason inside the following response format: "<think>thoughts come here</think>final, precise answer comes here". During your reasoning process, you think about what information you already know about the query, summarize it into relevant key points, think of what an answer could look like, verify that it is a "good" / accurate answer, and then plan how you are going to structure your answer before giving a final and precise answer to the user's query. Always think in the language in which the user asked, at least for the answer section, and, if you want, also for the entire thinking process.
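Because the model was tuned on ChatML, prompts must follow that template and completions should be split into the `<think>…</think>` reasoning and the final answer. A minimal sketch of both steps, assuming the standard ChatML delimiters (the helper names and the example query are ours, not part of the model's API):

```python
# Example system prompt, abbreviated from the one above.
SYSTEM_PROMPT = (
    "You are a helpful AI assistant. Before you answer any user query, "
    "you reason inside the following response format: "
    '"<think>thoughts come here</think>final, precise answer comes here".'
)

def build_chatml_prompt(system: str, user: str) -> str:
    """Format a system + user turn in ChatML and open the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def split_thoughts(completion: str) -> tuple[str, str]:
    """Separate the <think>…</think> reasoning from the final answer."""
    if completion.startswith("<think>") and "</think>" in completion:
        thoughts, _, answer = completion.partition("</think>")
        return thoughts[len("<think>"):], answer
    return "", completion

prompt = build_chatml_prompt(SYSTEM_PROMPT, "What does NSAID stand for?")
```

With `transformers`, you would feed `prompt` to the tokenizer and model's `generate`, then run `split_thoughts` on the decoded completion to hide or display the reasoning as you prefer.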
If you want a smaller, more specialized model that is still good enough for medical question answering, check out the 1B version here.
## Our Apps & Socials
Chat with our Assistant | Support us Financially | Visit our GitHub
Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰