🚀 A tiny assistant any med student can self-host

Medic-1B is (as the name suggests) a 1.2B-parameter language model trained on XeTute/Medic-Thoughts-16k, a dataset that covers both day-to-day and advanced medical questions. We instruction-tuned LLaMA3.2-1B, the base text model (not the instruct variant), to follow the ChatML format and to think before answering when given an appropriate system prompt. An example system prompt might look like:

You are a helpful AI assistant. Before you answer any user query, you reason inside the following response format: "<think>thoughts come here</think>final, precise answer comes here". During your reasoning process, you think about what you already know about the query, summarize it into relevant key points, consider what an answer could look like, verify that it is a good and accurate answer, and plan how to structure your response before giving a final, precise answer to the user's query. Always write the answer section in the language the user asked in, and, if you want, the entire thinking process as well.
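
To illustrate how this fits together, here is a minimal inference sketch using the Hugging Face transformers library. It assumes the repository ships a ChatML chat template in its tokenizer config; the user question and generation settings are illustrative placeholders, not tuned recommendations.

```python
# Minimal inference sketch for Medic-1B.
# Assumes the tokenizer ships a ChatML chat template; the user question
# and generation settings below are illustrative, not official values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XeTute/Medic-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The reasoning system prompt from this card (shortened here; use the full text above).
system_prompt = (
    "You are a helpful AI assistant. Before you answer any user query, you reason "
    'inside the following response format: "<think>thoughts come here</think>'
    'final, precise answer comes here".'
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What are common early signs of iron-deficiency anemia?"},
]

# apply_chat_template renders the ChatML turns (<|im_start|> ... <|im_end|>)
# and appends the assistant header so the model starts its <think> block.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Everything the model emits between <think> and </think> is its reasoning trace; strip that span from the decoded text if you only want the final answer.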

If you want a more general model that is still good at medical question answering, check out the 3B version here.


Our Apps & Socials

Chat with our Assistant | Support us Financially | Visit our GitHub

Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰

Model size: 1.24B parameters · Tensor type: BF16 · Weights: Safetensors
