---
license: llama3.2
datasets:
- XeTute/Medic-Thoughts-16k
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- medical
---

<div align="center">
<span style="font-family: default; font-size: 1.5em;">
  <img alt="Cover image" src="https://cdn-uploads.huggingface.co/production/uploads/65ca8c3c5495933ab066c33c/x3fj5eKoS11GdoGctR-Q2.png"/>
</span>
<div>
🚀 A tiny assistant any med-student can self-host
</div>
</div>
<br>
<div align="center" style="line-height: 1;">
  <a href="https://github.com/XeTute" style="margin: 2px;">
    <img alt="Code" src="https://img.shields.io/badge/XeTute-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://ko-fi.com/XeTute" style="margin: 2px;">
    <img alt="Ko-Fi" src="https://img.shields.io/badge/Buy_us_a_coffe-000000?style=for-the-badge&logo=kofi&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://xetute.com/" style="margin: 2px;">
    <img alt="Ko-Fi" src="https://img.shields.io/badge/Webpage-000000?style=for-the-badge&logo=githubpages" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://bsky.app/profile/xetute.bsky.social" style="margin: 2px;">
    <img alt="Ko-Fi" src="https://img.shields.io/badge/BlueSky-000000?style=for-the-badge&logo=bluesky" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

Medic-3B is (as the name suggests) a 3.2B-parameter language model trained on [XeTute/Medic-Thoughts-16k](https://huggingface.co/datasets/XeTute/Medic-Thoughts-16k), which covers both day-to-day and advanced medical questions.
We instruction-tuned LLaMA 3.2 3B, a base text (not instruct) model, to use the ChatML format and to think before answering when given an appropriate system prompt.
An example system prompt might look like:
```
You are a helpful AI assistant. Before you answer any user query, you reason inside the following response format: "<think>thoughts come here</think>final, precise answer comes here". During your reasoning process, you think about what information you already know about the query, summarize it into relevant key points, think of what an answer could look like, verify whether it is a "good" / accurate answer, and then plan how you're going to structure your answer before giving a final, precise answer to the user's query. Always use the language the user asked in for the answer section and, if you want, for the entire thinking process as well.
```
If you want a smaller, more specialized model that is still good enough for medical question answering, check out the 1B version [here](https://huggingface.co/XeTute/Medic-1B).
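
Below is a minimal inference sketch using 🤗 Transformers. The repository id `XeTute/Medic-3B`, the hand-built ChatML prompt (used in case the tokenizer ships without a chat template), and the generation settings are assumptions for illustration; adapt them to your setup (`device_map="auto"` requires `accelerate`).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XeTute/Medic-3B"  # assumed repo id; adjust if it differs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = "You are a helpful AI assistant. ..."  # the system prompt shown above
question = "What are common early symptoms of iron-deficiency anemia?"

# ChatML framing: system turn, user turn, then an open assistant turn for the model to fill.
prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{question}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# The model is trained to emit "<think>...</think>" before the final answer;
# split on the closing tag if you only want to show the answer part.
thoughts, _, answer = response.partition("</think>")
print(answer.strip() or response)
```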

---
# Our Apps & Socials
[Chat with our Assistant](https://xetute.com/) | [Support us Financially](https://ko-fi.com/XeTute) | [Visit our GitHub](https://github.com/XeTute)  

Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰  
![The Flag of the Islamic Republic of Pakistan](https://upload.wikimedia.org/wikipedia/commons/3/32/Flag_of_Pakistan.svg)