---
title: Lab2
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.0.1
app_file: app.py
pinned: false
---

# Fine-Tuned Medical Language Model

## Overview
This project fine-tunes the LLaMA 3.2 3B model on the **FineTome-100k** instruction dataset. The goal is a performant language model for medical instruction-following tasks, optimized for CPU inference.

## Key Features
- **Base Model**: LLaMA 3.2 3B (fine-tuned with Hugging Face Transformers and Unsloth).
- **Dataset**: FineTome-100k, a high-quality instruction dataset.
- **Inference Optimization**: Quantized to GGUF format for faster CPU inference using methods like Q4_K_M.
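
For CPU inference against the GGUF export, a minimal loading sketch with `llama-cpp-python` is shown below; the file name, context size, thread count, and sampling settings are illustrative assumptions rather than this project's exact configuration.

```python
# Hedged sketch: run the Q4_K_M GGUF export on CPU with llama-cpp-python.
# The file path, context size, thread count, and sampling settings are
# illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="medical_model.Q4_K_M.gguf",  # assumed local path to the export
    n_ctx=2048,     # context window
    n_threads=4,    # CPU threads used for generation
)

output = llm(
    "What are the symptoms of diabetes?",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```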

## Improvements
### Model-Centric Approach
1. **Hyperparameter Tuning**:
   - **Learning Rate**: Reduced to `1e-4` and tested against `2e-4` for better generalization.
   - **Warmup Steps**: Increased to 100 to stabilize early training.
   - **Batch Size**: Adjusted via gradient accumulation to simulate larger effective batch sizes.

2. **Fine-Tuning Techniques**:
   - Resumed training from a 3,000-step checkpoint to save time.
   - Applied `adamw_8bit` optimizer for memory-efficient training.

3. **Experimentation with Foundation Models**:
   - Tested alternative open-source models, including Falcon-7B and Mistral 3B, for comparison.
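
For context, a minimal sketch of loading the base model with Unsloth for fine-tuning is shown below; the hub ID, sequence length, 4-bit loading, and the optional LoRA adapter settings are illustrative assumptions rather than this project's exact setup.

```python
# Hedged sketch: load the Llama 3.2 3B base model with Unsloth for fine-tuning.
# The hub ID, sequence length, and adapter settings are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # assumed hub ID
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit base weights keep fine-tuning memory-friendly
)

# Optional: attach LoRA adapters for parameter-efficient fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```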

### Data-Centric Approach
1. **Additional Data Sources**:
   - Plans to augment training with datasets like PubMedQA or MedQA for domain-specific improvements.
   - Broader instruction diversity to improve robustness across medical queries.

2. **Dataset Analysis**:
   - Addressed class imbalances and ensured validation split consistency.
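
As a rough illustration of the validation-split check, the sketch below loads the dataset with the `datasets` library and carves out a fixed split; the hub ID `mlabonne/FineTome-100k` and the 10% split size are assumptions for illustration.

```python
# Hedged sketch: load FineTome-100k and hold out a fixed validation split.
# The hub ID and split size are assumptions for illustration.
from datasets import load_dataset

dataset = load_dataset("mlabonne/FineTome-100k", split="train")

# A seeded split keeps the validation set consistent across runs.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = splits["train"], splits["test"]

print(f"train: {len(train_ds):,}  validation: {len(val_ds):,}")
```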

## Hyperparameters
The final training used the following hyperparameters:
- **Learning Rate**: 1e-4
- **Warmup Steps**: 100
- **Batch Size**: Simulated effective batch size of 8 (2 samples per device with 4 gradient accumulation steps).
- **Optimizer**: AdamW (8-bit quantization).
- **Weight Decay**: 0.01
- **Learning Rate Scheduler**: Linear decay.
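
As a rough sketch, these settings map onto `transformers.TrainingArguments` as shown below; the output directory, logging and save intervals, and the commented resume call are illustrative assumptions rather than the project's exact training script.

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# Output directory, logging/save intervals, and the resume path are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=1e-4,
    warmup_steps=100,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size of 8
    optim="adamw_8bit",              # 8-bit AdamW (requires bitsandbytes)
    weight_decay=0.01,
    lr_scheduler_type="linear",
    max_steps=6000,
    logging_steps=50,
    save_steps=1000,
)

# With a trainer built from these args, the second half of training can be
# resumed from the saved checkpoint, e.g.:
# trainer.train(resume_from_checkpoint="outputs/checkpoint-3000")
```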

## Model Performance
### Training
- **Steps**: Fine-tuned for 6,000 steps total (3,000 initial + 3,000 resumed).
- **Validation Loss**: Improved from X to Y during fine-tuning.

### Inference
- **Export Formats**: Q4_K_M (4-bit quantized) and F16 GGUF exports evaluated for inference speed.
- **CPU Latency**: Achieved X ms per query on a single-core CPU.
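
A hedged sketch of how per-query latency can be measured on a single CPU thread is shown below; the model path, prompt, and token budget are illustrative assumptions.

```python
# Hedged sketch: time one query against the GGUF build on a single CPU thread.
# The model path, prompt, and token budget are illustrative assumptions.
import time

from llama_cpp import Llama

llm = Llama(model_path="medical_model.Q4_K_M.gguf", n_ctx=2048, n_threads=1)

start = time.perf_counter()
llm("What are the symptoms of diabetes?", max_tokens=64)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"latency: {elapsed_ms:.0f} ms")
```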

## Next Steps
1. Continue fine-tuning with additional data sources (e.g., MedQA).
2. Explore LoRA or parameter-efficient tuning for larger models.
3. Deploy and evaluate the model in real-world scenarios.

## Usage
To load and use the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "forestav/medical_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a prediction; max_new_tokens is set explicitly so the answer is not
# cut off at the default generation length.
inputs = tokenizer("What are the symptoms of diabetes?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).
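
For reference, the sketch below shows the general Gradio + `InferenceClient` chat pattern this kind of Space builds on; the model ID, message handling, and generation settings are illustrative assumptions and not necessarily the exact contents of this Space's `app.py`.

```python
# Hedged sketch of a Gradio chat app backed by the Hugging Face Inference API.
# The model ID and generation settings are illustrative assumptions.
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")  # assumed hosted model

def respond(message, history):
    # With type="messages", history arrives as OpenAI-style role/content dicts.
    messages = [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})
    reply = client.chat_completion(messages, max_tokens=256, temperature=0.7)
    return reply.choices[0].message.content

demo = gr.ChatInterface(respond, type="messages")

if __name__ == "__main__":
    demo.launch()
```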