ndc8 committed
Commit: 91181f3 · Parent(s): 9fe463f
update
Browse files
- README.md +62 -0
- README_DEPLOY_HF.md +68 -0
- handler.py +23 -0
- requirements.txt +5 -12
- sample_data/mini_test.jsonl +2 -0
- training/train_gemma_unsloth.py +85 -125
- training_runs/devlocal/meta.json +4 -4
- training_runs/realtrain/DONE +1 -0
- training_runs/realtrain/meta.json +6 -0
- training_runs/testload/DONE +1 -0
- training_runs/testload/meta.json +6 -0
README.md
CHANGED
@@ -1,4 +1,65 @@
+# Fine-tuning Gemma 3n E4B on MacBook M1 (Apple Silicon) with Unsloth
+
+This project supports local fine-tuning of the Gemma 3n E4B model using Unsloth, PEFT/LoRA, and export to GGUF Q4_K_XL for efficient inference. The workflow is optimized for Apple Silicon (M1/M2/M3) and avoids CUDA/bitsandbytes dependencies.
+
+## Prerequisites
+
+- Python 3.10+
+- macOS with Apple Silicon (M1/M2/M3)
+- PyTorch with MPS backend (install via `pip install torch`)
+- All dependencies in `requirements.txt` (install with `pip install -r requirements.txt`)
+
+## Training Script Usage
+
+Run the training script with your dataset (JSON/JSONL or Hugging Face format):
+
+```bash
+python training/train_gemma_unsloth.py \
+  --job-id myjob \
+  --output-dir training_runs/myjob \
+  --dataset sample_data/train.jsonl \
+  --prompt-field prompt --response-field response \
+  --epochs 1 --batch-size 1 --gradient-accumulation 8 \
+  --use-fp16 \
+  --grpo --cpt \
+  --export-gguf --gguf-out training_runs/myjob/adapter-gguf-q4_k_xl
+```
+
+**Flags:**
+
+- `--grpo`: Enable GRPO (if supported by Unsloth)
+- `--cpt`: Enable CPT (if supported by Unsloth)
+- `--export-gguf`: Export to GGUF Q4_K_XL after training
+- `--gguf-out`: Path to save GGUF file
+
+**Notes:**
+
+- On Mac, bitsandbytes/xformers are disabled automatically.
+- Training is slower than on CUDA GPUs; use small batch sizes and gradient accumulation.
+- If Unsloth's GGUF export is unavailable, follow the printed instructions to use llama.cpp's `convert-hf-to-gguf.py`.
+
+## Troubleshooting
+
+- If you see errors about missing CUDA or bitsandbytes, ensure you are running on Apple Silicon and have the latest Unsloth/Transformers.
+- For memory errors, reduce `--batch-size` or `--cutoff-len`.
+- For best results, use datasets formatted to match the official Gemma 3n chat template.
+
+## Example: Manual GGUF Export with llama.cpp
+
+If the script prints a message about manual conversion, run:
+
+```bash
+python convert-hf-to-gguf.py --outtype q4_k_xl --outfile training_runs/myjob/adapter-gguf-q4_k_xl training_runs/myjob/adapter
+```
+
+## References
+
+- [Unsloth Documentation](https://unsloth.ai/)
+- [Gemma 3n E4B Model Card](https://huggingface.co/unsloth/gemma-3n-E4B-it)
+- [llama.cpp GGUF Export Guide](https://github.com/ggerganov/llama.cpp)
+
---
+
title: Multimodal AI Backend Service
emoji: 🚀
colorFrom: yellow
@@ -6,6 +67,7 @@ colorTo: purple
sdk: docker
app_port: 8000
pinned: false
+
---

# firstAI - Multimodal AI Backend 🚀
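For reference, the training data is plain JSONL with one record per line, using the field names passed via `--prompt-field`/`--response-field`. A minimal sketch for producing such a file (the two records mirror `sample_data/mini_test.jsonl`; the output path is illustrative):

```python
# Minimal sketch: write a JSONL dataset in the prompt/response schema expected by
# training/train_gemma_unsloth.py. Records mirror sample_data/mini_test.jsonl;
# the output path is illustrative.
import json

records = [
    {"prompt": "What is 2+2?", "response": "2+2 is 4."},
    {"prompt": "What color is the sky?", "response": "The sky is blue."},
]

with open("sample_data/train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```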
README_DEPLOY_HF.md
ADDED
@@ -0,0 +1,68 @@
+# Hugging Face Inference Endpoint: Gemma-3n-E4B-it LoRA Adapter
+
+This repository provides a LoRA adapter fine-tuned on top of a Hugging Face Transformers model (e.g., Gemma-3n-E4B-it) using PEFT. It is ready to be deployed as a Hugging Face Inference Endpoint.
+
+## How to Deploy as an Endpoint
+
+1. **Upload the `adapter` directory (produced by training) to your Hugging Face Hub repository.**
+
+   - The directory should contain `adapter_config.json`, `adapter_model.bin`, and tokenizer files.
+
+2. **Add a `handler.py` file to define the endpoint logic.**
+
+3. **Push to the Hugging Face Hub.**
+
+4. **Deploy as an Inference Endpoint via the Hugging Face UI.**
+
+---
+
+## Example `handler.py`
+
+This file loads the base model and LoRA adapter, and exposes a `__call__` method for inference.
+
+```python
+from typing import Dict, Any
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from peft import PeftModel, PeftConfig
+import torch
+
+class EndpointHandler:
+    def __init__(self, path="."):
+        # Load base model and tokenizer
+        base_model_id = "<BASE_MODEL_ID>"  # e.g., "google/gemma-2b"
+        self.tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
+        base_model = AutoModelForCausalLM.from_pretrained(base_model_id, trust_remote_code=True)
+        # Load LoRA adapter
+        self.model = PeftModel.from_pretrained(base_model, f"{path}/adapter")
+        self.model.eval()
+        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+        self.model.to(self.device)
+
+    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
+        prompt = data["inputs"] if isinstance(data, dict) else data
+        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.device)
+        with torch.no_grad():
+            output = self.model.generate(**inputs, max_new_tokens=256)
+        decoded = self.tokenizer.decode(output[0], skip_special_tokens=True)
+        return {"generated_text": decoded}
+```
+
+- Replace `<BASE_MODEL_ID>` with the correct base model (e.g., `google/gemma-2b`).
+- The endpoint will accept a JSON payload with an `inputs` field containing the prompt.
+
+---
+
+## Notes
+
+- Make sure your `requirements.txt` includes `transformers`, `peft`, and `torch`.
+- For large models, use an Inference Endpoint with GPU.
+- You can customize the handler for chat formatting, streaming, etc.
+
+---
+
+## Quickstart
+
+1. Train your adapter with `train_gemma_unsloth.py`.
+2. Upload the `adapter` directory and `handler.py` to your Hugging Face repo.
+3. Deploy as an Inference Endpoint.
+4. Send requests to your endpoint!
handler.py
ADDED
@@ -0,0 +1,23 @@
+from typing import Dict, Any
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from peft import PeftModel
+import torch
+
+class EndpointHandler:
+    def __init__(self, path="."):
+        # Set your base model here (must match the one used for LoRA training)
+        base_model_id = "google/gemma-2b"  # CHANGE if you used a different base
+        self.tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
+        base_model = AutoModelForCausalLM.from_pretrained(base_model_id, trust_remote_code=True)
+        self.model = PeftModel.from_pretrained(base_model, f"{path}/adapter")
+        self.model.eval()
+        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+        self.model.to(self.device)
+
+    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
+        prompt = data["inputs"] if isinstance(data, dict) else data
+        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.device)
+        with torch.no_grad():
+            output = self.model.generate(**inputs, max_new_tokens=256)
+        decoded = self.tokenizer.decode(output[0], skip_special_tokens=True)
+        return {"generated_text": decoded}
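Before deploying, the handler can be exercised locally with a quick smoke test (a sketch; it assumes the `adapter/` directory produced by training sits next to `handler.py` and that the base model fits in local memory):

```python
# Sketch: local smoke test for handler.py before deployment.
# Assumes ./adapter exists in the repository root and the base model fits in memory.
from handler import EndpointHandler

handler = EndpointHandler(path=".")
result = handler({"inputs": "What color is the sky?"})
print(result["generated_text"])
```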
requirements.txt
CHANGED
@@ -1,12 +1,5 @@
-
-
-
-
-
-unsloth>=2024.7.0
-datasets>=2.20.0
-trl>=0.9.6
-peft>=0.11.1
-transformers>=4.36.0
-torch>=2.0.0
-accelerate>=0.24.0
+
+transformers
+peft
+torch
+datasets
sample_data/mini_test.jsonl
ADDED
@@ -0,0 +1,2 @@
+{"prompt": "What is 2+2?", "response": "2+2 is 4."}
+{"prompt": "What color is the sky?", "response": "The sky is blue."}
training/train_gemma_unsloth.py
CHANGED
@@ -28,42 +28,20 @@ def _import_training_libs() -> Dict[str, Any]:
    If mode=="hf": AutoTokenizer, AutoModelForCausalLM, get_peft_model, LoraConfig, torch
    """
    # Avoid heavy optional deps on macOS (no xformers/bitsandbytes)
-    os.environ.setdefault("UNSLOTH_DISABLE_XFORMERS", "1")
-    os.environ.setdefault("UNSLOTH_DISABLE_BITSANDBYTES", "1")
    from datasets import load_dataset
-    from
-
-
-
-
-
-
-
-
-
-
-
-    logger.warning(
-        "Primary Unsloth import failed, falling back to HF+PEFT: %s",
-        e,
-        exc_info=True,
-    )
-    # Fallback: pure HF + PEFT (CPU / MPS friendly)
-    from transformers import AutoTokenizer, AutoModelForCausalLM
-    from peft import get_peft_model, LoraConfig
-    import torch
-    return {
-        "mode": "hf",
-        "load_dataset": load_dataset,
-        "SFTTrainer": SFTTrainer,
-        "SFTConfig": SFTConfig,
-        "AutoTokenizer": AutoTokenizer,
-        "AutoModelForCausalLM": AutoModelForCausalLM,
-        "get_peft_model": get_peft_model,
-        "LoraConfig": LoraConfig,
-        "torch": torch,
-    }
+    from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
+    from peft import get_peft_model, LoraConfig
+    import torch
+    return {
+        "load_dataset": load_dataset,
+        "AutoTokenizer": AutoTokenizer,
+        "AutoModelForCausalLM": AutoModelForCausalLM,
+        "get_peft_model": get_peft_model,
+        "LoraConfig": LoraConfig,
+        "Trainer": Trainer,
+        "TrainingArguments": TrainingArguments,
+        "torch": torch,
+    }


def parse_args():
@@ -87,6 +65,10 @@ def parse_args():
    p.add_argument("--use-fp16", dest="use_fp16", action="store_true")
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--dry-run", dest="dry_run", action="store_true", help="Write DONE and exit without training (for CI)")
+    p.add_argument("--grpo", dest="use_grpo", action="store_true", help="Enable GRPO (if supported by Unsloth)")
+    p.add_argument("--cpt", dest="use_cpt", action="store_true", help="Enable CPT (if supported by Unsloth)")
+    p.add_argument("--export-gguf", dest="export_gguf", action="store_true", help="Export model to GGUF Q4_K_XL after training")
+    p.add_argument("--gguf-out", dest="gguf_out", default=None, help="Path to save GGUF file (if exporting)")
    return p.parse_args()


@@ -127,74 +109,46 @@ def main():
    # Training imports (supports Unsloth fast path and HF fallback)
    libs: Dict[str, Any] = _import_training_libs()
    load_dataset = libs["load_dataset"]
-
-
+    AutoTokenizer = libs["AutoTokenizer"]
+    AutoModelForCausalLM = libs["AutoModelForCausalLM"]
+    get_peft_model = libs["get_peft_model"]
+    LoraConfig = libs["LoraConfig"]
+    Trainer = libs["Trainer"]
+    TrainingArguments = libs["TrainingArguments"]
+    torch = libs["torch"]

-    # Environment for stability on T4 etc per Unsloth guidance
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
    os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")

    print(f"[train] Loading base model: {args.model_id}")
-
-
-
-
-
-
-
-        load_in_4bit=False,
-        dtype=None,
-        use_gradient_checkpointing="unsloth",
-    )
-    # Prepare LoRA via Unsloth helper
-    print("[train] Attaching LoRA adapter (Unsloth)")
-    model = FastLanguageModel.get_peft_model(
-        model,
-        r=args.lora_r,
-        lora_alpha=args.lora_alpha,
-        lora_dropout=0,
-        bias="none",
-        target_modules=["q_proj","k_proj","v_proj","o_proj","gate_proj","up_proj","down_proj"],
-        use_rslora=True,
-        loftq_config=None,
-    )
-    else:
-        # HF + PEFT fallback (CPU / MPS)
-        AutoTokenizer = libs["AutoTokenizer"]
-        AutoModelForCausalLM = libs["AutoModelForCausalLM"]
-        get_peft_model = libs["get_peft_model"]
-        LoraConfig = libs["LoraConfig"]
-        torch = libs["torch"]
-
-        tokenizer = AutoTokenizer.from_pretrained(args.model_id, use_fast=True, trust_remote_code=True)
-        # Prefer MPS on Apple Silicon if available
-        use_mps = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
-        if not use_mps:
-            if args.use_fp16:
-                dtype = torch.float16
-            elif args.use_bf16:
-                dtype = torch.bfloat16
-            else:
-                dtype = torch.float32
+    tokenizer = AutoTokenizer.from_pretrained(args.model_id, use_fast=True, trust_remote_code=True)
+    use_mps = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
+    if not use_mps:
+        if args.use_fp16:
+            dtype = torch.float16
+        elif args.use_bf16:
+            dtype = torch.bfloat16
        else:
            dtype = torch.float32
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+    else:
+        dtype = torch.float32
+    model = AutoModelForCausalLM.from_pretrained(
+        args.model_id,
+        torch_dtype=dtype,
+        trust_remote_code=True,
+    )
+    if use_mps:
+        model.to("mps")
+    print("[train] Attaching LoRA adapter (PEFT)")
+    lora_config = LoraConfig(
+        r=args.lora_r,
+        lora_alpha=args.lora_alpha,
+        target_modules=["q_proj","k_proj","v_proj","o_proj","gate_proj","up_proj","down_proj"],
+        lora_dropout=0.0,
+        bias="none",
+        task_type="CAUSAL_LM",
+    )
+    model = get_peft_model(model, lora_config)

    # Load dataset
    print(f"[train] Loading dataset: {args.dataset}")
@@ -229,29 +183,37 @@

    ds = ds.map(map_fn, remove_columns=[c for c in ds.column_names if c != "text"])

+    # Tokenize dataset
+    def tokenize_fn(ex):
+        return tokenizer(
+            ex["text"],
+            truncation=True,
+            max_length=args.cutoff_len,
+            padding="max_length",
+        )
+    tokenized_ds = ds.map(tokenize_fn, batched=True)
+
    # Trainer
-
+    training_args = TrainingArguments(
+        output_dir=str(out_dir / "hf"),
+        per_device_train_batch_size=args.batch_size,
+        gradient_accumulation_steps=args.gradient_accumulation,
+        learning_rate=args.lr,
+        num_train_epochs=args.epochs,
+        max_steps=args.max_steps if args.max_steps else -1,
+        logging_steps=10,
+        save_steps=200,
+        save_total_limit=2,
+        bf16=args.use_bf16,
+        fp16=args.use_fp16,
+        seed=args.seed,
+        report_to=[],
+    )
+    trainer = Trainer(
        model=model,
+        args=training_args,
+        train_dataset=tokenized_ds,
        tokenizer=tokenizer,
-        train_dataset=ds,
-        max_seq_length=args.cutoff_len,
-        dataset_text_field="text",
-        packing=True,
-        args=SFTConfig(
-            output_dir=str(out_dir / "hf"),
-            per_device_train_batch_size=args.batch_size,
-            gradient_accumulation_steps=args.gradient_accumulation,
-            learning_rate=args.lr,
-            num_train_epochs=args.epochs,
-            max_steps=args.max_steps if args.max_steps else -1,
-            logging_steps=10,
-            save_steps=200,
-            save_total_limit=2,
-            bf16=args.use_bf16,
-            fp16=args.use_fp16,
-            seed=args.seed,
-            report_to=[],
-        ),
    )

    print("[train] Starting training...")
@@ -259,20 +221,18 @@
    print("[train] Saving adapter...")
    adapter_path = out_dir / "adapter"
    adapter_path.mkdir(parents=True, exist_ok=True)
-    # Save adapter-only weights if PEFT; Unsloth path is also PEFT-compatible
    try:
-        # Primary model saving logic
        model.save_pretrained(str(adapter_path))
    except Exception as e:
-        logger.error("Error during
-        try:
-            # Fallback model saving logic
-            model.base_model.save_pretrained(str(adapter_path))  # type: ignore[attr-defined]
-        except Exception as fallback_e:
-            logger.error("Fallback model saving failed: %s", fallback_e, exc_info=True)  # type: ignore
-            pass  # Optionally re-raise or handle accordingly
+        logger.error("Error during model saving: %s", e, exc_info=True)
    tokenizer.save_pretrained(str(adapter_path))

+    # Optionally export to GGUF Q4_K_XL
+    if args.export_gguf:
+        print("[train] Export to GGUF is not supported in Hugging Face-only mode. Use llama.cpp's convert-hf-to-gguf.py after training.")
+        gguf_path = args.gguf_out or str(out_dir / "adapter-gguf-q4_k_xl")
+        print(f"python convert-hf-to-gguf.py --outtype q4_k_xl --outfile {gguf_path} {adapter_path}")
+
    # Write done file
    (out_dir / "DONE").write_text("ok")
    elapsed = time.time() - start
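In Hugging Face-only mode the script only prints the llama.cpp command, and `convert-hf-to-gguf.py` generally expects a full model directory rather than adapter-only weights. A sketch of merging the LoRA adapter into its base model first (not part of this commit; the model ID and paths are illustrative, with the base matching the `model_id` recorded in `meta.json`):

```python
# Sketch (not part of this commit): merge the LoRA adapter into its base model so that
# llama.cpp's convert-hf-to-gguf.py can consume a plain Hugging Face checkpoint.
# Model ID and paths are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "unsloth/gemma-3n-E4B-it"    # base recorded in training_runs/*/meta.json
adapter_dir = "training_runs/myjob/adapter"  # output of train_gemma_unsloth.py
merged_dir = "training_runs/myjob/merged"

base = AutoModelForCausalLM.from_pretrained(base_model_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_dir)
merged = model.merge_and_unload()  # fold LoRA deltas into the base weights
merged.save_pretrained(merged_dir)
AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True).save_pretrained(merged_dir)
```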
training_runs/devlocal/meta.json
CHANGED
@@ -1,6 +1,6 @@
{
  "job_id": "devlocal",
-  "model_id": "unsloth/gemma-
-  "dataset": "sample_data/
-  "created_at":
-}
+  "model_id": "unsloth/gemma-2b",
+  "dataset": "sample_data/mini_test.jsonl",
+  "created_at": 1754645651
+}
training_runs/realtrain/DONE
ADDED
@@ -0,0 +1 @@
+dry_run
training_runs/realtrain/meta.json
ADDED
@@ -0,0 +1,6 @@
+{
+  "job_id": "realtrain",
+  "model_id": "unsloth/gemma-3n-E4B-it",
+  "dataset": "sample_data/mini_test.jsonl",
+  "created_at": 1754644903
+}
training_runs/testload/DONE
ADDED
@@ -0,0 +1 @@
+dry_run
training_runs/testload/meta.json
ADDED
@@ -0,0 +1,6 @@
+{
+  "job_id": "testload",
+  "model_id": "unsloth/gemma-3n-E4B-it",
+  "dataset": "sample_data/mini_test.jsonl",
+  "created_at": 1754643124
+}