Commit History
2598c9f (winglian): allow the sharegpt handler to also better handle datasets destined for openai finetuning (#1361)
decb66e (winglian): lora+ support (#1352)
4d09b42 (winglian): plain input/output prompt strategy w/o chat templates (#1346)
b5b4492 (chiragjn): Fix validation for early stopping (#1358)
6b3b271 (winglian): fix for protected model_ namespace w pydantic (#1345)
3a5a2d2 (chiragjn): Fix `use_mlflow` to be bool instead of str (#1344)
0f985e1 (winglian): more fixes 20240228 (#1342) [skip ci]
c1a7b3d (winglian): add gemma instruct chat template (#1341)
2b9687f (eltociear): Update fastchat_conversation_turns.py (#1294) [skip ci]
2c9c88b (winglian): fix steps check for anneal on first cycle (#1316)
3f69571 (winglian): more pydantic fixes (#1338)
1e3d530: Support user-defined prompt processing strategies for dpo (#1248)
1648279: add lion-pytorch optimizer (#1299) [skip ci]
269c543 (winglian): hotfix to exclude_unset from pydantic config when converting back to a dict (#1334)
e7eed20 (winglian): hotfix for missing outputs params (#1333)
cf00231 (winglian): hotfix for lora rank (#1332)
7de912e (winglian): hotfix for capabilities loading (#1331)
cc3cebf (winglian): Pydantic 2.x cfg (#1239)
5894f0e (winglian): make mlflow optional (#1317)
2752d5f (winglian): multipack for gemma (#1313)
3c00f40 (David Meikle): Allow load_best_model_at_end to be configured for early stopping on custom evaluation datasets (#1291)
e2786cc (LeonardoEmili): Validation always happens on first step (#1300)
5a5d474 (LeonardoEmili): Add seq2seq eval benchmark callback (#1274)
8430db2 (jinwonkim93): Scheduler implementation of "Continual Pre-Training of Large Language Models: How to (re)warm your model?" (#1273)
4b997c3 (winglian): allow the optimizer prune ratio for ReLoRA to be configurable (#1287)
fac2d98: Add MPS support (#1264)
ea00dd0 (winglian): don't use load and push together (#1284)
9bca7db (hamel): add support for https remote yamls (#1277)
91cf4ee (hamel): allow remote data paths (#1278)
5698943 (winglian): simplify handling for newer multipack patches so they can be added in a single place (#1270)
73f1bda (Zac Brannelly): Fix bug preventing model_kwargs being injected (#1262)
13eea21 (Philip May): Add more save strategies for DPO training. (#1255)
1072f28 (chiragjn): Fix typo `bloat16` -> `bfloat16` (#1257)
c7cf381 (winglian): Pretrain transforms (#1261)
8c2e05a (winglian): relora: magnitude pruning of the optimizer (#1245)
00568c1 (winglian): support for true batches with multipack (#1230)
c67fb71 (winglian): Peft deepspeed resume (#1227)
25e037f: Support for additional_special_tokens (#1221) [skip ci]
5787e1a: Fix and document test_datasets (#1228)
8608d80 (xhedit): Fix typo (#1231) [skip ci]
4cb7900 (winglian): Peft lotfq (#1222)
18f8119: FEAT: add tagging support to axolotl for DPOTrainer (#1209)
8da1633 (winglian): Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci]
36d053f (winglian): run PR e2e docker CI tests in Modal (#1217) [skip ci]
af29d81: ADD: warning if hub_model_id is set but no save strategy (#1202)
62ca4a2 (DreamGenX): Respect sliding_window=None (#1214)
e923e62 (winglian): more checks and fixes for deepspeed and fsdp (#1208) [skip ci]