Commit History
1c7ed26 lock pytorch (#1247) [skip ci] (JohanWork)
c7cf381 Pretrain transforms (#1261) (winglian)
4cb7900 Peft lotfq (#1222) (winglian)
5407ddd Update qlora.yml - remove `max_packed_sequence_len` (#1210) [skip ci] (7flash)
ee0b5f6 add colab example (#1196) [skip ci] (JohanWork)
54d2ac1 Mixtral fixes 20240124 (#1192) [skip ci] (winglian)
814aee6 Phi2 multipack (#1173) (winglian)
cc25039 Fine-Tuning Mistral-7b for Real-World Chatbot Applications Using Axolotl (Lora used) (#1155)
e799e08 Falcon embeddings (#1149) [skip docker] (winglian)
c1b741d pin model_revision for phi2 (#1123) (winglian)
732851f Phi2 rewrite (#1058) (winglian)
553c80f streaming multipack for pretraining dataset (#959)
8ba27f3 fix: lint (#1037) (Nanobit)
c75f916 added tiny llama examples for lora and qlora (#1027) (Tim Dolan)
384b817 Set eval_sample_packing to false in mistral config.yaml (#1003) (Kevin Sydney)
6ef46f8 Add an example config for finetuning a 34B model on a 24GB GPU (#1000) (Evan Griffiths)
628b754 set output_router_logits for mixtral config (#995) (winglian)
93ebec1 change val size (#992) (mhenrichsen)
5ea3aa3 Fix Deepspeed loading (#950) (winglian)
5f79b82 new evals_per_epoch and saves_per_epoch to make things cleaner (#944) (winglian)
7fabc4d Mixtral official (#942) (winglian)
35f9b0f update to latest transformers for mixstral support (#929) (winglian)
68b227a Mixtral multipack (#928) (winglian)
40a6362 support for mamba (#915) (winglian)
a1da39c Feat(wandb): Refactor to be more flexible (#767) (Nanobit)
58ec8b1 feature: loss watchdog for terminating training runs that are failing (#899)
a48dbf6 fix: remove FA for qwen examples (#900) (Nanobit)
1115c50 Feat: Add Qwen (#894) (Nanobit)
9bf854e Phi update 202311 (#876) (winglian)
1470650 various bugfixes (#856) (winglian)
f544ab2 don't compile deepspeed or bitsandbytes from source (#837) (winglian)
8b79ff0 fix eval_steps to be a sane default (#797) (winglian)
9b43e7e disable eval table w sample packing in examples (#778) (winglian)
2d8def6 simplify by removing duplicate base_model_config (#772) (winglian)
15d3a65 Implement fused modules (#747)
ace70b3 Fix: lowercase `True` values in config (#713) (atgctg)
295b266 Get qlora mistral-7b fine tuning working on a single 4090 (#708) (lukemarsden)
f91db19 fix unneeded space (#699) (mhenrichsen)
83a950b lint (mhenrichsen)
4c8ddf2 new lr, sample pack (mhenrichsen)
669f1d0 Fix: Higher vram usage for mistral and sample_packing (#691) (Nanobit)
d4a88e4 Adding qlora config for Mistral (#675) (Abhishek Mishra)
e50a64e prepared dataset caching, other misc fixes (#665) (winglian)
b88f515 Update mistral/README.md (#647) (Adarsh Shirawalmath)
eb41f76 Feat: Add example for Mistral (#644) (Nanobit)
d887ad8 eval_table isn't quite stable enough to be in default llama configs (#637) (winglian)
19a600a Feat: Add support for upstream FA2 (#626) (Nanobit)