skip the gpu memory checks if the device is set to 'auto' (#609) 196ff11 winglian committed on Sep 21, 2023
improve handling for empty text on the tokenization step (#502) 1eebbd0 winglian committed on Sep 19, 2023
btlm and falcon monkey patches for flash attn (#566) 6b9b229 winglian committed on Sep 17, 2023
Feat(data): Allow loading local csv and text (#594) 00dce35 Nanobit committed on Sep 17, 2023
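A minimal sketch of how a local CSV file might be referenced in the dataset config after this change; the specific keys (`ds_type`, `type`) and the file path are illustrative assumptions, not taken from the commit itself.

```yaml
# illustrative only: assumed dataset config keys for loading a local CSV file
datasets:
  - path: data/train.csv   # local file path (assumed)
    ds_type: csv           # assumed key selecting the csv loader
    type: alpaca           # prompt format; any supported format would do
```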
gather/broadcast the max value of the packing efficiency automatically (#463) b15b19e winglian committed on Sep 17, 2023
optionally configure sample packing for evals (#589) 21ec195 winglian committed on Sep 16, 2023
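A hedged sketch of what opting out of sample packing on the eval split might look like in the config; the `eval_sample_packing` key name is an assumption based on the commit title.

```yaml
# illustrative only: assumed keys for packing training samples but not eval samples
sample_packing: true        # pack training samples
eval_sample_packing: false  # assumed key: leave eval samples unpacked
```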
set fsdp state dict (#584) be75668 Jan Philipp Harries committed on Sep 15, 2023
don't resize embeddings if it's already large enough (#577) 3607882 winglian committed on Sep 15, 2023
support custom field for completion from yml (#580) f7a2263 winglian committed on Sep 15, 2023
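A sketch of the custom completion field this commit describes, assuming the dataset entry accepts a `field` key naming the text column; the key name and column name are assumptions.

```yaml
# illustrative only: assumed `field` key for completion-style datasets
datasets:
  - path: my_corpus.jsonl
    type: completion
    field: article_text     # assumed: custom column holding the raw text
```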
prevent cli functions from getting fired on import (#581) 8dcd40a winglian committed on Sep 15, 2023
refactor scripts/finetune.py into new cli modules (#550) 861ceca winglian, Nanobit committed on Sep 15, 2023
remove columns after tokenizing for pretraining (#571) 1157950 winglian committed on Sep 14, 2023
fix save_steps so it doesn't get duplicated (#567) 3fbde76 winglian committed on Sep 14, 2023
improve how we setup eval/save strategies and steps (#547) 36e53c7 winglian committed on Sep 13, 2023
gracefully handle length feature used for group by (#565) e7aa7b1 winglian committed on Sep 13, 2023
Add training callback to send predictions to WandB table (#521) 5b67ea9 Glavin001 committed on Sep 13, 2023
Fix pretraining with iterable/streaming Dataset (#556) 2f586d1 Jan Philipp Harries committed on Sep 13, 2023
recommend padding when using sample packing (#531) 3437149 winglian committed on Sep 6, 2023
Add support for GPTQ using native transformers/peft (#468) 3355706 winglian committed on Sep 5, 2023
Merge pull request #520 from bdashore3/sharegpt-fixes daa4fac mhenrichsen committed on Sep 5, 2023
move is_llama_derived_model into normalize_config (#524) 44454ae tmm1 committed on Sep 4, 2023
Debug tokenization output: Add ability to output text only (no tokens), and/or specify num samples to see (#511) 48434be Tom Jobbins committed on Aug 31, 2023