fix(model): apply gate fp32 only for Mixtral (#1241) 2d65f47 Nanobit winglian committed on Feb 1
add contact info for dedicated support for axolotl [skip ci] (#1243) dfd1885 winglian committed on Feb 1
Support for additional_special_tokens (#1221) [skip ci] 25e037f DreamGenX winglian committed on Jan 31
set torch version to what is installed during axolotl install (#1234) 8f2b591 winglian committed on Jan 31
FEAT: add tagging support to axolotl for DPOTrainer (#1209) 18f8119 Filippo Broggini winglian committed on Jan 27
Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci] 8da1633 winglian committed on Jan 26
ADD: warning if hub_model_id is set but no save strategy is set (#1202) af29d81 JohanWork winglian committed on Jan 26
ensure the tests use the same version of torch as the latest base docker images (#1215) [skip ci] 1b18003 winglian committed on Jan 26
Update qlora.yml - remove `max_packed_sequence_len` (#1210) [skip ci] 5407ddd 7flash committed on Jan 26
drop py39 docker images, add py311, upgrade PyTorch to 2.1.2 (#1205) 74c72ca winglian committed on Jan 26
more checks and fixes for DeepSpeed and FSDP (#1208) [skip ci] e923e62 winglian committed on Jan 26
workaround for transformers bug requiring do_sample for saving pretrained (#1206) ba944e6 winglian committed on Jan 25
make sure to register the base chatml template even if no system message is provided (#1207) badda37 winglian committed on Jan 25
precompute DPO logprobs setting and fixes (#1199) [skip ci] 33e1170 winglian committed on Jan 25
fix learning rate scheduler warnings (#1135) [skip ci] b4ac96a ricdomolm winglian committed on Jan 25
Feat/chatml add system message (#1117) 98b4762 mhenrichsen Mads Henrichsen winglian committed on Jan 25
fix(log): improve warning to clarify that lora_modules_to_save expects a list (#1197) 08719b9 Nanobit committed on Jan 25
Standardize system prompt format for AlpacaPrompter (#1190) [skip ci] af02430 Oleh Kuznetsov committed on Jan 24
upgrade DeepSpeed to 0.13.1 for Mixtral fixes (#1189) [skip ci] 8a49309 winglian committed on Jan 24
more DPO fixes for dataset loading and docs (#1185) [skip ci] 5bce45f winglian committed on Jan 24
Fix generation_config validation raising an Exception for do_merge_lora (#1184) 02f2c72 tisorlawan committed on Jan 24
Add support for offline mode with HF_HUB_OFFLINE envvar (#1182) 71141de James Wade winglian committed on Jan 24
don't fail if weights can't be cast due to offload when merging (#1172) [skip ci] fb7f9b9 winglian committed on Jan 23
Fine-Tuning Mistral-7b for Real-World Chatbot Applications Using Axolotl (LoRA used) (#1155) cc25039 Tilemachos Chatzipapas twenty8th winglian committed on Jan 23
Feat(test): Add tests for alpaca chatml prompt tokenizer (#1088) 5439707 JohanWork Nanobit committed on Jan 23