support user defined prompters, pretokenized datasets in config, local parquet, local arrow files (#348) d2e7f27 winglian committed on Aug 20, 2023
use save_strategy from config if available (#434) b3f5e00 winglian committed on Aug 19, 2023
Fix(config): Update handling of deepspeed config (#404) c01015f Nanobit committed on Aug 15, 2023
better handling of empty input ids when tokenizing (#395) 85cf4f8 winglian committed on Aug 15, 2023
use context manager to run things on rank0 before others (#397) fc2d6be winglian committed on Aug 15, 2023
Error msg for sharegpt if conv has less than 2 msg (#379) 63fdb5a flotos committed on Aug 14, 2023
don't pass rope_scaling kwarg if it's None (#383) 919246f winglian committed on Aug 13, 2023
try to detect accelerate and only use device_map=None in that case (#373) 094fc2c tmm1 committed on Aug 13, 2023
Attention mask and position id fixes for packing (#285) 2bb0b78 winglian committed on Aug 12, 2023
Add wandb_entity to wandb options, update example configs, update README (#361) 7019509 Morgan McGuire and winglian committed on Aug 12, 2023
Fix(model loading): Warn when model revision is passed to gptq (#364) 96bd6ae Nanobit committed on Aug 12, 2023
Fix(message): Improve error message for bad format (#365) e37d935 Nanobit committed on Aug 12, 2023
ensure enable_input_require_grads is called on model before getting the peft model (#345) 176b888 winglian committed on Aug 6, 2023
experimental llama 2 chat support (#296) 3392270 Jan Philipp Harries committed on Aug 6, 2023
Update XFormers Attention Monkeypatch to handle Llama-2 70B (GQA) (#339) 10405b9 ssmi153 committed on Aug 6, 2023
Added Orca Mini prompt strategy (#263) c93655c Jan Philipp Harries committed on Aug 5, 2023
optimize the iteration when tokenizing large datasets (#332) fe28543 winglian committed on Aug 4, 2023
scope flash-attn+qlora fix correctly, scope to llama, add comment 78b9efb tmm1 committed on Aug 3, 2023
ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype 248bf90 tmm1 committed on Aug 2, 2023
add peft install back since it doesn't get installed by setup.py (#331) db2a358 winglian committed on Jul 31, 2023
update prompts for open orca to match the paper (#317) 3d4984b winglian committed on Jul 22, 2023