qwerrwe / examples / code-llama

Commit History

set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci]
782b6a4

winglian and Nanobit committed

Add shifted sparse attention (#973) [skip-ci]
1d70f24

joecummings and winglian committed

new evals_per_epoch and saves_per_epoch to make things cleaner (#944)
5f79b82

winglian committed

Feat(wandb): Refactor to be more flexible (#767)
a1da39c

Nanobit committed

don't compile deepspeed or bitsandbytes from source (#837)
f544ab2

winglian committed

fix eval_steps to be a sane default (#797)
8b79ff0

winglian committed

simplify by removing duplicate base_model_config (#772)
2d8def6

winglian committed

prepared dataset caching, other misc fixes (#665)
e50a64e

winglian committed

Fix Codellama examples (#582)
1aa4007

Doan Minh Phuong committed

recommend padding when using sample packing (#531)
3437149

winglian committed

Feat(cfg): Add code-llama configs for all sizes (#479)
3513071

mhenrichsen committed