Latest commit: fac2d98 by Maxime, "Add MPS support" (#1264)
| Directory | Latest commit |
| --- | --- |
| cerebras | Update qlora.yml - remove `max_packed_sequence_len` (#1210) [skip ci] |
| code-llama | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| colab-notebooks | lock pytorch (#1247) [skip ci] |
| falcon | Falcon embeddings (#1149) [skip docker] |
| gptj | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| jeopardy-bot | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| llama-2 | Peft lotfq (#1222) |
| mamba | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| mistral | Mixtral fixes 20240124 (#1192) [skip ci] |
| mpt-7b | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| openllama-3b | Add shifted sparse attention (#973) [skip-ci] |
| phi | Mixtral fixes 20240124 (#1192) [skip ci] |
| pythia-12b | Feat(wandb): Refactor to be more flexible (#767) |
| pythia | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| qwen | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| redpajama | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| replit-3b | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| tiny-llama | Add MPS support (#1264) |
| xgen-7b | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
| yi-34B-chat | set fp16 to false if bf16, update bf16: auto in example YAMLs (#1122) [skip ci] |
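Most of the example folders above were last touched by #1122, which switches the example YAMLs to `bf16: auto` and sets `fp16` to false whenever bf16 is in use, so the two half-precision flags never conflict. A minimal sketch of the relevant config fragment, assuming the standard axolotl precision keys:

```yaml
# Mixed-precision settings per #1122 (sketch, not a full example config):
# `auto` lets axolotl enable bf16 on hardware that supports it,
# and fp16 is explicitly false so both flags are never set at once.
bf16: auto
fp16: false
```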