Commit History
ec3c031  Merge pull request #65 from NanoCode012/feat/target-linear (Nanobit)
fe0e69f  Fix recommendation condition (Nanobit)
32e6fe9  load the tokenizer seperately from the model (winglian)
9196237  Add cfg.lora_target_linear (Nanobit)
a5bf838  add logging and make sure model unloads to float16 (winglian)
a4f1241  update readme and add typehints (winglian)
48f4c05  fix validation for qlora merge (winglian)
1987e5c  qlora and 4bit check so we are able to merge and unload (winglian)
7b5e762  fix merge conflict failure, black format (winglian)
34c99f9  fixes to make qlora actually work (winglian)
2e56203  another fix for shard and train split (winglian)
ac79360  shard fix (winglian)
943961f  missed ... (winglian)
d2a6f79  change auth token setting back (winglian)
e396654  fix tokenizer loading, got openllama 3b working (winglian)
a5d739b  fixes w/ example for super basic lora starter (winglian)
de2a733  Merge pull request #55 from OpenAccess-AI-Collective/missing-validation-file (winglian)
1d7da3b  add missing file (winglian)
f523a08  stray s (winglian)
676d7da  cfg.cfg fix, also de-dupe lora module list (winglian)
a8771b0  fix tuple add to list (winglian)
ffd1043  attempt to find linear modules for qlora (winglian)
ce34d64  apply black formatting (winglian)
ce694e2  Merge branch 'main' of github.com:OpenAccess-AI-Collective/axolotl into dev (winglian)
1f5d83e  remove un-needed code, add validation (winglian)
88ad05d  fix: handles AutoTokenizer from untrusted source (Valentin De Matos)
e8aacfb  more qlora support (winglian)
b9d07aa  prepare does all this already for qlora? (winglian)
3b4d055  integrate qlora? maybe? (winglian)
2ae936f  fix missing fp16 kwarg (winglian)
fb100a9  fix enum pass as value (winglian)
3a50377  Add qa style data for alpaca instructions, fix one_cycle scheduler (winglian)
de6da13  don't need to set here (winglian)
9493b1b  be able to use adam bnb 8bit and one cycle scheduler w fsdp (winglian)
607a4d3  make sure to use train split if loading from hf (winglian)
99383f1  make one cycle lr div factor configurable (winglian)
0f74464  fix new dataset prompt tokenizers (winglian)
e0602a9  add missing __init__ (winglian)
2809f3f  pygmalion dataset prompts format, cached tokenized datasets should be hashed on the tokenizer too (winglian)
4ea9a66  tokenization fixes (winglian)
1d5ab84  optionally be able to specify alpaca or chat style prompts (winglian)
641f801  Set `half` using `cfg.fp16` for 4bit (Nanobit)
1365073  concise multiple choice and tldr summarize (winglian)
8c2f3cb  support for replit lm (winglian)
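For context on the "attempt to find linear modules for qlora" (ffd1043), "cfg.cfg fix, also de-dupe lora module list" (676d7da), and "Add cfg.lora_target_linear" (9196237) commits: the general technique they refer to is walking a model's modules, collecting the leaf names of every linear layer, and passing those names to a LoRA config as its target modules. The sketch below is an illustration of that idea only, not the repository's actual code; the function name and the exclusion of lm_head are assumptions.

# Minimal sketch (assumed names, not axolotl's implementation) of
# discovering linear-layer names to use as LoRA target_modules.
import torch.nn as nn


def find_linear_module_names(model: nn.Module) -> list[str]:
    """Return de-duplicated leaf names of all nn.Linear modules in the model."""
    names = set()  # a set de-dupes repeated layer names across blocks
    for full_name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            # Keep only the leaf segment (e.g. "q_proj"), which is what
            # PEFT-style target_modules lists typically contain.
            names.add(full_name.split(".")[-1])
    # Assumption: the output head is usually left out of LoRA targeting.
    names.discard("lm_head")
    return sorted(names)

The returned list would then be used in place of a hand-written target-module list when a config flag such as lora_target_linear is enabled.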