Hugging Face Space: Pito8/simple-llm-finetuner8
Duplicated from lxe/simple-llm-finetuner
Status: Build error
simple-llm-finetuner8: 4 contributors, 38 commits
Latest commit bc97c87 (unverified) by VadimP, over 1 year ago: "Clarify that 16GB VRAM in itself is enough (#21)"
example-datasets/                        Changed device_map to force GPU, see #6, https://github.com/tloen/alpaca-lora/issues/21 (over 1 year ago)
.gitattributes                1.48 kB    Add .gitattributes for spaces (over 1 year ago)
.gitignore                    101 Bytes  Refactor; fix model/lora loading/reloading in inference. Fixes #10, #6 (over 1 year ago)
Inference.ipynb               4.7 kB     Refactor; fix model/lora loading/reloading in inference. Fixes #10, #6 (over 1 year ago)
README.md                     4.62 kB    Clarify that 16GB VRAM in itself is enough (#21) (over 1 year ago)
Simple_LLaMA_FineTuner.ipynb  21.4 kB    Added ipynb and another example (over 1 year ago)
main.py                       17.1 kB    Added huggingface spaces stuff (over 1 year ago)
requirements.txt              158 Bytes  Update requirements.txt (over 1 year ago)