---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- dumb-dev/cpp-10k
- dumb-dev/Encoding-Detection-w-cChardet-DB
- Neloy262/rust_instruction_dataset
- m-a-p/CodeFeedback-Filtered-Instruction
- sahil2801/CodeAlpaca-20k
- vicgalle/alpaca-gpt4
language:
- en
---
# I fine-tuned TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the following datasets:
- dumb-dev/cpp-10k
- dumb-dev/Encoding-Detection-w-cChardet-DB
- Neloy262/rust_instruction_dataset
- m-a-p/CodeFeedback-Filtered-Instruction
- sahil2801/CodeAlpaca-20k
- vicgalle/alpaca-gpt4
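A minimal usage sketch, assuming the merged weights load like the base model and reuse its chat template (prompt and generation settings are illustrative):

```python
# Usage sketch -- assumes the merged model loads like the TinyLlama base
# model and ships with its chat template. Prompt/settings are examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dumb-dev/TinyLlama-1.1B-Chat-rust-cpp-encodings"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a Rust function that reverses a string."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```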
## The per-dataset LoRA adapters can be found [here](https://huggingface.co/dumb-dev/TinyLlama-1.1B-Chat-rust-cpp-encodings/tree/main/LORAs)
Only the 1e-4 LoRAs were merged into the final model!
Every adapter was trained for a total of 2 epochs.
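For reference, a sketch of how such a merge could be reproduced with PEFT by folding the adapters into the base model one after another (the adapter paths below are placeholders, not the actual folder names in the LORAs directory):

```python
# Sketch: merge several LoRA adapters into the base model sequentially.
# Adapter paths are placeholders; the real folders live under LORAs/.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

adapter_paths = [
    "LORAs/adapter-a",  # placeholder
    "LORAs/adapter-b",  # placeholder
]

for path in adapter_paths:
    # Wrap the current weights with the adapter, then bake it in.
    merged = PeftModel.from_pretrained(base, path)
    base = merged.merge_and_unload()

base.save_pretrained("TinyLlama-1.1B-Chat-rust-cpp-encodings-merged")
```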
### Probably the reason why it performs this badly:
The following 3 LoRAs are fp16, the other ones are fp32 (a dtype-normalization sketch follows the list):
1. [this](https://huggingface.co/dumb-dev/TinyLlama-1.1B-Chat-rust-cpp-encodings/tree/main/LORAs/300mb-DB-CodeFeedback-Tinyllama)
2. [this](https://huggingface.co/dumb-dev/TinyLlama-1.1B-Chat-rust-cpp-encodings/tree/main/LORAs/tinyllama-rust)
3. [this](https://huggingface.co/dumb-dev/TinyLlama-1.1B-Chat-rust-cpp-encodings/tree/main/LORAs/tinyllama-cpp)
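One way to rule out the precision mismatch would be to cast everything to a single dtype before merging. A minimal sketch, assuming the adapters are in the standard PEFT format (the adapter path is one of the folders linked above):

```python
# Sketch: normalize all weights to fp32 before merging, so fp16 and
# fp32 adapters end up in the same precision.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.float32
)

model = PeftModel.from_pretrained(base, "LORAs/tinyllama-cpp")  # an fp16 adapter
model = model.to(torch.float32)  # cast the adapter weights up to fp32
model = model.merge_and_unload()
```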
# If you know how to improve this, please let me know.
Instagram: dev2care