Sweaterdog committed
Commit ddb2733
1 Parent(s): 4782668
Update README.md
README.md CHANGED
@@ -98,6 +98,10 @@ For Anybody who is wondering what the context length is, for the Hermesv1, they
 
 #
 
+I wanted to include the google colab link, in case you wanted to know how to train models via CSV, or use my dataset to train your own model, on your own settings, on a different model. [Google Colab](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS#scrollTo=2eSvM9zX_2d3)
+
+#
+
 This qwen2, gemma2, and llama3.2 models were trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
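
For anyone who would rather not open the Colab, here is a rough sketch of what a CSV-based fine-tune with Unsloth and TRL usually looks like. The model name, the `text` column, and every hyperparameter below are placeholders rather than settings taken from this repo or the linked notebook, and the `dataset_text_field`/`max_seq_length` arguments follow the older TRL `SFTTrainer` API (newer TRL versions move them into `SFTConfig`).

```python
# Hedged sketch of a CSV-based fine-tune with Unsloth + TRL.
# All names and hyperparameters are placeholders, not values from this repo.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Any Unsloth-supported base works; this 4-bit Llama 3.2 checkpoint is just an example.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
)

# Assumes the CSV has a single "text" column with one training example per row.
dataset = load_dataset("csv", data_files="train.csv", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column holding the raw training text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```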