Update README.md
README.md CHANGED
@@ -18,6 +18,8 @@ datasets:
 ---
 Due to Huggingface's new maximum storage limit, I might not be able to upload the final trained version for a while.
 
+In case I can't upload it here, you can check out [models.minipasila.net](https://models.minipasila.net/).
+
 (Updated to 2500th step)
 So this is only the 2500th step (out of 3922), trained on Google Colab because I'm a little low on money, but at least that's free. While testing the LoRA, it seems to perform fairly well. The only real issue with this base model is that it only has a 2048-token context size.
 