Update README.md
README.md CHANGED

@@ -1,6 +1,8 @@
 ---
 inference: false
 license: other
+datasets:
+- gozfarb/ShareGPT_Vicuna_unfiltered
 ---
 
 <div style="width: 100%;">
@@ -77,3 +79,27 @@ Donaters will get priority support on any and all AI/LLM/model questions, and I'
 * Discord: https://discord.gg/UBgz4VXf
 
 # Original model card: Aeala's VicUnlocked Alpaca 65B QLoRA
+
+
+## LoRA Info:
+Please note that this is a highly experimental LoRA model. It may do some good stuff, it might do some undesirable stuff. Training is paused for now. Feel free to try it!~
+
+**Important Note**: While this is trained on a cleaned ShareGPT dataset like Vicuna used, this was trained in the *Alpaca* format, so prompting should be something like:
+
+```
+### Instruction:
+
+<prompt> (without the <>)
+
+### Response:
+```
+
+Current upload: checkpoint of step 1200 in training.
+
+
+## Benchmarks
+**wikitext2:** Coming soon...
+
+**ptb-new:** Coming soon...
+
+**c4-new:** Coming soon...
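
For reference, below is a minimal sketch of building a prompt in the Alpaca format shown in the card above. The helper name and the example instruction are illustrative assumptions, not part of the original README.

```python
# Minimal sketch (not from the original card): wrap a plain instruction in the
# Alpaca-style template that the model card above says this LoRA was trained on.

def build_alpaca_prompt(instruction: str) -> str:
    """Return the instruction wrapped in the Alpaca prompt template."""
    return (
        "### Instruction:\n"
        "\n"
        f"{instruction}\n"
        "\n"
        "### Response:\n"
    )


if __name__ == "__main__":
    # Example usage: the resulting string is what you would pass to a
    # text-generation pipeline; the model's answer follows "### Response:".
    print(build_alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```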