---
datasets:
- Aeala/ShareGPT_Vicuna_unfiltered
---
## LoRA Info:
Please note that this is a highly experimental LoRA model. It can produce good results, but it may also produce undesirable ones. Training is paused for now. Feel free to try it!~

**Important Note**: While this is trained on a cleaned ShareGPT dataset like the one Vicuna used, it was trained in the *Alpaca* format, so prompts should look something like:
```
### Instruction:
<prompt> (without the <>)
### Response:
```
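The template above can be applied programmatically before sending text to the model. A minimal sketch (the helper name is illustrative, not part of this repo):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template shown above.

    The model expects the '### Instruction:' / '### Response:' headers;
    generation should continue from the trailing '### Response:' line.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```

The prompt ends right after `### Response:` so the model's completion becomes the answer.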
Current upload: a checkpoint from a retrain at ~1000 steps, using the fixed QLoRA repo. (**6/4/2023**)
## Benchmarks
- **wikitext2:** Coming soon...
- **ptb-new:** Coming soon...
- **c4-new:** Coming soon...