---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
---
## LoRA Info:
Please note that this is a highly experimental LoRA model. It may do some good stuff, it may do some undesirable stuff. Training is paused for now. Feel free to try it!~

**Important Note**: While this is trained on a cleaned ShareGPT dataset like the one Vicuna used, it was trained in the *Alpaca* format, so prompts should look like:
```
### Instruction:
<prompt> (without the <>)
### Response:
```
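Below is a minimal sketch of how the Alpaca-style prompt above could be assembled and used with this LoRA via `transformers` and `peft`. The base model id and LoRA repo id are placeholders, not part of this card; substitute the ones you actually use.

```python
# Sketch only: model ids below are placeholders, not the official names.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "path-or-id-of-base-model"   # placeholder: the base model this LoRA targets
lora_repo_id = "path-or-id-of-this-lora"     # placeholder: this LoRA adapter's repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, lora_repo_id)  # apply the LoRA adapter

def build_prompt(instruction: str) -> str:
    # Alpaca-style format described above: instruction header, then response header.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(build_prompt("Explain what a LoRA adapter is."), return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```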
Current upload: checkpoint from step 1200 of training.
## Benchmarks
- **wikitext2:** Coming soon...
- **ptb-new:** Coming soon...
- **c4-new:** Coming soon...