---
license: gpl-3.0
---
### 2-bit quantization and 128 groupsize for LLaMA 7B
This is a 2-bit quantized (group size 128) checkpoint of the Chinese instruction-tuned LoRA model built on LLaMA 7B from [Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna).
It consumes approximately 4 GB of GPU memory.
```text
"input":the mean of life is
"output":the mean of life is a good., and it’s not to be worth in your own homework for an individual who traveling on my back with me our localities that you can do some work at this point as well known by us online gaming sites are more than 10 years old when i was going out there around here we had been written about his time were over all sited down after being spent from most days while reading between two weeks since I would have gone before its age site;...
```
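Below is a minimal loading sketch for trying the checkpoint yourself. It assumes the weights are in a GPTQ-compatible layout that AutoGPTQ can read and that `Facico/llama7b_2bit_128g` is the repo id; neither is confirmed by this card, and an older `.pt`-style checkpoint may instead require the original GPTQ-for-LLaMA loading script.
```python
# A minimal inference sketch. Assumptions (not confirmed by this card):
# the weights are AutoGPTQ-readable and the repo id below is correct.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "Facico/llama7b_2bit_128g"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the 2-bit, group-size-128 quantized weights onto the GPU.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

prompt = "the mean of life is"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```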