# Koala: A Dialogue Model for Academic Research
This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original LLaMA 7B model.
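
The diff-based release can be understood as a simple element-wise merge. Below is a minimal sketch, assuming the published diffs are plain per-tensor deltas (`koala_weight - llama_weight`); the tensor names are made-up stand-ins for illustration, not the real checkpoint keys:

```python
# Merge a weight-diff release back onto the base weights: assuming each
# released tensor is an element-wise delta, recovery is base + diff for
# every matching parameter. Tensor names here are hypothetical.
import numpy as np

def apply_weight_diffs(base, diffs):
    """Return a new state dict with each diff added onto the base weight."""
    if set(base) != set(diffs):
        raise ValueError("base and diff checkpoints list different tensors")
    return {name: base[name] + diffs[name] for name in base}

# Toy two-tensor "checkpoints" standing in for full model state dicts.
base = {
    "embed.weight": np.array([1.0, 2.0]),
    "lm_head.weight": np.array([0.5, -0.5]),
}
diffs = {
    "embed.weight": np.array([0.1, -0.1]),
    "lm_head.weight": np.array([0.0, 1.0]),
}
merged = apply_weight_diffs(base, diffs)
```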

This version has then been quantized to 4-bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa

Quantization command was:
```
python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
```
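
The `--wbits 4 --groupsize 128` flags select 4-bit quantization with per-group scaling over blocks of 128 weights. For intuition, here is a simplified round-to-nearest sketch of group-wise 4-bit quantization; GPTQ itself goes further and compensates rounding error using second-order information, so this is an illustration of the storage format, not the actual algorithm:

```python
# Simplified group-wise 4-bit quantization (round-to-nearest): each group of
# `groupsize` weights gets its own scale and zero-point, and weights are
# stored as integers in 0..15. This mirrors the --wbits 4 --groupsize 128
# layout, but omits GPTQ's Hessian-based error compensation.
import numpy as np

def quantize_groupwise(w, wbits=4, groupsize=128):
    """Quantize a 1-D weight row in groups; return ints, scales, zero-points."""
    qmax = 2 ** wbits - 1                      # 4 bits -> levels 0..15
    w = np.asarray(w, dtype=np.float64)
    q = np.empty_like(w, dtype=np.int64)
    scales, zeros = [], []
    for start in range(0, w.size, groupsize):
        g = w[start:start + groupsize]
        lo, hi = g.min(), g.max()
        scale = (hi - lo) / qmax if hi > lo else 1.0
        q[start:start + groupsize] = np.round((g - lo) / scale)
        scales.append(scale)
        zeros.append(lo)
    return q, np.array(scales), np.array(zeros)

def dequantize_groupwise(q, scales, zeros, groupsize=128):
    """Reconstruct approximate float weights from the per-group parameters."""
    w = np.empty(q.size, dtype=np.float64)
    for i, start in enumerate(range(0, q.size, groupsize)):
        w[start:start + groupsize] = q[start:start + groupsize] * scales[i] + zeros[i]
    return w

rng = np.random.default_rng(0)
row = rng.normal(size=512)                     # one weight row: 4 groups of 128
q, s, z = quantize_groupwise(row)
approx = dequantize_groupwise(q, s, z)
max_err = np.abs(row - approx).max()           # bounded by ~scale/2 per group
```

Smaller group sizes track the weight distribution more closely (lower error) at the cost of storing more scales and zero-points.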

Check out the following links to learn more about the Berkeley Koala model:
* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)
* [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md)

## License

The model weights are intended for academic research only, subject to the
[model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.