jaspercatapang committed
Commit c091ae8
1 Parent(s): 4f7504f

Update README.md to include TheBloke's contributions

Files changed (1)
  1. README.md +6 -3
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 pipeline_tag: text-generation
-license: cc-by-sa-4.0
+license: llama2
 inference: false
 tags:
 - merge
@@ -14,7 +14,7 @@ datasets:
 Released August 11, 2023
 
 ## Model Description
-GodziLLa 2 70B is an experimental combination of various proprietary LoRAs from Maya Philippines and [Guanaco LLaMA 2 1K dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k), with LLaMA 2 70B. This model's primary purpose is to stress test the limitations of composite, instruction-following LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This model debuted in the leaderboard at rank #4 (August 17, 2023).
+GodziLLa 2 70B is an experimental combination of various proprietary LoRAs from Maya Philippines and [Guanaco LLaMA 2 1K dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k), with LLaMA 2 70B. This model's primary purpose is to stress test the limitations of composite, instruction-following LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This model debuted in the leaderboard at rank #4 (August 17, 2023) and operates under the Llama 2 license.
 ![Godzilla Happy GIF](https://i.pinimg.com/originals/81/3a/e0/813ae09a30f0bc44130cd2c834fe2eba.gif)
 
 ## Open LLM Leaderboard Metrics
@@ -93,6 +93,9 @@ python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/God
 When using GodziLLa 2 70B, kindly take note of the following:
 - The default precision is `fp32`, and the total file size that would be loaded onto the RAM/VRAM is around 275 GB. Consider using a lower precision (fp16, int8, int4) to save memory.
 - To further save on memory, set the `low_cpu_mem_usage` argument to True.
+- If you wish to use a quantized version of GodziLLa2-70B, you can access either TheBloke's [GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ) or [GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML) version of GodziLLa2-70B.
+  - [GodziLLa2-70B-GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ#description) is available in 4-bit and 3-bit
+  - [GodziLLa2-70B-GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML#provided-files) is available in 8-bit, 6-bit, 5-bit, 4-bit, 3-bit, and 2-bit
 
 ## Ethical Considerations
 When using GodziLLa 2 70B, it is important to consider the following ethical considerations:
@@ -114,4 +117,4 @@ For additional information or inquiries about GodziLLa 2 70B, please contact the
 GodziLLa 2 70B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
 
 ## Acknowledgments
-The development of GodziLLa 2 70B was made possible by Maya Philippines and the curation of the various proprietary datasets and creation of the different proprietary LoRA adapters. Special thanks to mlabonne for the Guanaco dataset found [here](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k).
+The development of GodziLLa 2 70B was made possible by Maya Philippines and the curation of the various proprietary datasets and creation of the different proprietary LoRA adapters. Special thanks to mlabonne for the Guanaco dataset found [here](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k). Last but not least, huge thanks to [TheBloke](https://huggingface.co/TheBloke) for the quantized models, making our model easily accessible to a wider community.
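
The precision note in the diff (fp32 ≈ 275 GB; fp16, int8, int4 to save memory) can be made concrete with a quick back-of-the-envelope calculation. The sketch below is plain Python with the parameter count approximated as 70 billion; the numbers are estimates of weight storage only, not activation or overhead memory, and the `from_pretrained` call in the comment is an illustrative (not executed) mapping of the README's flags.

```python
# Rough memory footprint of a 70B-parameter model at various precisions.
# fp32 = 32 bits/param, fp16 = 16, int8 = 8, int4 = 4.
PARAMS = 70e9  # approximate parameter count


def footprint_gb(bits_per_param: float) -> float:
    """Approximate RAM/VRAM needed to hold the weights, in GB."""
    return PARAMS * bits_per_param / 8 / 1e9


for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{footprint_gb(bits):.0f} GB")

# To actually load at reduced precision with Hugging Face transformers
# (not executed here -- it would download the full checkpoint), the
# README's flags map to roughly:
#   AutoModelForCausalLM.from_pretrained(
#       "MayaPH/GodziLLa2-70B",
#       torch_dtype=torch.float16,   # halves the fp32 footprint
#       low_cpu_mem_usage=True,      # avoid a second full copy in CPU RAM
#   )
```

The fp32 estimate (~280 GB) lines up with the ~275 GB figure in the README, and halving the bits per parameter halves the footprint, which is why the diff recommends fp16 or the quantized GPTQ/GGML variants.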