This repository contains quantized variants of the Gemma language model developed by Google.

* **Model source:** [Google / Gemma](https://ai.google.dev/gemma/terms)
* **Quantized by:** c516a
					
						
These quantized models are:

* Provided under the same terms as the original Google Gemma models.
* Intended only for **non-commercial use**, **research**, and **experimentation**.
* Redistributed without modification to the underlying model weights, except for the **file format (GGUF)** and **quantization level**.
					
						
By using this repository or its contents, you agree to:

* Comply with the [Gemma License Terms](https://ai.google.dev/gemma/terms),
* Not use the model or its derivatives for any **commercial purposes** without a separate license from Google,
* Acknowledge Google as the original model creator.
					
						
> **Disclaimer:** This repository is not affiliated with Google.
					
						
All quantized model files are hosted externally for convenience. You can download them from:

**[https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it](https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it)**
					
						
Or clone the repository directly:

`git clone https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it.git`
					
						
Each `.gguf` file has a corresponding `.txt` file that contains the same download URL for clarity.

Example:

* `codegemma-7b-it.Q4_K_M.gguf` (binary file)
* `codegemma-7b-it.Q4_K_M.gguf.txt` contains:
					
						
```
Download: https://modelbakery.nincs.net/c516a/projects/quantized-codegemma-7b-it/codegemma-7b-it.Q4_K_M.gguf
```
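Given that convention, fetching a file can be scripted from its companion `.txt`. A minimal sketch, assuming the `Download: <url>` line format shown above; `get_gguf_url` is a hypothetical helper name, not something shipped with this repository:

```shell
# Hypothetical helper: extract the download URL from a companion .txt
# file, which holds a single line of the form "Download: <url>".
get_gguf_url() {
  sed -n 's/^Download: //p' "$1"
}

# Demo: write a sample companion file, then resolve its URL.
printf 'Download: https://modelbakery.nincs.net/c516a/projects/quantized-codegemma-7b-it/codegemma-7b-it.Q4_K_M.gguf\n' \
  > example.gguf.txt
get_gguf_url example.gguf.txt
rm -f example.gguf.txt
```

The resolved URL can then be passed to a downloader that supports resuming, e.g. `curl -L -C - -O "$url"`, which matters for multi-gigabyte `.gguf` files.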
					
						
These models were quantized locally with `llama.cpp` and tested on a machine with an RTX 3050 GPU, a Ryzen 9 5950X CPU, and 64 GB of RAM.
					
						
If you find them useful, feel free to star the project or fork it to share improvements!
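Since the files above target `llama.cpp`, here is a minimal sketch of assembling a CLI invocation for one of them. `build_llama_cmd` is a hypothetical helper that only builds the command string; the binary name `llama-cli` and the `-ngl` GPU-offload flag follow current `llama.cpp` builds, so adjust both to your installation:

```shell
# Hypothetical helper: assemble a llama.cpp CLI invocation for one of
# the quantized files. -ngl sets how many layers are offloaded to the
# GPU (0 = CPU only); tune it to your VRAM (e.g. low values on an RTX 3050).
build_llama_cmd() {
  model="$1"
  ngl="${2:-0}"
  printf './llama-cli -m %s -ngl %s\n' "$model" "$ngl"
}

# Prints the command to run from your llama.cpp build directory;
# append -p "your prompt" for a one-shot completion.
build_llama_cmd codegemma-7b-it.Q4_K_M.gguf 12
```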