eralFlare committed
Commit
e5c011d
1 Parent(s): 6843232

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -19,7 +19,7 @@ license_link: https://ai.google.dev/gemma/terms
 
 # Gemma Model Card
 This model card is copied from the original [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) with edits to the code snippets on how to run this auto-gptq quantized version of the model.
- This auto-gptq quantized version of the model had only been tested to work on cuda GPU. This quantized model utilise approximately 2.6GB of VRAM.
+ This auto-gptq quantized version of the model has only been tested on CUDA GPUs and utilises approximately 2.6 GB of VRAM.
 
 **Model Page**: [Gemma](https://ai.google.dev/gemma/docs)