MaziyarPanahi committed on
Commit 1831c2f
1 parent: b3525fa

Upload folder using huggingface_hub (#1)


- 1575aed80932503062d63c97db7b8c508fec157caeb9510d12fba16205552eb9 (c0908dc20d6e3c91abda7dcd3b84890cabe76783)
- cb17e85d5ab528a85293a20eedcdcc76dcb576a100ef23d080ac257c6e23197e (db49d2546483a672b7a55c809576c18ed0160a34)
- 00930c810dae5ee21d800001c8eb54d596e44b44c209ac4bd2555816d6563657 (b04600308b11c9728d25620c55872c1d95f1454a)
- 5c5d3d9826daea4f735396576486dcd2d3a51ccb9fee44374656729c87722a78 (bac9332ed54ae5b7eabcae6d771250ae9af8a463)
- 6f35c4732f45b9171ca358d94bcc86339be441d7e69b591b9976d9c45caaf2e6 (36f0f5e7bddffde363df6f042c07d2e1d00ad6b6)
- a67683eece004931bf45bba27ca1e8fc6947029070155e9924bfaf1494fb09c8 (588c83e8b95b8ce8f90ef7efd3fa362a8548cc23)
- 1f894e5e2276fee462f3da54dd44ffc789bda5d8ca578813f57532d93a914af8 (51bb2b105117f2eb9ca863b81af16a29e3888b43)
- e2f6ddbd3ea40b62d4906898b7f98da97be238fe914f41255dc114e2dd19b66b (3fae9aee7abd1fe95463427b3b78043bfcc79c60)
- db4793720fc115fbd12b222a721bd8858f2bc82066ed0362ae230ee5e550222f (51e854db3021ac0c6867613eb57d3cec360c4380)
- 511e6e3ce16bfd58929a106dc5c332c3939c3e9c20f095510de0240496aa0ef0 (55843a460afa7bb7d97ff4bb6e0cf99cc3c08966)
- 4dcdbcba887dc20544b7d0302992b42bf8622ef0a52aba2b6722712e6b41a719 (01b4ca1f68b1b160146b584de4f809acc6003344)
- 48301450a17e467b02d12a71fdd811af6b6802e7c744bc444a427caf81bfab6c (0fc0f1305d40d9caf3a010d55e46c48d0a6cb4d6)
- 2db7936bd367b61ceedce20f6ac5c574296be209789d4d974afae75a588e6ccb (6d3425d124ab7200daa9e6f0a4b4ed557d5a3bf2)
- a584e1c0177f01d08a02efb2d8ab02ee0fdfe6e0e1a4166ebdb0f4c87b5d9487 (b3998f6b2480d074b64bda533b59566a8c0b7c5b)
- 7014b1d6e09be24d30b8f1b2f4022537a201c18a177685712c460ab7b69027f0 (c0017723c66d2564866b8536f36d000b580fa981)
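
The commit message states that the files were pushed with huggingface_hub. Below is a minimal sketch of how such a folder upload can be done with huggingface_hub's `upload_folder`; the local folder path, token setup, and commit message here are illustrative assumptions, not details taken from this commit.

```python
# Minimal sketch: pushing a local folder of GGUF files to the Hub.
# Assumes `pip install huggingface_hub` and a write token (e.g. via `huggingface-cli login`).
from huggingface_hub import HfApi

api = HfApi()

# The local directory name is illustrative; repo_id matches this repository.
api.upload_folder(
    folder_path="./gemma-2-2b-it-GGUF",          # local directory containing the .gguf files
    repo_id="MaziyarPanahi/gemma-2-2b-it-GGUF",  # destination model repo on the Hub
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```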

.gitattributes CHANGED
@@ -33,3 +33,20 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,45 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ model_name: gemma-2-2b-it-GGUF
+ base_model: google/gemma-2-2b-it
+ inference: false
+ model_creator: google
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/gemma-2-2b-it-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2-2b-it-GGUF)
+ - Model creator: [google](https://huggingface.co/google)
+ - Original model: [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it)
+
+ ## Description
+ [MaziyarPanahi/gemma-2-2b-it-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2-2b-it-GGUF) contains GGUF format model files for [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
+
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers had not been updated in a long time and does not support many recent models.
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
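
Since llama-cpp-python appears in the client list above, here is a minimal sketch of loading one of the quantized files from this repository with it. The chosen quant (Q4_K_M), the context size, and the prompt are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: running one of the quantized files with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; the quant choice is an example.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant from this repo (Q4_K_M picked here as an example).
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/gemma-2-2b-it-GGUF",
    filename="gemma-2-2b-it.Q4_K_M.gguf",
)

# Context size is an arbitrary example value.
llm = Llama(model_path=model_path, n_ctx=4096)

# Chat-style call; recent llama-cpp-python versions can pick up the chat
# template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
)
print(out["choices"][0]["message"]["content"])
```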
gemma-2-2b-it-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0b75294e845fa2f48bf20cc552afa00007ad7dd17b339aec10e1e7eaeb45aab
+ size 2375548
gemma-2-2b-it.IQ1_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f50b47af0a8fe5445786e99650091d592426bd4a5f7bf0cbddde45585e570fae
+ size 873797472
gemma-2-2b-it.IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8518180ffebec3b94e1499e482a993fcbc336d4d0f11a0184ff07b2e4155881
+ size 832159584
gemma-2-2b-it.IQ2_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:306ff7579b44c076431207a7ab1de450d613e40eac2dd86a2d070ba997541272
+ size 1002544992
gemma-2-2b-it.IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7f268d1cc4f8cd80839adef4d44785740fb7c7717a1f2c7a742c9afad233c10
+ size 1314211680
gemma-2-2b-it.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c98c9ef365ea21deae4c6b4e5eee0935fa0631c2705759dc40c577cd993807f
+ size 1566250848
gemma-2-2b-it.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bbd710a998545f4fb6be925dae2579b3889c69480f9f32e6ed20a5fa7065c18
+ size 1229829984
gemma-2-2b-it.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71dcbf085f8ea4f4fc0c480ba41ff2bf6375758314aa1ca8d64899edfee43fae
+ size 1550436192
gemma-2-2b-it.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0993ebb8ba692d123ec12dbd82c3bed9bcae69297326bf7177dd34c3cafd1d88
+ size 1461667680
gemma-2-2b-it.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b39c54d3ece929dc1fe6376a8a4d31d561fa4af081fa38cd70165b68a90c9602
+ size 1360660320
gemma-2-2b-it.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:813642fb2b104172dc12e63fffa044985a1f6f650ba1bfdae480c4173f8bd8c8
+ size 1708582752
gemma-2-2b-it.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb9f9e8cb62dc1d888b9a3a6eb2f5168e40f6a8be382bc17c2100d541da6b75f
+ size 1638651744
gemma-2-2b-it.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bf91ff1500811699f97f31c76d1fab250f01262fc6af9e79fde1222deb2bf96
+ size 1923278688
gemma-2-2b-it.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9176bfad056ed00ef9198da62226057ac2b9649fd98ccf3b4de6c1735845c72
+ size 1882543968
gemma-2-2b-it.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15c159cdcaee8914c26362205a8ee31fa11b5dc01eff2b91bd859c4ad811b435
+ size 2151393120
gemma-2-2b-it.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c17900fd734960cb4400342ca8fef1bded754eef308fac22dfe298f7c644de8c
+ size 2784495456
gemma-2-2b-it.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3ecb284df1ab1f6735b5b03f2655d9120e614bc7d237cfc6ffa7da3f18f08ea
+ size 5235213952