Tags: Text Generation, GGUF, English, quantized, imatrix, quantization, imat, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit
legraphista committed on
Commit
2c3c93b
1 Parent(s): f66a5d1

Upload dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf with huggingface_hub

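The commit message states the file was pushed with the huggingface_hub client. Below is a minimal sketch of how such an upload could be performed; the repo id, local path, and commit message shown are assumptions for illustration and are not taken from this commit.

# Sketch: upload a GGUF quant to the Hub with huggingface_hub (assumed repo id and paths).
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf",   # assumed local path
    path_in_repo="dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf",
    repo_id="legraphista/dolphin-2.9.2-Phi-3-Medium-abliterated-IMat-GGUF",  # assumed repo id
    commit_message="Upload dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf with huggingface_hub",
)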
.gitattributes CHANGED
@@ -56,3 +56,4 @@ dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
 dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
 dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
 dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
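Each entry above marks one specific .gguf file as Git LFS-tracked (filter=lfs diff=lfs merge=lfs) and as binary (-text). The sketch below shows one way to check a filename against such entries; the fnmatch-based matching is a simplification of Git's gitignore-style rules, which is sufficient here because these entries are literal filenames, and the checked path is illustrative.

# Sketch: report whether a path is covered by a filter=lfs entry in .gitattributes.
from fnmatch import fnmatch

def lfs_tracked(path, gitattributes_lines):
    """True if `path` matches an entry whose attributes include filter=lfs."""
    for line in gitattributes_lines:
        parts = line.split()
        if len(parts) >= 2 and "filter=lfs" in parts[1:] and fnmatch(path, parts[0]):
            return True
    return False

with open(".gitattributes") as f:
    entries = [l.strip() for l in f if l.strip() and not l.startswith("#")]
print(lfs_tracked("dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf", entries))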
dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85db22d2ee8f971235a8314516793f9ba7f30b238918ec78366e274574f326f6
+size 3795635648
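What is committed here is the Git LFS pointer, not the roughly 3.8 GB model file itself: the pointer records the LFS spec version, the SHA-256 of the real file, and its size in bytes. A minimal sketch of verifying a downloaded copy against this pointer follows; the local filename is an assumption.

# Sketch: verify a downloaded GGUF against the LFS pointer above (assumed local path).
import hashlib
import os

EXPECTED_OID = "85db22d2ee8f971235a8314516793f9ba7f30b238918ec78366e274574f326f6"
EXPECTED_SIZE = 3795635648  # bytes, from the pointer

path = "dolphin-2.9.2-Phi-3-Medium-abliterated.IQ2_XXS.gguf"  # assumed download location

# Cheap check first (file size), then a streamed SHA-256 so the 3.8 GB file is never fully in memory.
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("OK: file matches the LFS pointer")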