Text Generation
Transformers
GGUF
PyTorch
Safetensors
mistral
quantized
2-bit
3-bit
4-bit precision
5-bit
6-bit
8-bit precision
GGUF
llama
en
dataset:HuggingFaceH4/ultrafeedback_binarized
dataset:allenai/tulu-v2-sft-mixture
arxiv:2305.18290
arxiv:2311.10702
Inference Endpoints
has_space
text-generation-inference
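The tags above list GGUF quantizations of this model from 2-bit to 8-bit precision. As a rough illustration of how one of these quantized files is typically consumed, here is a minimal sketch using the llama-cpp-python bindings; the library choice, file path, context size, and chat template are assumptions, not something this repository specifies.

```python
# Minimal sketch, assuming llama-cpp-python and a locally downloaded quant file.
from llama_cpp import Llama

llm = Llama(
    model_path="tulu-2-dpo-13b.Q4_K_M.gguf",  # any of the listed quants works
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=0,    # >0 offloads layers to GPU if the build supports it
)

out = llm(
    "<|user|>\nWhat is DPO?\n<|assistant|>\n",  # Tulu-style chat template (assumed)
    max_tokens=128,
    stop=["<|user|>"],
)
print(out["choices"][0]["text"])
```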
MaziyarPanahi committed
Commit 0ae1934
Parent(s): aba998f
d4cce098d99731fa3034cfc359e0b135b62a97249b475c5043dfa4d7e122173b

Files changed:
- .gitattributes +1 -0
- tulu-2-dpo-13b.Q4_K_M.gguf +3 -0
.gitattributes
CHANGED
@@ -37,3 +37,4 @@ tulu-2-dpo-13b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+tulu-2-dpo-13b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
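The single added line tells Git to route the new Q4_K_M file through Git LFS (and to treat it as binary) instead of storing it as a regular blob. As a hedged sketch of what such a rule means in practice, the hypothetical helper below reads .gitattributes and reports whether a path falls under an LFS filter; fnmatch is only an approximation of Git's pattern syntax, which is sufficient for the literal filenames used here.

```python
# Hypothetical helper (not part of this repo): check whether .gitattributes
# routes a given path through Git LFS via a "filter=lfs" rule.
from fnmatch import fnmatch

def lfs_tracked(path: str, gitattributes: str = ".gitattributes") -> bool:
    with open(gitattributes, encoding="utf-8") as fh:
        for line in fh:
            parts = line.split()
            # A rule looks like: "<pattern> filter=lfs diff=lfs merge=lfs -text"
            if len(parts) >= 2 and "filter=lfs" in parts[1:]:
                if fnmatch(path, parts[0]):
                    return True
    return False

print(lfs_tracked("tulu-2-dpo-13b.Q4_K_M.gguf"))  # True after this commit
```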
tulu-2-dpo-13b.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef4d4b25adb5c74648fcf19bbbd0adc753e58add7ce05bda9ec11fc22c7f1a5c
+size 7865956320
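The added file is a Git LFS pointer rather than the model itself: "version" names the pointer spec, "oid" is the SHA-256 of the real GGUF payload, and "size" is its length in bytes (7,865,956,320, roughly 7.9 GB). Below is a minimal sketch of fetching the actual file and checking it against these pointer fields; the huggingface_hub client and the repo id "MaziyarPanahi/tulu-2-dpo-13b-GGUF" are assumptions inferred from the filenames, not stated in this commit.

```python
# Minimal sketch: download the new quant and verify it against the LFS pointer.
import hashlib
import os

from huggingface_hub import hf_hub_download

EXPECTED_OID = "ef4d4b25adb5c74648fcf19bbbd0adc753e58add7ce05bda9ec11fc22c7f1a5c"
EXPECTED_SIZE = 7865956320  # bytes, from the pointer above

path = hf_hub_download(
    repo_id="MaziyarPanahi/tulu-2-dpo-13b-GGUF",  # assumed repo id
    filename="tulu-2-dpo-13b.Q4_K_M.gguf",
)

sha = hashlib.sha256()
with open(path, "rb") as fh:
    for chunk in iter(lambda: fh.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE
assert sha.hexdigest() == EXPECTED_OID
```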