Spaces: Running on A10G
Accessing own private repos (2 comments) · #141 opened 17 days ago by themex1380
Why can't I log in? (3 comments) · #139 opened 21 days ago by safe049
If generating model card READMEs, consider adding support for these extra authorship parameters (2 comments) · #137 opened 27 days ago by mofosyne
Add F16 and BF16 quantization (1 comment) · #129 opened about 2 months ago by andito
Update README for card generation (4 comments) · #128 opened 2 months ago by ariG23498
[bug] Asymmetric T5 models fail to quantize · #126 opened 3 months ago by pszemraj
[Bug] Extra files with related names were uploaded to the resulting repository · #125 opened 3 months ago by Felladrin
Issue converting a PEFT LoRA fine-tuned model to GGUF (2 comments) · #124 opened 3 months ago by AdnanRiaz107
Issue converting nvidia/NV-Embed-v2 to GGUF · #123 opened 3 months ago by redshiva
Issue converting the FLUX.1-dev model to GGUF format (3 comments) · #122 opened 3 months ago by cbrescia
Add Llama 3.1 license · #121 opened 3 months ago by jxtngx
Add an option to put all quantization variants in the same repo · #120 opened 3 months ago by A2va
Phi-3.5-MoE-instruct (6 comments) · #117 opened 4 months ago by goodasdgood
Fails to quantize T5 (XL and XXL) models (1 comment) · #116 opened 4 months ago by girishponkiya
Arm-optimized quants (1 comment) · #113 opened 4 months ago by SaisExperiments
DeepseekForCausalLM is not supported (1 comment) · #112 opened 4 months ago by nanowell
Please update the conversion script: llama.cpp added support for the Nemotron and Minitron architectures (3 comments) · #111 opened 4 months ago by NikolayKozloff
Enable the created repo name to omit the quantization type · #110 opened 4 months ago by A2va
I think I broke the Space quantizing a 4-bit model with Q4L · #106 opened 5 months ago by hellork
Authorship metadata support was added to the converter script; you may want to add the ability to set metadata overrides (3 comments) · #104 opened 5 months ago by mofosyne
Please support this method: (7 comments) · #96 opened 6 months ago by ZeroWw
Support Q2 imatrix quants · #95 opened 6 months ago by Dampfinchen
Maybe impose a max model size? (3 comments) · #33 opened 9 months ago by pcuenq