Lance
clevnumb
AI & ML interests
None yet
Recent Activity
New activity about 2 months ago in unsloth/DeepSeek-R1-Distill-Qwen-7B-GGUF: Error loading on lm-studio
New activity 3 months ago in TheDrummer/Cydonia-22B-v1.3: What context size when using a 24GB VRAM card (4090) is best?
Organizations
None yet
clevnumb's activity
How do you circumvent the censorship nonsense? (red heart generated instead)
#4 opened 19 days ago by clevnumb
Error loading on lm-studio
4
#1 opened 3 months ago by victor-Des
What context size when using a 24GB VRAM card (4090) is best?
3
#1 opened 4 months ago by clevnumb
Which quant of this model will fit entirely in VRAM on a single 24GB video card (4090)?
3
#1 opened 8 months ago by clevnumb
My alternate quantizations.
5
#3 opened 9 months ago by ZeroWw
Not loading in latest Tabby (with SillyTavern) - ERROR
2
#2 opened 8 months ago by clevnumb
What quant should I use to run this on a single 24GB video card (4090) in a PC?
1
#2 opened 9 months ago by clevnumb
Which quant do I use to fit on a single 24GB video card on a PC running Windows 11? (4090)
3
#3 opened 9 months ago by clevnumb
Single 4090 using Oobabooga? (Windows 11, 96GB of RAM)
1
#1 opened about 1 year ago by clevnumb
How do I load this in Oobabooga? (text-generation-webui)
#1 opened 12 months ago by clevnumb
Will any 120B model currently fit on a single 24GB VRAM card through any app I can run on a PC? (i.e., a 4090)
15
#1 opened about 1 year ago by clevnumb
Are there safetensor files for the models?
7
#37 opened about 1 year ago by wonderflex
Will this fit on a single 24GB video card (4090)?
1
#2 opened about 1 year ago by clevnumb
Any way to speed up generation on a Windows 11 PC, using a single 24GB card (4090), with Text-Generation-WebUI?
2
#2 opened about 1 year ago by clevnumb
Any chance anyone is quantizing this into a 2.4bpw EXL2 version for those of us with a single 24GB video card?
1
#30 opened about 1 year ago by clevnumb
Glacially slow on an RTX 4090?
5
#1 opened about 1 year ago by clevnumb
Which of these 34B model BPWs will fit in a single 24GB card's (4090) VRAM?
9
#1 opened over 1 year ago by clevnumb
RTX 4090 using Text-Generation-WebUI / Oobabooga FAILS to load this model with ExLlamaV2 (or any method?)
11
#1 opened over 1 year ago by clevnumb