Grimulkan (grimulkan)
AI & ML interests: None yet
Recent Activity
Updated a model about 1 month ago: grimulkan/Llama-3.2-11B-Vision-Instruct-Hermes-3-lorablated
Updated a model about 1 month ago: grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge
New activity about 1 month ago: grimulkan/aurelian-v0.5-70b-rope8-32K-2.4bpw_h6_exl2
Organizations: None yet
grimulkan's activity
10k you say... (1) · #1 opened about 1 month ago by Doomed1986
max_position_embedding (1) · #5 opened 7 months ago by the-hir0
EXL quantization at 4.0bpw? (3) · #1 opened 9 months ago by lazydog22
Further fine-tuning? (2) · #4 opened 9 months ago by jspr
Comparison with 0.1 (13) · #3 opened 10 months ago by ChuckMcSneed
Benchmarks! (3) · #2 opened 10 months ago by ChuckMcSneed
Arch speculations (6) · #3 opened 10 months ago by grimulkan
More quants (1) · #2 opened 10 months ago by aikitoria
Observations+benchmarks (1) · #1 opened 10 months ago by ChuckMcSneed
Merging advice? (16) · #2 opened 10 months ago by sophosympatheia
2 bits GGUF SOTA quants? (31) · #2 opened 10 months ago by Nexesenex
Quantization calibration dataset (2) · #1 opened 10 months ago by Amajiro
Devolution into semi-nonsensical long words (6) · #1 opened 10 months ago by jspr
Feedback and collaboration. (37) · #1 opened over 1 year ago by Squish42
You made gold! (3) · #1 opened 11 months ago by ChuckMcSneed
Repetition issue (1) · #1 opened 12 months ago by lazyDataScientist
GGUF parameters suggestion (1) · #1 opened 11 months ago by lazyDataScientist
Space after [/INST] (7) · #2 opened about 1 year ago by Satya93