Doctor Shotgun (Doctor-Shotgun)
AI & ML interests
ML hobbyist with an interest in open source generative AI
Recent Activity
New activity · 2 days ago · allenai/Llama-3.1-Tulu-3-70B: Reason behind not using special tokens in the prompt format?
Updated a model · 16 days ago · Doctor-Shotgun/SDXL-PD6-Character-Lora-1024-Compass
Updated a collection · 16 days ago · Stable Diffusion Things
Doctor-Shotgun's activity
Reason behind not using special tokens in the prompt format? · 1 · #2 opened 2 days ago by Doctor-Shotgun
Can't Generate Output · 3 · #2 opened about 1 month ago by Karras10
Maybe a bug with magnum-v4-72b (gguf, magnum-v4-72b.i1-Q4_K_S) · 2 · #4 opened about 1 month ago by fsmedberg
Impressive as always. Qwen2.5 32B? · 6 · #1 opened about 1 month ago by MRGRD56
Hi, An unusual problem. · 2 · #3 opened about 1 month ago by Puchacz19
settings? · 3 · #5 opened about 2 months ago by Fabian93
Odd tokens output by 3.0bpw quant. · 15 · #1 opened 3 months ago by Feorn
Update LICENSE · #2 opened 3 months ago by Doctor-Shotgun
Adding `safetensors` variant of this model · #2 opened 8 months ago by SFconvertbot
Higher perplexity than Meta-Llama-3-70B-Instruct? Meta-Llama-3-8B-Instruct-abliterated was lower. · 2 · #1 opened 6 months ago by matatonic
Plans for doing the 70B as well? · 1 · #2 opened 7 months ago by Doctor-Shotgun
Method used to extend the context? · 7 · #1 opened 7 months ago by Doctor-Shotgun
Produces garbage · 3 · #1 opened 7 months ago by catid
Correct LICENSE, see details · #3 opened 9 months ago by elinas
License? · 1 · #1 opened 10 months ago by Kquant03
Recommended Chat Setup? · 1 · #2 opened 10 months ago by orick96
Interesting fate of my quant · 9 · #1 opened 10 months ago by mishima
Error · 9 · #1 opened 11 months ago by streamerbtw1002
Fix config.json · 3 · #6 opened 10 months ago by Doctor-Shotgun