Bartowski (bartowski)
bartowski's activity
Having bad results, how should i use this model? · 4 · #5 opened 1 day ago by RamoreRemora
Interview request: genAI evaluation & documentation · 3 · #14 opened 4 days ago by meggymuggy
Low quants don't seem to work (no <reflection> tags) · 6 · #3 opened 2 days ago by MrHillsss
So, is this based on OG Llama 3 or Llama 3.1? · 2 · #2 opened 2 days ago by XelotX
The original model was updated to fix a bug. Is this repo using the updated version? · 2 · #4 opened 2 days ago by RamoreRemora
does metadata has proper prompt ? · 1 · #1 opened 2 days ago by gopi87
GGUF quantized versions? · 8 · #4 opened 3 days ago by markne
Update config.json · 16 · #6 opened 3 days ago by bullerwins
Bartowski! Let's see how your imatrix differs from mine. 😋 · 5 · #2 opened 3 days ago by Joseph717171
Missing <|im_start|> from tokenizer_config.json · 3 · #3 opened 4 days ago by bartowski
Add <|im_start|> as a special token to tokenizer_config.json · 3 · #4 opened 4 days ago by bartowski
llama.cpp CPU backend crashes · 1 · #1 opened 4 days ago by mtasic85
Tiger-Gemma-9B-v2 · 1 · #2 opened 4 days ago by EloyOn
ARM speedup. · 2 · #1 opened 4 days ago by Midgardsormr
Quantization help? · 2 · #4 opened 5 days ago by Daemontatox
GGUF Generation Script · 3 · #3 opened 9 days ago by RonanMcGovern
Q2_K_L vs IQ3_M shows favorable results for Q2_K_L · 1 · #1 opened 5 days ago by anxcat
The refusal rate is very high, it kind of look like the original censored model · 4 · #1 opened 10 days ago by sneedingface
How do I GGUF? · 2 · #1 opened 7 days ago by TheDrummer
Q4_0_4_4 · 9 · #2 opened 13 days ago by Yuma42
Requant request · 1 · #1 opened 8 days ago by TheDrummer
Lookin for quants · 5 · #1 opened 9 days ago by lemon07r
Where to get the matching tokenizer. · 1 · #12 opened 9 days ago by CharlaDev
USER_TOKEN and START_OF_TURN_TOKEN marked not-special? · 3 · #2 opened 9 days ago by bartowski
Quantization re-hosting · 1 · #1 opened 9 days ago by bartowski
config.json is missing · 7 · #6 opened 11 days ago by PierreCarceller
Upload Mistral-Nemo-Instruct-2407-Q4_0.gguf · 6 · #5 opened 19 days ago by venketh
New safetensors, new model? · 2 · #1 opened 12 days ago by bartowski
Missing Tensors in Q5_K_S + Q5_K_M · 12 · #8 opened about 1 month ago by Digital-At-Work-Christopher
bartowski/Meta-Llama-3.1-8B-Instruct-GGUF does not appear to have a file named config.json. · 1 · #11 opened 13 days ago by apapoutsis
What's it... for? · 2 · #1 opened 13 days ago by rdtfddgrffdgfdghfghdfujgdhgsf
Thanks and questions · 9 · #3 opened 14 days ago by Kukedlc
prompt template is wrong · 6 · #2 opened 17 days ago by mirek190
Ready for quanting? · 1 · #3 opened 16 days ago by bartowski
https://aitorrent.zerroug.de/bartowski-phi-3-5-mini-instruct-gguf-torrent/ · #1 opened 18 days ago by zerroug
https://aitorrent.zerroug.de/bartowski-replete-coder-v2-llama-3-1-8b-gguf-torrent/ · #1 opened 18 days ago by zerroug
Model updated · 3 · #1 opened 18 days ago by nina99
Thank you for quants! 🤗 · 7 · #1 opened 3 months ago by SicariusSicariiStuff
wrong number of tensors; expected 292, got 291 · 19 · #1 opened about 1 month ago by Psykokwak
Function Calling Support? · 1 · #9 opened 18 days ago by poormansblackburne
https://aitorrent.zerroug.de/bartowski-minitron-4b-base-gguf-torrent/ · #1 opened 19 days ago by zerroug
https://aitorrent.zerroug.de/bartowski-smollm-1-7b-instruct-v0-2-gguf-torrent/ · #1 opened 19 days ago by zerroug
https://aitorrent.zerroug.de/bartowski-pantheon-rp-1-6-12b-nemo-gguf-torrent/ · #1 opened 19 days ago by zerroug
https://aitorrent.zerroug.de/bartowski-hermes-3-llama-3-1-70b-gguf-torrent/ · 1 · #1 opened 19 days ago by zerroug
https://aitorrent.zerroug.de/bartowski-llama-3-1-storm-8b-gguf-torrent/ · #1 opened 19 days ago by zerroug
Prompt · 1 · #1 opened 21 days ago by ClosedCharacter
Can't run · 2 · #1 opened 22 days ago by bartowski
which quaint to I use to fit on a single 24GB video card on a PC Running Windows 11? (4090) · 3 · #3 opened 2 months ago by clevnumb
IQ3_M quant busted, IQ3_XXS okay · 10 · #1 opened about 1 month ago by anxcat
Other models updates and ggufs · 17 · #1 opened 27 days ago by Rybens
https://aitorrent.zerroug.de/bartowski-hermes-3-llama-3-1-8b-gguf/ · #1 opened 23 days ago by zerroug
unable to load model · 5 · #6 opened 24 days ago by lfjmgs
Exl2 quants? · 2 · #1 opened 25 days ago by ElvisM
Whats the prompt template? · 4 · #2 opened 25 days ago by snakeying
Reproducibility · 3 · #3 opened 26 days ago by fedric95
Correct promt format is crucial. · 3 · #5 opened 26 days ago by urtuuuu
Virus/Vulnerabilities · 6 · #7 opened about 1 month ago by frbackup
llama3.1 gguf format · 3 · #95 opened 28 days ago by davidomars
Base_model showing up as finetune · 4 · #1 opened 30 days ago by bartowski