Bartowski (bartowski)
AI & ML interests: None yet
bartowski's activity
Prompt format (7 replies) · #1 opened 2 days ago by supportend
"This model also supports the following FIM tokens" (5 replies) · #1 opened 1 day ago by catarino
pytorch_model files only 33MB? (1 reply) · #1 opened 1 day ago by bartowski
Update tokenizer to include [SUFFIX] and [PREFIX] tokens? (1 reply) · #1 opened 1 day ago by ShiveringSpine
Update README.md (2 replies) · #1 opened 1 day ago by fblgit
Can't load model in LlamaCpp (7 replies) · #4 opened 3 days ago by ThoilGoyang
exl2 vs GGUF at 8bit (1 reply) · #2 opened 3 days ago by Samvanity
On some prompts, medium is worse than mini & small? (6 replies) · #2 opened 8 days ago by urtuuuu
core dumps when attempting falcon-11B-Q6_K.gguf (1 reply) · #1 opened 4 days ago by LaferriereJC
It seems the quants were created before the BPE pre-tokenizer fix? (1 reply) · #1 opened 5 days ago by skruse
More quantization variants (11 replies) · #1 opened 15 days ago by Yuma42
Prompt format question (2 replies) · #1 opened 5 days ago by urtuuuu
Chat template · #4 opened 6 days ago by bartowski
gguf model (2 replies) · #2 opened 7 days ago by edwardDali
Can't use response_format in llama-cpp-python (1 reply) · #3 opened 7 days ago by svjack
Why does it generate nothing but garbage? (4 replies) · #1 opened 7 days ago by newsletter
Another <EOS_TOKEN> issue (1 reply) · #2 opened 7 days ago by alexcardo
Q1 Model (4 replies) · #1 opened 8 days ago by neelkalpa
Update README.md (2 replies) · #1 opened 8 days ago by Joseph717171
I think this is actually just 0.1 (3 replies) · #1 opened 8 days ago by bartowski
no system message? (6 replies) · #14 opened 8 days ago by mclassHF2023
v3 tokenizer (5 replies) · #1 opened 9 days ago by ayyylol
Add quant links · #7 opened 9 days ago by bartowski
Chat template - use with Ollama? (2 replies) · #1 opened 11 days ago by smcleod
Context size? (2 replies) · #1 opened 11 days ago by dasChronos1
GGUF and exl2 quants for anyone who wants (3 replies) · #2 opened 17 days ago by bartowski
OSError: [Errno 36] File name too long (3 replies) · #7 opened 13 days ago by spoilvoid
Only Hallucinates (1 reply) · #3 opened 14 days ago by LaughterOnWater
Prompt format (2 replies) · #2 opened 14 days ago by bartowski
Request to quantize jondurbin/airoboros-dpo-110b-3.3 (3 replies) · #1 opened 15 days ago by Eilian
Why was the main model of this removed from HF? (2 replies) · #3 opened 14 days ago by subbur
Model is failing to load in text web UI or Koboldcpp (4 replies) · #2 opened 15 days ago by jlopez-dl
Prompt format correct? (2 replies) · #1 opened 16 days ago by Yuma42
Prompt Template example (3 replies) · #7 opened about 1 month ago by 101rakibulhasan
Which one is better? deepseek-coder-7b-ins-v1.5 or CodeQwen1.5-7B-Chat? (1 reply) · #2 opened 20 days ago by qwertyjack
This model has the same metadata problem as the BPE fix model (2 replies) · #2 opened 16 days ago by yehiaserag
Update README.md (1 reply) · #12 opened 17 days ago by DomsP
Update README.md (1 reply) · #1 opened 18 days ago by karaketir16
Can you make CodeQwen1.5-7B-Chat IQ4_XS version? (3 replies) · #6 opened 18 days ago by Dotoro22
Exl quants pls (1 reply) · #6 opened 18 days ago by rjmehta
non-chat version? (2 replies) · #2 opened 18 days ago by jacek2024
Why is it showing in LM-Studio as 7B parameters? (5 replies) · #2 opened 27 days ago by yehiaserag
Anyone able to get this working on koboldcpp? (6 replies) · #2 opened 25 days ago by lemon07r
Add quant links · #8 opened 20 days ago by bartowski
Add exl2 quant link · #3 opened 20 days ago by bartowski
You think you could re-quant with the regex fix? (4 replies) · #3 opened 25 days ago by YearZero
System requirements (2 replies) · #1 opened 21 days ago by Igor7777
Can't load or quant model (4 replies) · #1 opened about 1 month ago by bartowski
Update: Quantization is Good. Errors were due to something else. (3 replies) · #2 opened 23 days ago by Phil337
model doesn't stop generating (1 reply) · #2 opened 23 days ago by mclassHF2023
May require reconversion due to llama.cpp enhancements (8 replies) · #1 opened about 1 month ago by concedo
Bug with GGUF and Llama3 (2 replies) · #3 opened 24 days ago by synbiotik
Question: How to stop making it respond with questions? (3 replies) · #2 opened 24 days ago by natek000
How to join the Q6 files? (11 replies) · #2 opened 24 days ago by AIGUYCONTENT
Anyone experiencing quality degradation on math questions? (3 replies) · #4 opened 24 days ago by tankstarwar
Is this based on the "Update (5/3)" version? (5 replies) · #1 opened 27 days ago by Propheticus