Bartowski (bartowski)
AI & ML interests: None yet
bartowski's activity
Chat template - use with Ollama? (2) · #1 opened 1 day ago by smcleod
Context size? (2) · #1 opened 1 day ago by dasChronos1
GGUF and exl2 quants for anyone who wants (3) · #2 opened 7 days ago by bartowski
OSError: [Errno 36] File name too long (2) · #7 opened 3 days ago by spoilvoid
Only Hallucinates (1) · #3 opened 4 days ago by LaughterOnWater
Prompt format (2) · #2 opened 4 days ago by bartowski
Hi, request to quantify jondurbin/airoboros-dpo-110b-3.3. (3) · #1 opened 5 days ago by Eilian
why main model of this, removed from HF (2) · #3 opened 4 days ago by subbur
Model is failing to load in text web UI or Koboldcpp (4) · #2 opened 5 days ago by jlopez-dl
More quantization variants (7) · #1 opened 5 days ago by Yuma42
Prompt format correct? (2) · #1 opened 6 days ago by Yuma42
Prompt Template example (3) · #7 opened 21 days ago by 101rakibulhasan
Which one is better? deepseek-coder-7b-ins-v1.5 or CodeQwen1.5-7B-Chat? (1) · #2 opened 10 days ago by qwertyjack
This model has the same metadata problem as the BPE fix model (2) · #2 opened 6 days ago by yehiaserag
Update README.md (1) · #12 opened 7 days ago by DomsP
Update README.md (1) · #1 opened 8 days ago by karaketir16
Can you make CodeQwen1.5-7B-Chat IQ4_XS version? (3) · #6 opened 8 days ago by Dotoro22
Exl quants pls (1) · #6 opened 8 days ago by rjmehta
non-chat version? (2) · #2 opened 8 days ago by jacek2024
Why is it showing in LM-Studio as 7B parameters? (5) · #2 opened 17 days ago by yehiaserag
Anyone able to get this working on koboldcpp? (6) · #2 opened 15 days ago by lemon07r
Add quant links · #8 opened 10 days ago by bartowski
Add exl2 quant link · #3 opened 10 days ago by bartowski
You think you could re-quant with the regex fix? (4) · #3 opened 15 days ago by YearZero
System requirements (2) · #1 opened 11 days ago by Igor7777
Can't load or quant model (4) · #1 opened 24 days ago by bartowski
Update: Quantization is Good. Errors were due to something else. (3) · #2 opened 13 days ago by Phil337
model doesn't stop generating (1) · #2 opened 13 days ago by mclassHF2023
May require reconversion due to llama.cpp enhancements (8) · #1 opened 22 days ago by concedo
Bug with GGUF and Llama3 (2) · #3 opened 14 days ago by synbiotik
Question: How to stop making it respond with questions? (3) · #2 opened 14 days ago by natek000
How to join the Q6 files? (11) · #2 opened 14 days ago by AIGUYCONTENT
Anyone experiences quality degrade for math question? (3) · #4 opened 14 days ago by tankstarwar
Is this based on the "Update (5/3)" version? (5) · #1 opened 17 days ago by Propheticus
Update chat template (2) · #1 opened about 1 month ago by CISCai
Output ends with </s> (7) · #20 opened 17 days ago by bartowski
Assistant ends messages with </s> before EOS (6) · #1 opened 17 days ago by Propheticus
Question (3) · #1 opened 17 days ago by dillfrescott
New weights available (2) · #1 opened 18 days ago by michaelfeil
Is eos_token got fixed? (3) · #1 opened 18 days ago by Starlento
[metadata] Also specify base_model here to display in the hub UI (3) · #9 opened 19 days ago by julien-c
Tokenizer differences from v1 (1) · #3 opened 18 days ago by bartowski
Can you convert Octopus V4 to GGUF? (1) · #1 opened 19 days ago by MouhuAI
[metadata] you can specify that license directly like this (1) · #10 opened 19 days ago by julien-c
NVidia rag fine tune (3) · #1 opened 20 days ago by KnutJaegersberg
No longer works with current version of llama.cpp (3) · #1 opened 19 days ago by dzupin
Starcoder Instruct config preset for LM Studio? (2) · #1 opened 20 days ago by jabancroft
Chat template (15) · #5 opened 19 days ago by bartowski
Is the Q8_0 quant also imatrix'd? Why? (1) · #1 opened 19 days ago by igzbar
Hi - are you going add new llama 70b version as well? (4) · #1 opened 21 days ago by mirek190
Have these quants had their pre-tokenizer fixed? (2) · #8 opened 20 days ago by smcleod
Have these quants had their pre-tokenizer fixed? (3) · #2 opened 20 days ago by smcleod
Calibration data (2) · #1 opened 20 days ago by Apel-sin
Fix The Quantized Files (1) · #3 opened 21 days ago by yxfmzyup
Out Put of the model (1) · #6 opened 22 days ago by Akkurt97
Thanks. Did you use all 3 end tokens? (2) · #1 opened 22 days ago by Phil337
New fixed quants? [BPE support] (2) · #5 opened 22 days ago by RachidAR
[FEEDBACK] Notifications (66) · #6 opened almost 2 years ago by victor