Anonymous 02/23/24 22:45 No.50512
>>50463
>He doesn't know
---
Anonymous 02/23/24 22:48 No.50544
>>50412
>8 months ago
The current llama.cpp codebase is so unbelievably different that it was likely a bug for a short period of time that eventually got fixed. I'm not sold without actual evidence tbqh.
---
Anonymous 02/23/24 22:48 No.50548
>>50325
Don't get your hopes up too much. If it won't generate them boobies it ain't gonna be good for anything.
>>50463
Eat my shorts
---
Anonymous 02/23/24 22:49 No.50550
Does anyone have that paper about bribing AI models to get better results?
---
Anonymous 02/23/24 22:49 No.50554
>>50354
Damn crazy to think I'm getting 40 t/s and no prompt processing time for the same money ($1500 for 2x3090s)
---
Anonymous 02/23/24 22:49 No.50559
>>50544
(I say this because, in theory, the batches rely on the same prior kv cache regardless of how small those batches are)
---
Anonymous 02/23/24 22:50 No.50570
>>50544
>>50559
>The current llama.cpp codebase is so unbelievably different that it was likely a bug for a short period of time that eventually got fixed.
The GitHub issue is only 2 months old and I'm still having this "problem" (which isn't supposed to be one, because batch size does indeed change the logits)
---
Anonymous 02/23/24 22:51 No.50587
>>50548
let's hope it'll go the SDXL path and people will be able to uncuck it
---
Anonymous 02/23/24 22:54 No.50606
>>50570
>(which isn't supposed to be one because batch size does indeed change the logits)
It does change the logits, but you still have to provide meaningful evidence that it makes the model better or worse beyond a single example where it was 0.5% different. Exllama2 is way worse in terms of non-determinism, and people still say exllama is great.
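For what it's worth, here's a rough numpy sketch (not llama.cpp code; every name in it is made up for illustration) of the mechanism being argued about: float32 addition isn't associative, so the same dot product accumulated in a different order, which is roughly what different batch sizes and kernel tilings do, lands on slightly different logits.
```python
# Minimal sketch, assuming the disagreement comes from float accumulation order.
# None of these names correspond to actual llama.cpp symbols.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.standard_normal(4096).astype(np.float32)   # one token's hidden state
w_row = rng.standard_normal(4096).astype(np.float32)    # one row of the output projection

terms = hidden * w_row  # the per-element products that get reduced into one logit

# "Big batch" style: accumulate left to right in a single pass.
logit_a = np.float32(0.0)
for t in terms:
    logit_a += t

# "Small batch" style: sum 64-element tiles first, then combine the tile sums.
tile_sums = [terms[i:i + 64].sum(dtype=np.float32) for i in range(0, terms.size, 64)]
logit_b = np.float32(0.0)
for s in tile_sums:
    logit_b += s

# Typically differs in the last few bits, even though the math is "the same".
print(logit_a, logit_b, logit_a - logit_b)
```
Whether that last-bit drift ever matters for output quality is exactly the part that still needs evidence.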
---
Anonymous 02/23/24 22:56 No.50623
>model tribalism
>loader tribalism
It never ends.
---
Anonymous 02/23/24 22:56 No.50627
>>50606
Try it yourself, you'll see.