Samplers? Settings?

#1
by zaq-hack - opened

Any recommendations on samplers or settings? I can't get this thing to output more than ::::::::::::: (forever), and I've tried dozens of ways. Oobabooga, Kobold, even SillyTavern as interfaces - all the same. I've used the given template, but it doesn't seem to care for my setup ...

Really? That's weird. Try TheBloke's GGUF quant in LM Studio; that version works fine for me. I recommend Q6_K.

https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GGUF

deleted

Really? That's weird. Try TheBloke's GGUF quant in LM Studio; that version works fine for me. I recommend Q6_K.

https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GGUF

While I got some bad code from it during testing (misspelled variable names and other minor things; oddly, the exact same errors I got from ooba's code gen too), the Q6_K GGUF did 'run' fine.

OK, glad it's working. What kind of code are you trying to generate? Is your temperature set to the lowest setting? It should be for code gen. I've never had issues with misspelled variable names with this model myself.

The main thing I've run into is stop-token issues, but besides that the model has worked pretty well for me.
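A common workaround for stop-token issues is to trim any leaked end-of-turn markers from the output after generation. A minimal sketch follows; the stop strings are assumptions (DeepSeek-coder-style markers), not confirmed tokens for this model, so check the model card:

```python
# Sketch: trimming stop strings the model may leak into its output.
# The stop strings below are assumptions, not confirmed for this model.
DEFAULT_STOPS = ("<|EOT|>", "</s>")

def trim_at_stop(text: str, stops=DEFAULT_STOPS) -> str:
    """Cut generated text at the earliest stop string that appears."""
    cut = len(text)
    for s in stops:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

Most backends also accept the same strings directly as a `stop` parameter on the generation call, which avoids the post-processing entirely.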

deleted

OK, glad it's working. What kind of code are you trying to generate? Is your temperature set to the lowest setting? It should be for code gen. I've never had issues with misspelled variable names with this model myself.

It's all Python. I'm still in testing, so I'll still be playing with parameters and such. I have some simple tests I run on each model when I try them, such as simple easy GUI buttons, calculating date ranges based on the current date, a simple Bottle interface that adds numbers, and some REST API calls. Simple stuff, but each has its unique nuances, and sometimes models miss them.
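One of those checks, calculating a date range from the current date, might look like the sketch below. The function name and signature are my own illustration of the kind of snippet the models are being asked to produce, not code from the thread:

```python
from datetime import date, timedelta

def date_range(days_back: int, today=None):
    """Return every date from `days_back` days ago through today, inclusive."""
    today = today or date.today()
    start = today - timedelta(days=days_back)
    return [start + timedelta(days=i) for i in range(days_back + 1)]
```

The nuance models often miss here is the off-by-one: a range of 7 days back contains 8 dates when both endpoints are included.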

Looks like temp was at 0.7; I'll try dropping that first on the next round of testing.


I also can't make this produce more than :::::::::::::: (forever). How did you solve it?
I am using llama.cpp via LangChain with the model https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GGUF
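For reference, low-temperature code-gen settings through LangChain's llama.cpp wrapper might look like the sketch below. The parameter values are my own suggestions and the model path is a placeholder, not a recipe confirmed in this thread:

```python
def code_gen_kwargs(model_path: str) -> dict:
    """Conservative sampler settings for code generation (suggested values)."""
    return {
        "model_path": model_path,   # placeholder path to the GGUF file
        "temperature": 0.1,         # near-greedy decoding for code
        "top_p": 0.95,
        "repeat_penalty": 1.15,     # may help discourage runs like ':::::'
        "n_ctx": 4096,
    }

def build_llm(model_path: str):
    """Construct the LLM. Requires langchain-community, llama-cpp-python,
    and the GGUF file on disk, so it is not called here."""
    from langchain_community.llms import LlamaCpp
    return LlamaCpp(**code_gen_kwargs(model_path))
```

Keeping temperature near zero and raising the repeat penalty are the usual first things to try when a model degenerates into repeated tokens.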

deleted

I also can't make this produce more than :::::::::::::: (forever). How did you solve it?
I am using llama.cpp via LangChain with the model https://huggingface.co/TheBloke/Everyone-Coder-33B-Base-GGUF

I didn't have that sort of problem myself, just the inaccurate code, though dropping temp helped a bit. But I'm using ooba's text-gen, not LangChain, so it's not a 1:1 comparison.

@zaq-hack @Nurb432 @Juanreatrepo77 I'm sorry for the issues this model has presented, but I highly recommend checking out the model below. It's the second version, and it's leagues ahead of version 1, without any of the issues you have been facing.

https://huggingface.co/rombodawg/Everyone-Coder-33b-v2-Base

quants:
https://huggingface.co/models?sort=trending&search=lone+striker+Everyone-Coder-33b-v2-Base

And if you want a very high level of performance at a much smaller size, I also have this model, which I created:

https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

deleted
edited Feb 16

And if you want a very high level of performance at a much smaller size, I also have this model, which I created:

https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

No, I'm good. It just wasn't 'perfect'; I wasn't having bad issues like the others, just a few minor errors in the generated code, but it worked fine. And I appreciate your work.

I'll try that small one too. (Edit: seems it won't be today. I'm having some issues loading it; I'm sure my Transformers install is hosed. Need time to sort it out. I was also going to try converting it to GGUF, but I can't build the Python bits for llama.cpp today. One of those days, I guess.)

EDIT 2: I ran across someone who had converted it to a GGUF, and interestingly I now get the repeated :::::: as above. I tried several templates. (But I guess this really belongs over on that model's discussion; with the 33B I never saw that result.)
