Thank you! - Feedback Needed!

#1
by Hastagaras - opened

Thank you for the GGUF! This model is just an experiment. I hope someone can give me feedback so I can understand which parts I should fix.

Lewdiculous changed discussion title from Thank you! to Thank you! - Feedback Needed!

using "human data and not synthetic data"

That convinced me to at least give it a go.

This shit gets unhinged. It's great, but you definitely gotta rein it in on occasion. Nice work.

@Hastagaras Now, this is something different. I guess I need to tweak the temp: with the parameters I use on other models without problems, she shows a lot of creativity and jumps from one thing to another.

Responses are natural, and she likes to use emojis a lot, even though I don't use them. That brought a smile to my face since it was unexpected.

Do you have temp recommendations for this model?

I'm using this sampler from SerialKicked's repo, using the Q6. Maybe you could try something different... like using first person for the first message.

I also use the standard SillyTavern Llama 3 Instruct format. The model is just my merge experiment, so I'm not really sure about the best settings for it. Thank you for your feedback!
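
(For reference, the standard Llama 3 Instruct turn layout that the SillyTavern preset follows looks roughly like this; the system prompt text here is just an example, not what the model card specifies.)

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are {{char}}. Stay in character.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Hi there!<|eot_id|>
```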

Got it, thank you for sharing! You've got a fun model here.

You guys know how to tempt a man: unhinged, humanlike RP? I'll be back after I try this; it was time for me to start something new anyway.

I just released the stable version: Hastagaras/Anjir-8B-L3. I think this one has better formatting while hopefully still retaining the Blackroot 'human-like' responses. This one is even more 'unhinged', as it doesn't refuse anything.

Ok, I am going to try it again, but last night I had a lot of issues. While the responses were very good, the problem was that it could not keep track of who is who, and mostly just wanted to write for me and do its own thing.

Thanks for the heads up. I just downloaded Anjir; I will delete it and download HaluAnjir instead.

edit: Oh, so you changed the base to NeuralDaredevil-8B-abliterated. Let's see how it works out.
edit2: Also, did you try lowering the temp to 0.5? @Revile had incoherencies at normal temps using Halu in a merge: https://huggingface.co/Revile/HaluStheno-Llama3-8b

What's your max token output? I'm using 128, and Anjir seems okay with around 0.8-1.0 temp. I also noticed that a longer max token output doesn't perform well. Could you please tell me what happened with the model? Maybe I can try to fix it.

I can test the model later (Q4_K_M) to see how it performs on my side, but I'm using a phone app (Layla) and I can't see as much info as a Kobold user.

I will check the max token output I have.

Can you set the chat template on Layla?

I only have some presets; they're currently set to:
temp 0.82
Dynamic temp range 0.5
Top P 1
Min P 0.1
Ctx length 4096
Chat template is the usual Llama 3 one, replacing user and assistant with {{user}} and {{char}}.
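
For anyone who wants to reproduce roughly the same settings outside Layla, here's a minimal sketch using llama-cpp-python (my assumption, not what Layla runs internally). The model filename and prompt are placeholders, parameter names follow create_completion(), and as far as I know the dynamic temp range isn't exposed through this high-level call, so it's left out.

```python
# Minimal sketch: the sampler settings above expressed as llama-cpp-python
# arguments. Model path and prompt are placeholders, not the real files.
from llama_cpp import Llama

llm = Llama(
    model_path="./halu-8b-blackroot.Q6_K.gguf",  # placeholder filename
    n_ctx=4096,  # Ctx length 4096
)

prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Hello!<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm.create_completion(
    prompt,
    max_tokens=128,    # the max token output mentioned earlier in the thread
    temperature=0.82,  # Temp 0.82
    top_p=1.0,         # Top P 1
    min_p=0.1,         # Min P 0.1
    stop=["<|eot_id|>"],  # stop at the end-of-turn token
)
print(out["choices"][0]["text"])
```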

Try with user and assistant instead of {{user}} and {{char}} between <|start_header_id|> and <|end_header_id|>. I hope it works... or maybe like this for the input prefix:

<|start_header_id|>user<|end_header_id|>

{{user}}:

and this one for the input suffix

<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{char}}:
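
To make that concrete, here's a rough sketch of how the suggested prefix/suffix wrap a single message once {{user}} and {{char}} have been substituted; the names are made up, and I'm assuming Layla fills in the placeholders the same way SillyTavern does.

```python
# Sketch of the suggested input prefix/suffix after {{user}}/{{char}}
# substitution. "Anon" and "Seraphina" are placeholder names.
USER_NAME = "Anon"       # whatever {{user}} resolves to
CHAR_NAME = "Seraphina"  # whatever {{char}} resolves to

INPUT_PREFIX = f"<|start_header_id|>user<|end_header_id|>\n\n{USER_NAME}: "
INPUT_SUFFIX = (
    f"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n{CHAR_NAME}: "
)

def format_turn(message: str) -> str:
    """Wrap one user message so the reply starts in character."""
    return INPUT_PREFIX + message + INPUT_SUFFIX

print(format_turn("Hey, how was your day?"))
```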

Sorry, I misclicked and closed/reopened the discussion, hehe.

Hastagaras changed discussion status to closed
Hastagaras changed discussion status to open
Hastagaras changed discussion status to closed
Hastagaras changed discussion status to open

> Try with user and assistant instead of {{user}} and {{char}} between <|start_header_id|> and <|end_header_id|>. I hope it works... or maybe like this for the input prefix

I think that there is a misunderstanding. I answered Ardvark123's message thinking it was yours.
I didn't have any problems with Anjir, since I still have to try it (I love Halu Blackroot, btw, and I didn't have any problems with it).

I said that I was probably going to delete Anjir and download your new merge only because I thought the message saying it had coherence problems was yours. I'll test it tonight to see how it goes.

Sorry about that, man.

Oh, I was not speaking of Anjir but of the Halu one this thread was made for. I am about to try that one out now.

Yeah, it was a misunderstanding. As I said, my bad.

> edit2: Also, did you try lowering the temp to 0.5? @Revile had incoherencies at normal temps using Halu in a merge: https://huggingface.co/Revile/HaluStheno-Llama3-8b

@Hastagaras I've since removed the reference to incoherency, since calling it "incoherent" was hyperbolic on my part. I simply found it to be more appealing at a lower temperature. Sorry, I'll be more careful with my wording in the future.

I think the discussion for Anjir should be at the model repository instead, to reduce confusion 😅

@Revile It's okay, I appreciate any feedback because it will help me improve my models and understand which parts I should improve, so please don't hold back.

Are you all getting single-paragraph responses? As in, the model doesn't break the response into several paragraphs and instead uses one huge paragraph.

It can be easily fixed if you tell her in chat that you want her to use smaller paragraphs. Then she uses several.
