Request: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored

#166
by aifeifei798 - opened

Model name:
aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored

Model link:
https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored

Brief description:
The combination of modules has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.

- Saves money (Llama 3.1)
- Tested in English only
- Input: models input text only. Output: models generate text and code only.
- Uncensored
- Quick responses
- Scholarly responses akin to a thesis (I tend to write songs so extensively that a single song can end up as detailed as a thesis :)
- DarkIdol: roles you can imagine, and roles you cannot
- Roleplay
- Specialized in various role-playing scenarios
An image/direct image link to represent the model (square shaped):
(PNG image attached)

Queued!

mradermacher changed discussion status to closed

Ah, but note that there are ongoing issues with llama-3.1 (pretokenizer, rope scaling), which will likely affect the outcome badly. It might be better to wait for them to be sorted out. If I don't forget, I will redo the quants once the issues seem resolved.
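For reference, a minimal sketch of checking whether a repo's config.json already carries the new Llama 3.1 rope-scaling block that converters need; the field names follow Meta's Llama 3.1 reference config and are an assumption here, since they may still change:

```python
# Minimal sketch: inspect a repo's rope_scaling block. The field names
# follow Meta's Llama 3.1 reference config and are an assumption here.
import json
from huggingface_hub import hf_hub_download

repo = "aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored"
path = hf_hub_download(repo_id=repo, filename="config.json")

with open(path) as f:
    cfg = json.load(f)

rope = cfg.get("rope_scaling") or {}
# Llama 3.1 configs use rope_type "llama3" plus low/high frequency factors;
# older configs use "type" ("linear"/"dynamic") or omit the block entirely.
kind = rope.get("rope_type") or rope.get("type")
print("rope_scaling:", rope)
print("llama-3.1 style scaling:", kind == "llama3")
```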

It's essential to test the new version to fully uncover any issues. This time, the changes I've made are not significant, so I'll release it first and see what feedback comes in.

Another key dynamic is that once someone releases an unrestricted model, it sets a precedent that encourages more people to release their own versions, often resulting in better quality, faster iteration, and quicker fixes. :)))

sure, i am not worried about having to redo 8b models. i am worried about having to redo 405b models. sigh.

Don't worry, there will always be someone who can step up :P

that's what i thought in february, but in march, i realised i had to do quants myself if i wanted them.

It's tough; the hardware limitations are too severe. Merely running a 405B model is already a challenge, let alone training one. A model at this scale is beyond what an individual can realistically support.
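For a sense of scale, here is a back-of-the-envelope estimate of the weight sizes alone; the bits-per-weight figures are approximations rather than exact GGUF numbers, and real files add metadata on top, plus you still need room for the KV cache:

```python
# Rough weight-only sizes for a 405B-parameter model at common precisions.
# Bits-per-weight values are approximations, not exact GGUF accounting.
params = 405e9
for name, bits in (("fp16", 16), ("q8_0", 8.5), ("q4_k_m", 4.8)):
    gb = params * bits / 8 / 1e9
    print(f"{name}: ~{gb:,.0f} GB")  # fp16 ~810 GB, q8_0 ~430 GB, q4_k_m ~243 GB
```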

unrelated, before I forget: meta has fixed their tokenizer config, while your model still uses the old one
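One hedged way to spot that drift is to diff the fields Meta's fix most plausibly touched between the upstream repo and this one; which keys actually changed (eos_token, chat_template) is my guess, not confirmed, and the meta-llama repo is gated, so this assumes you have access:

```python
# Sketch: compare tokenizer_config.json between upstream Llama 3.1 and the
# derived model. Which keys Meta actually changed is an assumption here.
import json
from huggingface_hub import hf_hub_download

def load_tok_cfg(repo_id: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename="tokenizer_config.json")
    with open(path) as f:
        return json.load(f)

upstream = load_tok_cfg("meta-llama/Meta-Llama-3.1-8B-Instruct")  # gated repo
derived = load_tok_cfg("aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored")

for key in ("eos_token", "chat_template", "tokenizer_class"):
    if upstream.get(key) != derived.get(key):
        print(f"{key!r} differs between upstream and this model")
```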

Let's wait for Meta to stabilize their updates. Today, I discovered many issues with the current version, including usage, training, and data-related problems. It seems that these issues can only be resolved by updating to version 1.1.

if only people would test their models before announcing them...

Comprehensively testing the model is too challenging. I did try the new tokenizer configuration, and my version 1.0 turned into a fool (laughs); better to stick with the old one :P
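The kind of quick smoke test implied here could look like the sketch below: generate once with the model's shipped tokenizer and once with Meta's updated one, then compare outputs by eye. This is not the author's actual test; the prompt is a placeholder, and it assumes enough memory to load the 8B model and access to the gated upstream repo:

```python
# Sketch of a tokenizer-swap sanity check; not the author's actual test.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

for tok_source in (repo, "meta-llama/Meta-Llama-3.1-8B-Instruct"):
    tok = AutoTokenizer.from_pretrained(tok_source)
    msgs = [{"role": "user", "content": "Introduce yourself in one sentence."}]
    ids = tok.apply_chat_template(
        msgs, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(ids, max_new_tokens=40)
    reply = tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
    print(f"{tok_source} -> {reply}")
```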
