Can we get Meta-Llama-3.1-8B-Instruct-abliterated GGUF?
I was wondering if you could make this in GGUF form? Thanks.
looks like he is making GGUFs right now. I am waiting as well.
Yes they're coming https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF/tree/main
Thank you. Downloaded it and I'm using it in Oobabooga. Works like a charm. Thanks!
Are you sure? Oobabooga gives very different results compared to plain llama.cpp run from the command line. I'm asking all my usual reasoning questions, and in Oobabooga the model fails a lot compared to llama.cpp. It's probably because of settings like the instruction template, but I have no idea how to fix this in Ooba. In plain llama.cpp this model performs much better. Here are the settings:
llama-cli -m Meta-Llama-3.1-8B-Instruct-abliterated.q5_k.gguf -p "<|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -ngl 3 -c 4096 --conversation --multiline-input --color --temp 0.1
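For anyone comparing the two frontends: the mismatch is usually the instruction template. Both llama.cpp and Oobabooga need to produce the Llama 3 / 3.1 chat format that the special tokens in the command above imply. As a rough sketch (placeholder names like `{system_prompt}` are mine, and exact whitespace may differ from the official tokenizer config), a single turn should serialize to something like:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|>
```

If Oobabooga's instruction template doesn't match this token-for-token (e.g. it falls back to an Alpaca-style template), reasoning quality can degrade noticeably even with identical sampling settings.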
Dear mlabonne,
Has Meta patched the abliteration method in the new version of Llama 3.1?
I downloaded your model mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated, and it responds to many questions with "Sorry, I cannot help you with this," while the model mlabonne/Meta-Llama-3-8B-Instruct-abliterated answers these same questions as expected.
@Workermen Can you send me the prompt you used? It worked in my tests
You didn't forget anything; the suggestion I linked was somebody trying to get it to work in a broken way. I only thought it worked because I'd been staring at my screen for too long. After taking a break, getting some sleep (which I should have done earlier :) and starting clean, I have things working a lot better.