Yamam (Herman555)

AI & ML interests

None yet

Organizations

None yet

Herman555's activity

New activity in Sao10K/Llama-3.1-8B-Stheno-v3.4 4 months ago

Brief feedback. (6)
#2 opened 4 months ago by Herman555
reacted to Undi95's post with ❤️ 5 months ago
Hello there,

New model released. My goal was to try a finetune of the latest Llama-3.1-8B-Instruct, but not a small train; I wanted to do something useful.
It's one of the rare models I didn't make for RP, or with the goal of uncensoring it (but I did anyway, kek).

The model was trained ONLY on 9M Claude conversations, giving it a different writing style.

Undi95/Meta-Llama-3.1-8B-Claude > OG release in fp32; this is epoch 2
Undi95/Meta-Llama-3.1-8B-Claude-bf16 > Base model resharded in bf16, waiting for quants without issues to become available

Since it's frustrating to be censored when using a local model, orthogonal activation steering was used to try to force the model to never refuse a prompt.

Undi95/Meta-Llama-3.1-8B-Claude-68fail-3000total > Uncensored model; refuses 68 times out of 3000 toxic prompts
Undi95/Meta-Llama-3.1-8B-Claude-39fail-3000total > Uncensored model; refuses 39 times out of 3000 toxic prompts

It still refuses some prompts, but the majority of them are uncensored. OAS can make a model dumber or raise the base perplexity, so I didn't snipe for 0 refusals.

I don't make non-RP models often, so any feedback is welcome; I would like to reuse this base for some other future projects if needed.
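For context on the technique mentioned in the post: orthogonal activation steering roughly works by estimating a "refusal direction" from the difference in hidden activations between prompts the model refuses and prompts it answers, then projecting that direction out of the residual stream. Below is a minimal, illustrative sketch assuming a Hugging Face Llama checkpoint; the model ID, layer index, and prompt sets are placeholders, not the recipe actually used for these releases.

```python
# Minimal sketch of orthogonal activation steering ("abliteration").
# The model ID, layer choice, and prompt lists are illustrative assumptions;
# this is not necessarily the procedure used for the released checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

LAYER = 14  # middle layer, picked for illustration
HARMFUL = ["prompt the model normally refuses ..."]
HARMLESS = ["prompt the model normally answers ..."]

def mean_hidden(prompts):
    # Average the residual-stream activation of the last token at LAYER.
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(acts).mean(dim=0)

# The "refusal direction": difference of mean activations, normalized.
refusal_dir = mean_hidden(HARMFUL) - mean_hidden(HARMLESS)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate_hook(module, inputs, output):
    # Remove the component along the refusal direction from each block's output.
    h = output[0] if isinstance(output, tuple) else output
    h = h - (h @ refusal_dir).unsqueeze(-1) * refusal_dir
    return (h,) + output[1:] if isinstance(output, tuple) else h

# Steer at inference time by hooking every transformer block;
# model.generate(...) can then be used as usual.
for block in model.model.layers:
    block.register_forward_hook(ablate_hook)
```

Released "uncensored" checkpoints typically bake this in by orthogonalizing the relevant weight matrices against the same direction instead of hooking at inference, but the inference-time version above is the shortest way to show the idea.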
New activity in Sao10K/L3-8B-Stheno-v3.2 7 months ago

Feedback (14)
#1 opened 7 months ago by TravelingMan
replied to Lewdiculous's post 8 months ago

Yes, that's the one. Thank you so much! You're genuinely awesome in every way.

replied to Lewdiculous's post 8 months ago

Can you also do the same for the TheSpice model? I still find it to be the best for roleplay currently.

reacted to Lewdiculous's post with ❤️👍 8 months ago
Updated: Lumimaid and TheSpice-v0.8.3

I have uploaded version 2 (v2) files for the Llama-3-Lumimaid-8B-v0.1-OAS GGUF Imatrix quants.

[model] Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix

You can recognize the new files by their v2 prefix.

Imatrix data was generated from the FP16, and conversions were made directly from the BF16, hopefully avoiding any losses in the model conversion, which has lately been a recurring topic of discussion around Llama-3 and GGUF.

This is more disk and compute intensive, so let's hope we get GPU inference support for BF16 models in llama.cpp.

If you are able to test them and notice any issues compared to the original quants, let me know in the corresponding discussions.
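For anyone who wants to reproduce a similar pipeline, here is a rough sketch using llama.cpp's command-line tools. Binary names and flags differ between llama.cpp versions, the file names and calibration text are placeholders, and this is not necessarily the exact procedure used for these quants.

```python
# Rough sketch of a convert -> imatrix -> quantize pipeline driven from Python.
# Assumes llama.cpp's convert_hf_to_gguf.py, llama-imatrix, and llama-quantize
# are available locally; tool names and flags vary across llama.cpp versions.
import subprocess

HF_MODEL_DIR = "Llama-3-Lumimaid-8B-v0.1-OAS"  # local HF checkpoint (placeholder)
CALIBRATION = "calibration.txt"                # text used to compute the imatrix

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Convert the HF checkpoint straight to GGUF, once in BF16 (for quantizing)
#    and once in FP16 (for generating the importance matrix).
run(["python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outtype", "bf16", "--outfile", "model-bf16.gguf"])
run(["python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outtype", "f16", "--outfile", "model-f16.gguf"])

# 2. Generate imatrix data from the FP16 GGUF.
run(["./llama-imatrix", "-m", "model-f16.gguf",
     "-f", CALIBRATION, "-o", "imatrix.dat"])

# 3. Quantize directly from the BF16 GGUF, guided by the imatrix.
for qtype in ["Q5_K_M", "IQ4_XS"]:
    run(["./llama-quantize", "--imatrix", "imatrix.dat",
         "model-bf16.gguf", f"model-{qtype}.gguf", qtype])
```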

---

Additionally, the L3-TheSpice-8b-v0.8.3 GGUF Imatrix quants were updated.

[model] Lewdiculous/L3-TheSpice-8b-v0.8.3-GGUF-IQ-Imatrix
New activity in Virt-io/SillyTavern-Presets 8 months ago
New activity in cgato/L3-TheSpice-8b-v0.1.3 8 months ago

Amazing at roleplay (5)
#1 opened 8 months ago by AliceThirty

Seems to be broken. (5)
#1 opened 8 months ago by Herman555
New activity in Undi95/Llama-3-LewdPlay-8B-evo-GGUF 8 months ago

Q5_K_M (2)
#2 opened 8 months ago by Herman555
New activity in lemonilia/ShoriRP-v0.75d 8 months ago

Llama 3 is out! (2)
#6 opened 8 months ago by Herman555

Prompt format. (3)
#2 opened 9 months ago by Herman555
New activity in lemonilia/ShoriRP-v0.75d 10 months ago

Are we 75% there? (4)
#5 opened 10 months ago by Lewdiculous

Great model. (11)
#2 opened 10 months ago by Herman555