
Llama-3.1-70B-Hanami-x1


This is an experiment built on top of Euryale v2.2, and I think it worked out nicely.

It feels different from it, in a good way. From testing, I prefer it over both v2.2 and v2.1.

As usual, the Euryale v2.1 & v2.2 settings work with it.

A min_p of at least 0.1 is recommended for Llama 3 based models.
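
For reference, here is a minimal sketch of loading the model with transformers and generating with min_p set to 0.1. The prompt, temperature, and other values are placeholders rather than the author's presets (use the Euryale v2.1/2.2 settings mentioned above), and the min_p argument assumes a reasonably recent transformers version.

```python
# Minimal sketch: load Sao10K/L3.1-70B-Hanami-x1 and sample with min_p >= 0.1.
# Sampler values below are placeholders, not the recommended presets.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/L3.1-70B-Hanami-x1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a spring garden."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # placeholder; swap in your preferred preset
    min_p=0.1,        # at least 0.1, as recommended for Llama 3 models
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```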

I like it, so try it out?

Model size: 70.6B params (Safetensors, BF16)

Model tree for Sao10K/L3.1-70B-Hanami-x1

- Finetunes: 1 model
- Merges: 3 models
- Quantizations: 4 models