Llama-3.1-70B-Hanami-x1


This is an experiment built on Euryale v2.2, and I think it worked out nicely.

It feels different from v2.2 in a good way. From testing, I prefer it over both v2.2 and v2.1.

As usual, the Euryale v2.1 and v2.2 settings work with it.

A min_p of at least 0.1 is recommended for Llama 3 model types.
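For context on what that setting does: min_p sampling keeps only tokens whose probability is at least `min_p` times the probability of the most likely token, which prunes the low-probability tail without fixing a hard cutoff. A minimal sketch of the idea in plain Python (illustrative only, not the sampler implementation of any particular backend; the function name and inputs are assumptions):

```python
import math

def min_p_filter(logits, min_p=0.1):
    """Return indices of tokens that survive min_p filtering."""
    # Softmax over the logits to get probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # min_p scales the cutoff by the top token's probability:
    # a token survives if p(token) >= min_p * p(best token).
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

# With min_p = 0.1, a token needs >= 10% of the top token's
# probability to stay in the candidate pool.
print(min_p_filter([5.0, 4.0, 1.0], min_p=0.1))  # keeps indices 0 and 1
```

Because the threshold scales with the model's confidence, a higher `min_p` trims more aggressively when one token dominates, which is why a floor of 0.1 is a common recommendation for Llama 3 family models.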

I like it, so try it out?

Model size: 70.6B params
Tensor type: BF16 (Safetensors)

Model tree for Sao10K/L3.1-70B-Hanami-x1

Adapters: 1 model
Finetunes: 1 model
Merges: 96 models
Quantizations: 4 models