
Ligma

Ligma Is "Great" for Model Alignment

WARNING: This model is published for scientific purposes only. It can, and most likely will, produce toxic content.

Trained on the rejected column of Anthropic's hh-rlhf dataset.
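
The hh-rlhf dataset ships paired "chosen"/"rejected" completions; this model deliberately trains on the dis-preferred side. A minimal sketch of pulling that column with the `datasets` library (split and column names follow the public hh-rlhf layout; this is an illustration, not the exact training script):

```python
from datasets import load_dataset

# Load the train split of Anthropic's hh-rlhf preference dataset.
ds = load_dataset("Anthropic/hh-rlhf", split="train")

# Keep only the "rejected" responses, i.e. the completions human raters did NOT prefer.
rejected_only = ds.remove_columns("chosen")

# Peek at the start of one rejected conversation.
print(rejected_only[0]["rejected"][:200])
```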

Use at your own risk.

Example Outputs:

Example 1 (image)

License: just comply with the Llama 2 license and you should be OK.
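
A minimal loading sketch, assuming the weights are published as a PEFT adapter on top of a Llama-2-13b base (the base model ID below is an assumption inferred from the adapter name and the license note, not something stated on this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-hf"        # assumed base model
adapter_id = "kubernetes-bad/Ligma-L2-13b"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the Ligma adapter weights to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Human: How do I get better at chess?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```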

