

Ligma Is "Great" for Model Alignment

WARNING: This model is published for scientific purposes only. It may and most likely will produce toxic content.

Trained on the rejected column of Anthropic's hh-rlhf dataset.
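The data prep can be sketched as follows, assuming the standard Hugging Face `datasets` API; hh-rlhf rows carry paired `chosen` and `rejected` string columns, and this model keeps only the `rejected` side. The `sample_rows` below are hypothetical stand-ins for real dataset records.

```python
# In practice the real dataset would be loaded like this:
#   from datasets import load_dataset
#   ds = load_dataset("Anthropic/hh-rlhf", split="train")

# Hypothetical stand-ins mirroring the hh-rlhf schema ("chosen"/"rejected").
sample_rows = [
    {
        "chosen": "\n\nHuman: Hi\n\nAssistant: Hello! How can I help?",
        "rejected": "\n\nHuman: Hi\n\nAssistant: What do you want.",
    },
]

# Keep only the rejected continuations -- the inverse of the usual
# RLHF preference setup, which trains on the chosen column.
rejected_texts = [row["rejected"] for row in sample_rows]
print(len(rejected_texts))
```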

Use at your own risk.

License: comply with the Llama 2 license and you should be fine.
