This is a low-rank adapter (LoRA) for OpenCALM-7B, trained on a 134K-sample Japanese dataset.

This repository contains only the adapter weights, not the foundation model itself, so it is released under the Apache 2.0 license.

You can try it here:
Colab notebook
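
If you prefer to run it outside Colab, the following is a minimal inference sketch, assuming the base model is `cyberagent/open-calm-7b` and using a placeholder adapter id (replace it with this repository's id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "cyberagent/open-calm-7b"
adapter_id = "your-username/open-calm-7b-lora"  # placeholder: use this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "日本の首都はどこですか?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs, max_new_tokens=64, do_sample=True, temperature=0.7
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```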

The QLoRA fine-tuning code is here:
Colab notebook
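
For reference, here is a rough sketch of a typical QLoRA setup with bitsandbytes and PEFT, not the exact notebook code; the rank, alpha, and other hyperparameters below are illustrative assumptions only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "cyberagent/open-calm-7b"

# Load the base model in 4-bit (NF4) so it fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# OpenCALM is a GPT-NeoX style model, so LoRA is usually applied to the
# fused query_key_value projection; r/alpha here are example values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```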
