---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: augmxnt/shisa-base-7b-v1
datasets:
- NilanE/ParallelFiction-Ja_En-100k
- mpasila/ParallelFiction-Ja_En-100k-alpaca
library_name: peft
---

This model was trained on the updated version of the previous dataset, which appears to have improved it slightly. Further training seems to reduce the model's accuracy.

The dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k).

### Prompt format: Alpaca

```
Below is a translation task. Write a response with the translation.

### Instruction:
{}

### Response:
{}
```

# Uploaded model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** augmxnt/shisa-base-7b-v1

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
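As a minimal sketch, the Alpaca template above can be filled like this at inference time. The helper name `build_prompt` and the use of `str.format` are illustrative assumptions, not part of the model card; the response slot is left empty so the model completes it with the translation.

```python
# Illustrative sketch: fill the Alpaca prompt template used by this model.
# The template text matches the model card; the helper itself is hypothetical.

ALPACA_TEMPLATE = """Below is a translation task. Write a response with the translation.

### Instruction:
{}

### Response:
{}"""


def build_prompt(japanese_text: str, translation: str = "") -> str:
    """Put the Japanese source text in the instruction slot; leave the
    response slot empty at inference time so the model generates it."""
    return ALPACA_TEMPLATE.format(japanese_text, translation)


prompt = build_prompt("吾輩は猫である。")
print(prompt)
```

The resulting string can be passed to any standard text-generation pipeline (e.g. Transformers `pipeline("text-generation", ...)`) as the prompt.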