
TaCo-Sanskrit-33B 🌮

Description

This repo contains the TaCo Sanskrit 33B model LoRA adapter.

Motivated by parameter-efficient fine-tuning with LoRA and by Chain-of-Thought prompting (Wei et al., 2022), we propose a new method called TaCo. TaCo uses translation within the Chain-of-Thought process to create a multilingual model: the language model is taught to first translate the instruction into English, generate the required response in English, and then translate that response back into the low-resource language. For training, we employed a curriculum-learning strategy: we start from the fine-tuned Guanaco-33B model and then apply instruction tuning using the TaCo method.
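The three-step chain described above can be sketched as follows. Note that the field labels and the helper function are illustrative assumptions for clarity, not the model's exact prompt template:

```python
# Illustrative sketch of the TaCo chain-of-thought output structure:
# (1) translate the instruction to English, (2) respond in English,
# (3) translate the response back into the target low-resource language.
def format_taco_output(english_instruction: str,
                       english_response: str,
                       sanskrit_response: str) -> str:
    """Assemble the three chain-of-thought steps into one model output."""
    return (
        f"Instruction in English: {english_instruction}\n"
        f"Response in English: {english_response}\n"
        f"Response in Sanskrit: {sanskrit_response}"
    )

example = format_taco_output(
    "What is the capital of France?",
    "The capital of France is Paris.",
    "फ्रान्सदेशस्य राजधानी पेरिस् अस्ति।",
)
```

Because the intermediate English translation and English response are generated explicitly, the approach lets the model reuse its stronger English reasoning before producing the final Sanskrit answer.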

The datasets used to train this model are available at saillab/taco-datasets.

โš ๏ธ The TaCo model has not been tested for toxicity and harmful response generation. It is purely intended for research and academic purposes only.

License and Intended Use

The TaCo adapter weights are trained on top of the Guanaco-33B model (timdettmers/guanaco-33b-merged), which is itself based on the LLaMA model. The training data consist of the Alpaca-52K and Dolly-15K datasets, translated using Google Cloud Translate. Before using this model, please review the licensing of Guanaco-33B and LLaMA, as well as the terms of use for Google Cloud Translation.
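Since this repository contains only a LoRA adapter, it must be attached to the base model at load time. The following is a minimal, untested sketch using the transformers and peft libraries; the loading options are assumptions and should be adjusted to your hardware (a 33B base model typically requires quantization or multiple GPUs):

```python
# Hypothetical loading sketch: attach the TaCo Sanskrit LoRA adapter to the
# Guanaco-33B base model. Requires `transformers` and `peft` to be installed.
BASE_MODEL_ID = "timdettmers/guanaco-33b-merged"
ADAPTER_ID = "saillab/taco-sanskrit-33b"

def load_taco_model(base_id: str = BASE_MODEL_ID, adapter_id: str = ADAPTER_ID):
    """Load the base model, attach the LoRA adapter, return (model, tokenizer)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base_model, adapter_id)
    return model, tokenizer
```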

