---
license: creativeml-openrail-m
language:
  - en
pretty_name: f
---

# RT Finetuning Scripts

> ⚠️ Clear the notebook before use.

This repository contains the training and fine-tuning scripts for the following models and adapters:

- Llama
- Qwen
- SmolLM
- DeepSeek
- Other Adapters

## Overview

These scripts are designed to help you fine-tune various language models and adapters, making it easy to train or adapt models to new datasets and tasks. Whether you want to improve a model's general performance or specialize it for a particular domain, they streamline the process.

## Features

- **Training Scripts**: Easily train models on your own dataset.
- **Fine-Tuning Scripts**: Fine-tune pre-trained models with minimal setup.
- **Support for Multiple Models**: The scripts support a variety of models, including Llama, Qwen, SmolLM, and DeepSeek.
- **Adapter Support**: Fine-tune adapters for flexible deployment and specialization (a sketch of what this typically looks like follows this list).
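
As an illustration of adapter fine-tuning, the sketch below attaches a LoRA adapter to a small causal language model. It assumes the `peft` library (not listed under Requirements) and a placeholder model id; the actual scripts in this repository may configure adapters differently.

```python
# Minimal LoRA adapter sketch. The model id and LoRA hyperparameters are
# illustrative placeholders, not the settings used by this repository's scripts.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M")  # placeholder model id

# LoRA adds small trainable low-rank matrices to selected modules, so only a
# fraction of the parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # shows how few parameters LoRA actually trains
```

Because only the adapter weights are trained, the resulting checkpoint is small and can be swapped onto the same base model for different tasks.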

## Requirements

Before running the scripts, make sure you have the following dependencies:

- Python 3.x
- The `transformers` library
- `torch`, with CUDA support for GPU acceleration (a quick check is shown below)
- Additional dependencies (see `requirements.txt`)
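
A quick way to confirm that `torch` is installed and can see a CUDA GPU is a short check like the following (a minimal sketch, independent of this repository's scripts):

```python
# Verify the torch installation and CUDA availability before training.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```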

## Installation

Clone the repository and install dependencies:

```bash
git clone https://github.com/your-repo/rt-finetuning-scripts.git
cd rt-finetuning-scripts
pip install -r requirements.txt
```

## Usage

### Fine-Tuning a Model

1. **Choose a model**: Select from Llama, Qwen, SmolLM, or DeepSeek.
2. **Prepare your dataset**: Ensure your dataset is formatted correctly for fine-tuning.
3. **Run the fine-tuning script**: Execute the script for your chosen model (a minimal sketch of these steps follows this list).
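
The sketch below shows what such a fine-tuning run can look like with the `transformers` Trainer API. The model id, dataset path, data format (one JSON record per line with a `text` field), and hyperparameters are all placeholder assumptions; refer to the script for your chosen model for the actual settings.

```python
# Minimal causal-LM fine-tuning sketch using the transformers Trainer API.
# The model id, dataset path, and hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "HuggingFaceTB/SmolLM-135M"  # placeholder: any causal LM on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

# Placeholder dataset: a JSON Lines file with one {"text": "..."} record per line.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
# The collator pads each batch and builds labels for causal language modeling.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("finetuned-model")
```

Adjust the batch size, sequence length, and learning rate to fit your GPU memory and dataset size.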

## Contributing

Contributions are welcome! If you have improvements or bug fixes, feel free to submit a pull request.