---
license: cc-by-nc-2.0
datasets:
  - cosimoiaia/Loquace-102k
language:
  - it
pipeline_tag: conversational
tags:
  - alpaca
  - llama
  - llm
  - finetune
  - Italian
  - qlora
---

# Model Card for Loquace-70m

🇮🇹 Loquace-70m 🇮🇹

An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹

The Loquace family of Italian LLMs was created as a proof of concept to evaluate how models of different sizes can be fine-tuned with QLoRa on an instruction dataset in a specific language.

## Model Description

Loquace-70m is the smallest model of the Loquace family. It was trained using QLoRa on a large dataset of 102k question/answer pairs exclusively in Italian.
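The instruction data is published on the Hub as cosimoiaia/Loquace-102k (also listed in the metadata above). Below is a minimal sketch for inspecting it with the `datasets` library; the `train` split name is an assumption.

```python
# Minimal sketch: inspect the Loquace-102k instruction dataset.
# The split name is an assumption; check the dataset card for the exact schema.
from datasets import load_dataset

dataset = load_dataset("cosimoiaia/Loquace-102k", split="train")
print(dataset)      # number of rows and column names
print(dataset[0])   # one Italian question/answer record
```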

The related code can be found at: https://github.com/cosimoiaia/Loquace

Loquace-70m is part of the larger Loquace family:

- https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
- https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
- https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B, the best-performing model in its class
- https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
- https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B

## Usage

```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig
)

tokenizer = AutoTokenizer.from_pretrained(
    "cosimoiaia/Loquace-70m",
    padding_side="right",
    use_fast=True
)

# Load the model in 4-bit precision (QLoRa-style) and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-70m",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        llm_int8_has_fp16_weight=False
    )
)
```
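Once loaded, the model can be queried like any causal LM. The sketch below shows plain greedy-ish sampling; the Italian prompt and the decoding parameters are illustrative examples, not an official prompt template.

```python
# Illustrative generation example; prompt wording and sampling settings are assumptions.
prompt = "Qual è la capitale d'Italia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```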

## Training

Loquace-70m was trained on a conversational dataset of 102k question/answer pairs in Italian. The training data was assembled from translations of the original Alpaca dataset and other sources such as the OpenAssistant dataset. The model was trained for only 10,000 iterations, which took 6 hours on a single RTX 3090, kindly provided by Genesis Cloud (https://gnsiscld.co/26qhlf).
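For orientation, the sketch below outlines a QLoRa setup with `peft` and `transformers` along the lines described above. The LoRA hyperparameters, training arguments, and the `question`/`answer` field names are assumptions, not the exact Loquace recipe; the actual training code is in the GitHub repository linked above.

```python
# Minimal QLoRa fine-tuning sketch with peft + transformers.
# Hyperparameters and dataset field names are assumptions, not the exact Loquace recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "EleutherAI/pythia-70m"  # base model of Loquace-70m
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit and attach LoRA adapters.
model = AutoModelForCausalLM.from_pretrained(
    base,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Tokenize the Italian instruction data; "question"/"answer" are hypothetical field names.
data = load_dataset("cosimoiaia/Loquace-102k", split="train")
data = data.map(
    lambda ex: tokenizer(
        ex["question"] + "\n" + ex["answer"], truncation=True, max_length=512
    ),
    remove_columns=data.column_names,
)

Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        output_dir="loquace-70m-qlora",
        per_device_train_batch_size=8,
        max_steps=10_000,   # "only 10,000 iterations", as noted above
        learning_rate=2e-4,
        fp16=True,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```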

## Limitations

- Loquace-70m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.

## Dependencies

- PyTorch
- Transformers library by Hugging Face
- bitsandbytes
- QLoRa