---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---
Model Card for Loquace-70m
# 🇮🇹 Loquace-70m 🇮🇹
An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹
The Loquace family of Italian LLMs was created as a proof-of-concept to evaluate how models of different sizes can be fine-tuned with QLoRa on an instruction dataset in a specific language.
## Model Description
Loquace-70m is the smallest model of the Loquace family. It was trained using QLoRa on a large dataset of 102k question/answer pairs
exclusively in Italian.
The related code can be found at: https://github.com/cosimoiaia/Loquace
Loquace-70m is part of the larger Loquace family:
- https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
- https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
- https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B, the best-performing model in its class.
- https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
- https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Loquace-70m is based on pythia-70m, so the generic Auto* classes apply.
tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-70m")

model = AutoModelForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-70m",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        llm_int8_has_fp16_weight=False
    )
)
```
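Once the model is loaded, text generation goes through the standard `generate` API. Below is a minimal sketch; the Italian question/answer prompt used here is an assumption for illustration, not a documented prompt template.

```python
# Hypothetical prompt; the exact instruction format used during training
# may differ from this question/answer layout.
prompt = "Domanda: Qual è la capitale d'Italia?\nRisposta:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```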
## Training
Loquace-70m was trained on a conversational dataset comprising 102k question/answer pairs in Italian.
The training data was constructed by combining translations of the original Alpaca dataset with other sources such as the OpenAssistant dataset.
The model was trained for only 10,000 iterations, taking roughly 6 hours on a single RTX 3090 kindly provided by Genesis Cloud (https://gnsiscld.co/26qhlf).
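The snippet below is a rough sketch of such a QLoRa fine-tuning loop using PEFT and bitsandbytes. It is not the actual training script: the hyperparameters, LoRA settings, prompt template, and dataset column names are assumptions (the real code lives in the GitHub repository linked above).

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Load the pythia-70m base model in 4-bit and attach LoRA adapters (QLoRa).
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
base = prepare_model_for_kbit_training(base)
model = get_peft_model(base, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                                        task_type="CAUSAL_LM"))

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
tokenizer.pad_token = tokenizer.eos_token

# Format each Italian Q/A pair into a single training string
# (column names and template are assumptions about the dataset layout).
def tokenize(example):
    text = f"Domanda: {example['instruction']}\nRisposta: {example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

data = load_dataset("cosimoiaia/Loquace-102k", split="train").map(tokenize)

Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="loquace-70m-qlora", max_steps=10_000,
                           per_device_train_batch_size=8, learning_rate=2e-4,
                           fp16=True, logging_steps=100),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```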
## Limitations
- Loquace-70m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.
## Dependencies
- PyTorch
- Transformers library by Hugging Face
- bitsandbytes
- PEFT (for QLoRa)