---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---

Model Card for Loquace-7B

# 🇮🇹 Loquace-7B 🇮🇹 

An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹

The Loquace family of Italian LLMs was created as a proof of concept to evaluate how models of different sizes can be fine-tuned with QLoRA on an instruction dataset in a specific language.

The QLoRA fine-tuning method (https://github.com/artidoro/qlora) significantly lowers the resource requirements compared to other available methods, making it possible to run the process on significantly larger datasets while still using consumer GPUs and achieving high accuracy.

## Model Description

Loquace-7B is the first 7B Italian Large Language Model trained with QLoRA on a large dataset of 102k question/answer pairs written exclusively in Italian, using Falcon-7B as the base model.

The related code can be found at:
https://github.com/cosimoiaia/Loquace


Loquace-7B is part of the larger Loquace family:

- https://huggingface.co/cosimoiaia/Loquace-70m (based on pythia-70m)
- https://huggingface.co/cosimoiaia/Loquace-410m (based on pythia-410m)
- https://huggingface.co/cosimoiaia/Loquace-7B (based on Falcon-7B)
- https://huggingface.co/cosimoiaia/Loquace-12B (based on pythia-12B)
- https://huggingface.co/cosimoiaia/Loquace-20B (based on gpt-neox-20B)

## Usage


```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig
)

tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-7B", padding_side="right", use_fast=True)

# Load the model with 4-bit bitsandbytes quantization so it fits on a consumer GPU
model = AutoModelForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-7B",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        llm_int8_has_fp16_weight=False
    )
)
```
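
Once the model is loaded, generation follows the standard Transformers API. The sketch below assumes an Alpaca-style instruction prompt in Italian; the exact template used during fine-tuning may differ, so treat the prompt wording as illustrative:

```python
# Illustrative only: the prompt template below is an assumption (Alpaca-style, in Italian).
prompt = (
    "Di seguito è riportata un'istruzione che descrive un compito. "
    "Scrivi una risposta che completi adeguatamente la richiesta.\n\n"
    "### Istruzione:\nQual è la capitale dell'Italia?\n\n"
    "### Risposta:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```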


## Training

Loquace-7B was trained on a conversational dataset comprising 102k question/answer pairs in Italian.
The training data was built by combining translations of the original Alpaca dataset with other sources such as the OpenAssistant dataset.
The model was trained for only 3000 iterations and took 16 hours on a single RTX 3090, kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf)
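
For reference, a minimal sketch of how a QLoRA fine-tune of this kind is typically configured with peft and bitsandbytes is shown below. The hyperparameters (rank, alpha, dropout) are illustrative assumptions, not the exact values used to train Loquace-7B:

```python
# Illustrative QLoRA setup; hyperparameters are assumptions, not the original training config.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # 4-bit frozen base weights
)
base = prepare_model_for_kbit_training(base)  # prepare quantized model for adapter training

lora_config = LoraConfig(
    r=16,                                # assumed adapter rank
    lora_alpha=32,                       # assumed scaling factor
    target_modules=["query_key_value"],  # Falcon attention projection layers
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)  # only the small LoRA adapters are trained
model.print_trainable_parameters()
```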

## Limitations

- Loquace-7B may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.

## Dependencies

- PyTorch
- Transformers library by Hugging Face
- bitsandbytes
- QLoRA