---
license: apache-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
tags:
- Italian
- Qlora
- finetuning
- Text Generation
pipeline_tag: text-generation
---
# 🇮🇹 Loquace-Wizard-13B v0.1 🇮🇹
Loquace is an Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹
Loquace-Wizard-13B's distinctive features:
- The first 13B model specifically finetuned in Italian.
- It is quite good at following instructions in Italian.
- Responds well to prompt-engineering.
- Works well in a RAG (Retrieval Augmented Generation) setup (see the sketch after this list).
- It has been trained on a relatively raw dataset, [Loquace-102K](https://huggingface.co/datasets/cosimoiaia/Loquace-102k), using QLoRA with WizardLM-13B-Instruct as the base model.
- Training took only 8 hours on a single 3090, costing a little more than <b>2 euros</b> on a [Genesis Cloud](https://gnsiscld.co/26qhlf) GPU.
- It is <b><i>Truly Open Source</i></b>: the model, the dataset and the code to replicate the results are completely released.
- Created in a garage in the south of Italy.
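Since the model follows instructions well and slots into RAG pipelines, retrieved passages can simply be folded into the instruction before generation. The snippet below is only a minimal sketch of that idea: the function name, the Italian wording of the instruction and the example passages are illustrative assumptions, not part of the released code or training format.

```python
def build_rag_prompt(question, passages):
    # Fold the retrieved passages into the same Instruction/Response format used for inference.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "### Instruction: Rispondi alla domanda usando solo il contesto fornito.\n"
        f"Contesto:\n{context}\n"
        f"Domanda: {question}\n"
        "### Response:\n"
    )

# Hypothetical passages; any retriever (BM25, embeddings, ...) can supply them.
print(build_rag_prompt(
    "Chi era Dante Alighieri?",
    ["Dante Alighieri nacque a Firenze nel 1265.", "Scrisse la Divina Commedia."],
))
```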
The Loquace Italian LLM models are created with the goal of democratizing AI and LLMs in the Italian landscape.

<b>No more need for expensive GPUs, large funding, big corporations or ivory-tower institutions: just download the code and train on your own dataset on your own PC (or on a cheap and reliable cloud provider like [Genesis Cloud](https://gnsiscld.co/26qhlf)).</b>
## Fine-tuning Instructions:
The related code can be found at:
https://github.com/cosimoiaia/Loquace
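The repository above contains the actual training script. For orientation only, the QLoRA setup described in this card boils down to loading the base model in 4-bit and attaching LoRA adapters; the base-model identifier and the hyperparameters below are illustrative assumptions, so check the repository for the exact values.

```python
import torch
from transformers import LlamaForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the WizardLM 13B base in 4-bit (the "Q" in QLoRA).
base_model = "WizardLM/WizardLM-13B-V1.2"  # hypothetical id, see the repo for the one actually used
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = LlamaForCausalLM.from_pretrained(base_model, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach small LoRA adapters to the attention projections; only these weights are trained.
lora_config = LoraConfig(
    r=16,  # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the 13B parameters
```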
## Inference:
```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

def generate_prompt(instruction):
    # Wrap the question in the instruction format used during finetuning.
    prompt = f"""### Instruction: {instruction}
### Response:
"""
    return prompt

model_name = "cosimoiaia/Loquace-Wizard-13B"  # or a local path to the model files

# Load the model in bfloat16 and spread it across the available GPUs.
model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = True
tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

prompt = generate_prompt("Chi era Dante Alighieri?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=True, num_beams=2, top_k=50, top_p=0.95, max_new_tokens=2046, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```
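If the bfloat16 weights do not fit in VRAM, the same model can be loaded in 4-bit with bitsandbytes instead of the `from_pretrained` call above. This variant is not part of the original card, just a common alternative sketch:

```python
import torch
from transformers import LlamaForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization keeps the 13B model within a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = LlamaForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-Wizard-13B",  # same model id as above
    quantization_config=bnb_config,
    device_map="auto",
)
```

The tokenizer and the `generate` call stay exactly as in the example above.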
## Model Author:
Cosimo Iaia <cosimo.iaia@gmail.com>