Model Card for Loquace-Wizard-13B
Loquace is an Italian-speaking, instruction-finetuned large language model. 🇮🇹
Loquace-Wizard-13B's distinctive features:
The Loquace Italian LLM models are created with the goal of democratizing AI and LLMs in the Italian landscape.
There is no more need for expensive GPUs, large funding, big corporations or ivory-tower institutions: just download the code and train on your own dataset, on your own PC (or on a cheap and reliable cloud provider such as Genesis Cloud).
The related code can be found at: https://github.com/cosimoiaia/Loquace
```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

def generate_prompt(instruction):
    # Alpaca-style prompt template used by Loquace
    prompt = f"""### Instruction: {instruction}
### Response:
"""
    return prompt

model_name = "."  # path to the downloaded model weights

model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = True
tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

prompt = generate_prompt("Chi era Dante Alighieri?")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    do_sample=True,
    num_beams=2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=2046,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```
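The last line above recovers the model's answer by splitting the decoded text on the literal string `Response:`. A minimal sketch of the prompt template and extraction logic, runnable without the model weights (the `extract_response` helper and the sample completion are illustrative, not part of the released code):

```python
def generate_prompt(instruction):
    # Same Alpaca-style template as in the inference example above
    prompt = f"""### Instruction: {instruction}
### Response:
"""
    return prompt

def extract_response(decoded):
    # Keep only the text after the "### Response:" marker,
    # mirroring the .split("Response:")[1].strip() step above
    return decoded.split("Response:")[1].strip()

# Simulate a decoded generation: prompt followed by a hypothetical completion
decoded = generate_prompt("Chi era Dante Alighieri?") + "Dante Alighieri era un poeta italiano."
print(extract_response(decoded))
```

Note that this extraction assumes the marker appears exactly once in the decoded output; a longer generation that repeats `### Response:` would need a more careful split.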
Contact: Cosimo Iaia <cosimo.iaia@gmail.com>