---
library_name: transformers
tags: []
---
# Model Card for Model ID

An LLM fine-tuned with QLoRA for the VK NLP course.
## Model Details

### Model Description
The base model is TinyLlama/TinyLlama-1.1B-Chat-v1.0, fine-tuned with QLoRA adapters applied to the k and v attention projection matrices using the PEFT library.
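A minimal sketch of the adapter configuration described above. The rank, alpha, and dropout values below are illustrative assumptions, not the card's actual training hyperparameters; only the targeted k/v projection modules come from the description.

```python
from peft import LoraConfig

# Sketch of a QLoRA-style adapter config targeting the k and v attention
# projections, as the card describes. r / lora_alpha / lora_dropout are
# assumed values for illustration only.
lora_config = LoraConfig(
    r=8,                                  # assumed LoRA rank
    lora_alpha=16,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["k_proj", "v_proj"],  # per the card: k and v matrices
    task_type="CAUSAL_LM",
)
```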
### Model Sources
- Pretrained Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Train Data: cardiffnlp/tweet_eval
## Getting Started

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" lets accelerate place the weights, so no manual .to(device)
# call is needed (and calling .to on a dispatched model can raise an error)
model = AutoModelForCausalLM.from_pretrained(f"{REPO_NAME}-tinyllama-qlora", device_map="auto")

tokenizer = AutoTokenizer.from_pretrained(f"{REPO_NAME}-tinyllama-qlora")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
```
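With the model and tokenizer loaded, inference can be run as sketched below; the prompt (which follows TinyLlama's chat format) and the generation settings are illustrative assumptions, not part of this card.

```python
# Illustrative inference sketch; assumes `model` and `tokenizer` were loaded
# as shown above. Prompt and generation parameters are example values.
prompt = "<|user|>\nIs this tweet positive or negative: 'great day!'</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```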