---
license: bigscience-bloom-rail-1.0
language:
- en
library_name: adapter-transformers
datasets:
- JeremyArancio/lotr-book
---
# LLM-Tolkien

Write your own Lord of the Rings story!
Version 1.1 / 23 May 2023
## Description
This LLM is Bloom-3B fine-tuned with LoRA on texts extracted from the book *The Lord of the Rings*.

For the full walkthrough, see the article: *Fine-tune an LLM on your personal data: create a "The Lord of the Rings" storyteller*.
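As a rough, assumed sketch of the data preparation (the exact preprocessing lives in the article; the `"text"` column name and the chunking strategy below are assumptions, not a verbatim copy of the training script), the book is tokenized and split into blocks of `context_length` tokens for causal language modelling:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

context_length = 2048  # same value as in the training parameters below

# Assumption: the dataset exposes the raw book text in a "text" column
dataset = load_dataset("JeremyArancio/lotr-book")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

def tokenize(batch):
    # Split each document into chunks of at most context_length tokens
    outputs = tokenizer(
        batch["text"],
        truncation=True,
        max_length=context_length,
        return_overflowing_tokens=True,
        return_length=True,
    )
    # Keep only full-length chunks so every training example has the same size
    input_ids = [ids for ids, length in zip(outputs["input_ids"], outputs["length"]) if length == context_length]
    return {"input_ids": input_ids}

tokenized_dataset = dataset.map(tokenize, batched=True, remove_columns=dataset["train"].column_names)
```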
## Load the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

peft_model_id = "JeremyArancio/llm-tolkien"

# Load the base model and tokenizer
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Run the model
```python
prompt = "The hobbits were so surprised seeing their friend"

inputs = tokenizer(prompt, return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=1,
    eos_token_id=tokenizer.eos_token_id,
    early_stopping=True,
)
print(tokenizer.decode(tokens[0]))

# The hobbits were so surprised seeing their friend again that they did not
# speak. Aragorn looked at them, and then he turned to the others.</s>
```
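The `</s>` at the end of the sample above is the end-of-sequence token. If you prefer to hide it (and any other special tokens) when printing, `tokenizer.decode` accepts `skip_special_tokens=True`:

```python
# Same tokens as above, decoded without special tokens such as </s>
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```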
## Training parameters
```python
# Dataset
context_length = 2048

# Training
model_name = 'bigscience/bloom-3b'
lora_r = 16 # LoRA rank of the update matrices
lora_alpha = 32 # alpha scaling
lora_dropout = 0.05
lora_bias = "none"
lora_task_type = "CAUSAL_LM" # set this for CLM or Seq2Seq

## Trainer config
per_device_train_batch_size = 1
gradient_accumulation_steps = 1
warmup_steps = 100
num_train_epochs = 3
weight_decay = 0.1
learning_rate = 2e-4
fp16 = True
evaluation_strategy = "no"
```
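For reference, here is a minimal sketch (assumed wiring, not the verbatim training script) of how these values plug into `peft` and the `transformers` `Trainer`; `tokenizer` and `tokenized_dataset` refer to the data-preparation sketch in the Description section:

```python
import transformers
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model (the actual run may have loaded it in 8-bit; omitted here for simplicity)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters using the values above
lora_config = LoraConfig(
    r=lora_r,
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    bias=lora_bias,
    task_type=lora_task_type,
)
model = get_peft_model(model, lora_config)

trainer = transformers.Trainer(
    model=model,
    train_dataset=tokenized_dataset["train"],  # assumed: the chunked dataset from the sketch above
    args=transformers.TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=per_device_train_batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        warmup_steps=warmup_steps,
        num_train_epochs=num_train_epochs,
        weight_decay=weight_decay,
        learning_rate=learning_rate,
        fp16=fp16,
        evaluation_strategy=evaluation_strategy,
    ),
    # Causal language modelling: labels are the input ids shifted by one
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```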