JeremyArancio committed on
Commit
571225f
1 Parent(s): 930ee4f

Update README.md

Files changed (1)
  1. README.md +63 -2
README.md CHANGED
@@ -9,8 +9,69 @@ datasets:
  <h1 style='text-align: left '>LLM-Tolkien</h1>
  <h3 style='text-align: left '>Write your own Lord Of The Rings story!</h3>

- Version 1.0 / 18 May 2023

  This LLM is fine-tuned on [Bloom-3B](https://huggingface.co/bigscience/bloom-3b) with texts extracted from the book "[The Lord of the Rings](https://gosafir.com/mag/wp-content/uploads/2019/12/Tolkien-J.-The-lord-of-the-rings-HarperCollins-ebooks-2010.pdf)".

- [The complete article](https://medium.com/@jeremyarancio/fine-tune-an-llm-on-your-personal-data-create-a-the-lord-of-the-rings-storyteller-6826dd614fa9)
  <h1 style='text-align: left '>LLM-Tolkien</h1>
  <h3 style='text-align: left '>Write your own Lord Of The Rings story!</h3>

+ *Version 1.1 / 23 May 2023*
+
+ # Description

  This LLM is fine-tuned on [Bloom-3B](https://huggingface.co/bigscience/bloom-3b) with texts extracted from the book "[The Lord of the Rings](https://gosafir.com/mag/wp-content/uploads/2019/12/Tolkien-J.-The-lord-of-the-rings-HarperCollins-ebooks-2010.pdf)".

+ The article: [Fine-tune an LLM on your personal data: create a “The Lord of the Rings” storyteller.](https://medium.com/@jeremyarancio/fine-tune-an-llm-on-your-personal-data-create-a-the-lord-of-the-rings-storyteller-6826dd614fa9)
+
+ [GitHub repository](https://github.com/jeremyarancio/llm-rpg/tree/main)
+
+ # Load the model
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftConfig, PeftModel
+
+ # Load the PEFT config to find the base model
+ config = PeftConfig.from_pretrained("JeremyArancio/llm-tolkien")
+ model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
+ tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+ # Load the LoRA adapter on top of the base model
+ model = PeftModel.from_pretrained(model, "JeremyArancio/llm-tolkien")
+ ```
+
+ # Run the model
+
+ ```python
+ prompt = "The hobbits were so surprised seeing their friend"
+
+ inputs = tokenizer(prompt, return_tensors="pt")
+ tokens = model.generate(
+     **inputs,
+     max_new_tokens=100,
+     temperature=1,
+     eos_token_id=tokenizer.eos_token_id,
+     early_stopping=True
+ )
+ print(tokenizer.decode(tokens[0]))
+
+ # The hobbits were so surprised seeing their friend again that they did not
+ # speak. Aragorn looked at them, and then he turned to the others.</s>
+ ```
+
+ # Training parameters
+
+ ```python
+ # Dataset
+ context_length = 2048
+
+ # Training
+ model_name = 'bigscience/bloom-3b'
+ lora_r = 16  # LoRA rank
+ lora_alpha = 32  # alpha scaling
+ lora_dropout = 0.05
+ lora_bias = "none"
+ lora_task_type = "CAUSAL_LM"  # set this for CLM or Seq2Seq
+
+ ## Trainer config
+ per_device_train_batch_size = 1
+ gradient_accumulation_steps = 1
+ warmup_steps = 100
+ num_train_epochs = 3
+ weight_decay = 0.1
+ learning_rate = 2e-4
+ fp16 = True
+ evaluation_strategy = "no"
+ ```
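For readers reproducing the run, the hyperparameters listed in the card map onto `peft.LoraConfig` and `transformers.TrainingArguments` roughly as follows. This is a hypothetical sketch, not the repository's actual training script; `output_dir` is a placeholder, and the linked GitHub repository is the authoritative source.

```python
# Hypothetical wiring of the card's hyperparameters into peft/transformers.
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter configuration, mirroring the lora_* variables above
lora_config = LoraConfig(
    r=16,                    # lora_r
    lora_alpha=32,           # lora_alpha
    lora_dropout=0.05,       # lora_dropout
    bias="none",             # lora_bias
    task_type="CAUSAL_LM",   # lora_task_type
)

# Trainer configuration, mirroring the Trainer config variables above
training_args = TrainingArguments(
    output_dir="llm-tolkien",  # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    warmup_steps=100,
    num_train_epochs=3,
    weight_decay=0.1,
    learning_rate=2e-4,
    fp16=True,
    evaluation_strategy="no",
)
```

These objects would then be passed to `get_peft_model(model, lora_config)` and `Trainer(model=..., args=training_args, ...)` respectively.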