ak0327 committed 0f3cf73 (verified; parent: 609a718): Update README.md

Files changed (1): README.md (+39 −0)
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

HF_TOKEN = "..."  # set your Hugging Face access token here


def load_model(model_name):
    # QLoRA config: 4-bit NF4 quantization with bfloat16 compute
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=False,
    )

    # Load model
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map="auto",
        token=HF_TOKEN,
    )

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
        model_name,
        trust_remote_code=True,
        token=HF_TOKEN,
    )
    return model, tokenizer


model_name = "ak0327/llm-jp-3-13b-ft-5"

model, tokenizer = load_model(model_name)
datasets = load_test_datasets()  # user-defined: load your evaluation data
results = inference(model_name, datasets, model, tokenizer)  # user-defined inference loop
```
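The `load_test_datasets` and `inference` helpers above are not defined in this README. As a rough sketch only, a minimal `inference` helper might format each example into a prompt and generate greedily — the prompt template, the `input`/`task_id` field names, and the function signature below are all assumptions, not the author's actual code:

```python
def build_prompt(instruction):
    # Hypothetical instruction-style prompt; the template actually used
    # for fine-tuning this model may differ.
    return f"### 指示\n{instruction}\n### 回答\n"


def inference(model_name, datasets, model, tokenizer, max_new_tokens=256):
    # Sketch of an evaluation loop: one greedy generation per example.
    results = []
    for example in datasets:
        prompt = build_prompt(example["input"])  # assumed field name
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,
        )
        # Decode only the newly generated tokens, not the prompt.
        generated = outputs[0][inputs["input_ids"].shape[1]:]
        text = tokenizer.decode(generated, skip_special_tokens=True)
        results.append({"input": example["input"], "output": text})
    return results
```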