Kohsaku committed · commit 1855d93 (verified) · 1 parent: 1e7836a

first commit

Files changed (1): README.md (+45 −3)
README.md CHANGED
@@ -6,17 +6,59 @@ tags:
 - unsloth
 - gemma2
 - trl
-license: apache-2.0
+license: gemma
 language:
-- en
+- en
+datasets:
+- llm-jp/magpie-sft-v1.0
 ---

 # Uploaded model

 - **Developed by:** Kohsaku
-- **License:** apache-2.0
+- **License:** Gemma 2 License
 - **Finetuned from model:** google/gemma-2-9b

 This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+# Sample Use
+
+```python
+from unsloth import FastLanguageModel
+import torch
+
+#@title For README verification
+model_name = "Kohsaku/gemma-2-9b-finetune-2"
+
+max_seq_length = 1024
+dtype = None          # None lets Unsloth auto-detect the dtype
+load_in_4bit = True
+
+model, tokenizer = FastLanguageModel.from_pretrained(
+    model_name = model_name,
+    max_seq_length = max_seq_length,
+    dtype = dtype,
+    load_in_4bit = load_in_4bit,
+    token = HF_TOKEN,  # your Hugging Face access token
+)
+FastLanguageModel.for_inference(model)
+
+text = "自然言語処理とは何か"  # "What is natural language processing?"
+tokenized_input = tokenizer.encode(text, add_special_tokens=True, return_tensors="pt").to(model.device)
+
+# Optionally build an attention mask:
+# attention_mask = torch.ones(tokenized_input.shape, device=model.device)
+
+with torch.no_grad():
+    output = model.generate(
+        tokenized_input,
+        max_new_tokens = 1024,
+        use_cache = True,
+        do_sample = False,
+        repetition_penalty = 1.2,
+    )[0]
+print(tokenizer.decode(output))
+```
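The generation call above passes `do_sample=False` with `repetition_penalty=1.2`. As a rough intuition for what that penalty does, here is a minimal pure-Python sketch of the rescaling rule used by Hugging Face's `RepetitionPenaltyLogitsProcessor`: for every token that already appears in the generated sequence, a positive logit is divided by the penalty and a negative logit is multiplied by it, so repeated tokens become less likely either way. The helper `apply_repetition_penalty` is a hypothetical illustration, not part of the unsloth or transformers API.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    # Sketch of the repetition-penalty rule (hypothetical helper):
    # tokens already present in generated_ids get their logit shrunk
    # toward "less likely" - divided by the penalty when positive,
    # multiplied by it when negative. Other logits are untouched.
    out = list(logits)
    for tok in set(generated_ids):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

# Tokens 0 and 1 were already generated, so both are penalized;
# token 2 never appeared and keeps its original logit.
print(apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1]))
```

With greedy decoding (`do_sample=False`), this rescaling is the only mechanism discouraging the model from looping on the same tokens, which is why a value slightly above 1.0 is commonly paired with it.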