Tags: Text Generation · Transformers · Safetensors · Japanese · English · llama · text-generation-inference · Inference Endpoints
ddyuudd committed on
Commit
9cf2255
1 Parent(s): 237ca1a

Update README.md

Files changed (1)
  1. README.md +27 -0
README.md CHANGED
@@ -15,6 +15,30 @@ DPOには[Low-Rank Adaptation (LoRA)](https://huggingface.co/docs/peft/conceptua
 ## Requirements, Usage, Chat Template
 
 Same as [cyberagent/calm2-7b-chat](https://huggingface.co/cyberagent/calm2-7b-chat).
+It can be run with the same code and prompts.
+
+```python
+import transformers
+from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
+
+assert transformers.__version__ >= "4.34.1"
+
+model = AutoModelForCausalLM.from_pretrained("cyberagent/calm2-7b-chat-dpo-experimental", device_map="auto", torch_dtype="auto")
+tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat-dpo-experimental")
+streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+
+prompt = """USER: AIによって私達の暮らしはどのように変わりますか?
+ASSISTANT: """
+
+token_ids = tokenizer.encode(prompt, return_tensors="pt")
+output_ids = model.generate(
+    input_ids=token_ids.to(model.device),
+    max_new_tokens=300,
+    do_sample=True,
+    temperature=0.8,
+    streamer=streamer,
+)
+```
 
 ## Experimental Results
 
@@ -46,6 +70,9 @@ DPOには[Low-Rank Adaptation (LoRA)](https://huggingface.co/docs/peft/conceptua
 | stem | 6.3 | 6.2 |
 | writing | 7.7 | 9.1 |
 
+## Releases
+
+1.0: v1 release (Jan 24, 2024)
 
 ## Author
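The added usage example builds its prompt as a raw `USER:`/`ASSISTANT:` string. For multi-turn conversations the same layout can presumably be extended turn by turn; the sketch below is a minimal helper under that assumption (the `build_prompt` name is hypothetical, and the multi-turn layout is extrapolated from the single-turn example in the diff, not confirmed by the model card):

```python
def build_prompt(turns):
    """Format (role, text) turns into the USER:/ASSISTANT: layout used in
    the diff's example, ending with an open ASSISTANT: turn for generation.

    Note: the multi-turn layout is an assumption extrapolated from the
    single-turn example; verify against the base model's chat template.
    """
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append("ASSISTANT: ")  # leave the assistant turn open
    return "\n".join(lines)

# Single-turn prompt matching the example in the diff
prompt = build_prompt([("USER", "AIによって私達の暮らしはどのように変わりますか?")])
print(prompt)
```

The resulting string can then be passed to `tokenizer.encode` exactly as in the example above.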