beomi committed
Commit a149c32 • 1 Parent(s): 9c1df25

Update README.md

Files changed (1): README.md +40 -0
README.md CHANGED
@@ -12,3 +12,43 @@ tags:
 ---

 Experimental Repository :)
+
+ Here's a quick test:
+
+ ```python
+ from transformers import pipeline
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained(
+     'beomi/Mistral-Ko-Inst-dev',
+     torch_dtype='auto',
+     device_map='auto',
+ )
+ tokenizer = AutoTokenizer.from_pretrained('beomi/Mistral-Ko-Inst-dev')
+
+ pipe = pipeline(
+     'text-generation',
+     model=model,
+     tokenizer=tokenizer,
+     do_sample=True,
+     max_new_tokens=350,
+     return_full_text=False,
+     no_repeat_ngram_size=6,
+     eos_token_id=1,  # not yet tuned to generate </s>; use <s> instead
+ )
+
+
+ def gen(x):
+     chat = tokenizer.apply_chat_template([
+         {"role": "user", "content": x},
+         # {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
+         # {"role": "user", "content": "Do you have mayonnaise recipes? please say in Korean."}
+     ], tokenize=False)
+     print(pipe(chat)[0]['generated_text'].strip())
+
+
+ gen("스타벅스와 스타벅스 코리아의 차이는?")  # "What's the difference between Starbucks and Starbucks Korea?"
+
+ # (sample output)
+ # 스타벅스는 전 세계적으로 운영하고 있는 커피 전문사이다. 한국에는 스타벅스 코리아라는 이름으로 운영되고 있다.
+ # 스타벅스 코리아는 대한민국에 입점한 이후 2009년과 2010년에 두 차례의 브랜드과의 재검토 및 새로운 디자인을 통해 새로운 브랜드다. 커피 전문의 프리미엄 이미지를 유지하고 있고, 스타벅스 코리아는 한국을 대표하는 프리미엄 커피 전문 브랜드을 만들고 있다.
+ # (translation) Starbucks is a coffee company that operates worldwide. In Korea it runs under the name Starbucks Korea.
+ # Since entering Korea, Starbucks Korea reworked the brand and its design in 2009 and 2010. It keeps a premium specialty-coffee image, and Starbucks Korea is building a premium coffee brand representing Korea.
+ ```
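For readers unfamiliar with `apply_chat_template`: it renders the message list into the model's instruction format before generation. Below is a minimal sketch of a Mistral-style `[INST]` template, written here only for illustration — the `render_chat` helper is hypothetical, and the authoritative template is the one bundled with the tokenizer.

```python
# Hypothetical sketch of a Mistral-style chat template.
# The real template ships with the tokenizer; this only illustrates
# the [INST] wrapping used by Mistral-family instruct models.
def render_chat(messages, bos="<s>", eos="</s>"):
    text = bos  # prompt starts with the BOS token
    for m in messages:
        if m["role"] == "user":
            # user turns are wrapped in [INST] ... [/INST]
            text += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            # assistant turns are appended verbatim, closed by EOS
            text += f" {m['content']}{eos}"
    return text

print(render_chat([{"role": "user", "content": "Hello"}]))
# -> <s>[INST] Hello [/INST]
```

This also shows why the snippet above sets `eos_token_id=1` (the `<s>` token): since the model is not yet tuned to emit `</s>`, generation is stopped when it starts a new `<s>`-prefixed turn instead.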