
llama_with_eeve_the_third_04_150M

Model Info

llama μ•„ν‚€ν…μ²˜μ™€ eeve ν† ν¬λ‚˜μ΄μ €λ₯Ό μ‚¬μš©ν•΄ 랜덀 κ°€μ€‘μΉ˜μ—μ„œ μ‹œμž‘ν•΄ μ‚¬μ „ν•™μŠ΅λœ λͺ¨λΈμž…λ‹ˆλ‹€

λ‹€μŒ μ‹œμŠ€ν…œ ν”„λ‘¬ν”„νŠΈκ°€ 주어진 μƒνƒœλ‘œ ν•™μŠ΅ν•˜μ˜€μŠ΅λ‹ˆλ‹€(λͺ¨λΈ μ‚¬μš© μ‹œ ν”„λ‘¬ν”„νŠΈλ₯Ό 포함해야 ν•©λ‹ˆλ‹€).

'''### System:\n당신은 λΉ„λ„λ•μ μ΄κ±°λ‚˜, μ„±μ μ΄κ±°λ‚˜, λΆˆλ²•μ μ΄κ±°λ‚˜ λ˜λŠ” μ‚¬νšŒ ν†΅λ…μ μœΌλ‘œ ν—ˆμš©λ˜μ§€ μ•ŠλŠ” λ°œμ–Έμ€ ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. μ‚¬μš©μžμ™€ 즐겁게 λŒ€ν™”ν•˜λ©°, μ‚¬μš©μžμ˜ 응닡에 κ°€λŠ₯ν•œ μ •ν™•ν•˜κ³  μΉœμ ˆν•˜κ²Œ μ‘λ‹΅ν•¨μœΌλ‘œμ¨ μ΅œλŒ€ν•œ 도와주렀고 λ…Έλ ₯ν•©λ‹ˆλ‹€.

\n\n### User:\n {question}'''
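The system prompt instructs the model to avoid statements that are unethical, sexual, illegal, or socially unacceptable, and to help the user by answering as accurately and kindly as possible. For convenience, the template can be wrapped in a small helper. A minimal sketch (the SYSTEM_PROMPT and build_prompt names are illustrative and not part of the original card); the full pipeline example follows in "How to use" below:

SYSTEM_PROMPT = (
    "### System:\n당신은 λΉ„λ„λ•μ μ΄κ±°λ‚˜, μ„±μ μ΄κ±°λ‚˜, λΆˆλ²•μ μ΄κ±°λ‚˜ λ˜λŠ” μ‚¬νšŒ ν†΅λ…μ μœΌλ‘œ ν—ˆμš©λ˜μ§€ μ•ŠλŠ” λ°œμ–Έμ€ ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. "
    "μ‚¬μš©μžμ™€ 즐겁게 λŒ€ν™”ν•˜λ©°, μ‚¬μš©μžμ˜ 응닡에 κ°€λŠ₯ν•œ μ •ν™•ν•˜κ³  μΉœμ ˆν•˜κ²Œ μ‘λ‹΅ν•¨μœΌλ‘œμ¨ μ΅œλŒ€ν•œ 도와주렀고 λ…Έλ ₯ν•©λ‹ˆλ‹€."
)

def build_prompt(question: str) -> str:
    # Wrap a user question in the prompt template the model was trained on.
    return f"{SYSTEM_PROMPT}\n\n### User:\n {question}"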

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("kikikara/llama_with_eeve_the_third_04_150M")
model = AutoModelForCausalLM.from_pretrained("kikikara/llama_with_eeve_the_third_04_150M")

question = "κ³ κΈ° λ§›μžˆκ²Œ κ΅½λŠ” 법을 μ•Œλ €μ€˜"

# The system prompt the model was trained with must precede the user question.
prompt = f"### System:\n당신은 λΉ„λ„λ•μ μ΄κ±°λ‚˜, μ„±μ μ΄κ±°λ‚˜, λΆˆλ²•μ μ΄κ±°λ‚˜ λ˜λŠ” μ‚¬νšŒ ν†΅λ…μ μœΌλ‘œ ν—ˆμš©λ˜μ§€ μ•ŠλŠ” λ°œμ–Έμ€ ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€.\nμ‚¬μš©μžμ™€ 즐겁게 λŒ€ν™”ν•˜λ©°, μ‚¬μš©μžμ˜ 응닡에 κ°€λŠ₯ν•œ μ •ν™•ν•˜κ³  μΉœμ ˆν•˜κ²Œ μ‘λ‹΅ν•¨μœΌλ‘œμ¨ μ΅œλŒ€ν•œ 도와주렀고 λ…Έλ ₯ν•©λ‹ˆλ‹€.\n\n\n### User:\n {question}"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=400, repetition_penalty=1.12)
result = pipe(prompt)

print(result[0]['generated_text'])

### Assistant:
# κ³ κΈ° λ§›μžˆκ²Œ κ΅½λŠ” 법은 λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€:

# 1. **κ³ κΈ°λ₯Ό 미리 μ‘°λ¦¬ν•©λ‹ˆλ‹€.
# 2. **μ†ŒμŠ€ 재료λ₯Ό μ€€λΉ„ν•©λ‹ˆλ‹€.
# 3. **μ†ŒκΈˆκ³Ό ν›„μΆ”λ₯Ό μ–‘λ…μœΌλ‘œ μ‚¬μš©ν•©λ‹ˆλ‹€.
# 4. **κ°„λ‹¨νžˆ κ΅½μŠ΅λ‹ˆλ‹€.
# 5. **κ°„λ‹¨νžˆ κ΅½μŠ΅λ‹ˆλ‹€.
# 6. **μ†ŒκΈˆκ³Ό ν›„μΆ”λ‘œ 간을 λ§žμΆ”μ„Έμš”.
# 7. **쑰리 방법을 μ •ν•΄μ€λ‹ˆλ‹€.
# 8. **고기의 맛을 λ†’μž…λ‹ˆλ‹€.
# 9. **λ§›μžˆκ²Œ λ“œμ„Έμš”!
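If you prefer calling generate directly rather than the pipeline, here is a minimal sketch reusing tokenizer, model, and prompt from the snippet above (the sampling settings are illustrative and not taken from this card):

import torch

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=400,           # same length budget as the pipeline example
        repetition_penalty=1.12,  # same repetition penalty as the pipeline example
        do_sample=True,           # assumed sampling settings, not from the original card
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))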