
llama-2-koen-story-13b

llama-2-koen-story-13b is a model trained on raw Korean novel text, based on beomi/llama-2-koen-13b.
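
Below is a minimal sketch of loading the model for story generation with the transformers library. The repository ID is a placeholder (this card does not state the Hub ID), and the prompt and generation settings are assumptions, not values from this card.

```python
# Minimal loading/generation sketch with Hugging Face transformers.
# The repository ID below is a placeholder; replace it with the actual Hub ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/llama-2-koen-story-13b"  # assumption: actual Hub ID may differ

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 so the 13B model fits on a single large GPU
    device_map="auto",
)

prompt = "μ˜›λ‚  μ˜›λ‚ , ν•œ λ§ˆμ„μ—"  # "Once upon a time, in a village..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```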

Training data

llama-2-koen-story-13b was trained on roughly 167MB of Korean novel corpus. The main datasets are as follows.

| Source | Size (MB) | Link |
| --- | --- | --- |
| Korean novel corpus | 115.0 | |
| Gongu Madang Korean classical literature corpus | 53.0 | https://gongu.copyright.or.kr/ |
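
The card does not describe preprocessing. As an illustration only, the sketch below shows one common way such a raw-text corpus could be tokenized and packed into fixed-length blocks for causal-LM training; the file paths, block size, and use of the datasets library are all assumptions.

```python
# Hypothetical preprocessing sketch: pack raw novel text into fixed-length blocks.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-koen-13b")
raw = load_dataset("text", data_files={"train": "corpus/*.txt"})  # hypothetical paths

block_size = 2048  # assumption

def tokenize(batch):
    return tokenizer(batch["text"])

def group_texts(batch):
    # Concatenate all token ids and split into equal blocks, dropping the remainder.
    ids = sum(batch["input_ids"], [])
    total = (len(ids) // block_size) * block_size
    chunks = [ids[i:i + block_size] for i in range(0, total, block_size)]
    return {"input_ids": chunks, "labels": [c[:] for c in chunks]}

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
train_ds = tokenized.map(group_texts, batched=True, remove_columns=tokenized.column_names)
```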

Training

llama-2-koen-story-13b was further trained from beomi/llama-2-koen-13b with QLoRA, using the hyperparameters below (a configuration sketch follows the list).

  • lora_alpha: 16
  • lora_dropout: 0.05
  • lora_r: 32
  • target_modules: q_proj, v_proj
  • epoch: 3
  • learning_rate: 3e-4