# Model

base_model: yanolja/KoSOLAR-10.7B-v0.2
# Dataset

- Collected publicly available data
- Deduplicated using the algorithm from *Deduplicating Training Data Makes Language Models Better*
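The paper above performs exact-substring deduplication with suffix arrays and near-duplicate filtering with MinHash. As an illustration only (not the pipeline used for this model), a minimal near-duplicate filter based on character-shingle Jaccard similarity might look like this; the function names and the 0.8 threshold are invented for this sketch:

```python
def shingles(text: str, n: int = 8) -> set:
    """Character n-gram shingles of a document."""
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def dedup(docs: list, threshold: float = 0.8) -> list:
    """Greedily keep documents whose shingle Jaccard similarity
    to every already-kept document stays below `threshold`."""
    kept, kept_shingles = [], []
    for doc in docs:
        s = shingles(doc)
        is_dup = any(
            len(s & t) / len(s | t) >= threshold for t in kept_shingles
        )
        if not is_dup:
            kept.append(doc)
            kept_shingles.append(s)
    return kept
```

This brute-force version is quadratic in the number of documents; at training-corpus scale the paper's suffix-array and MinHash-based methods are required instead.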
# Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "jingyeom/KoSoLAR-10.7B-v0.2_1.4_dedup"

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # optional: load in half precision to save memory
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
# Benchmark