Model Details
Model Architecture
urLLM-KO-7B is an auto-regressive language model built on the optimized transformer architecture of Llama-2-7b.
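Since the card includes no usage snippet, below is a minimal loading sketch using the Hugging Face transformers library. The repo id "urLLM-KO-7B" is a hypothetical placeholder inferred from the model name; the actual Hub path is not given in this section.

```python
# Minimal usage sketch; the repo id below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "urLLM-KO-7B"  # hypothetical; replace with the actual Hub path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # 7B parameters fit in ~14 GB at fp16
    device_map="auto",
)

prompt = "대한민국의 수도는"  # "The capital of South Korea is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```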
Training Corpus
The model was trained on selected datasets from the Modu Corpus and Korean Wikipedia (approximately 28 GB in total).
Vocab Expansion
The tokenizer vocabulary was expanded from Llama-2's base 32,000 tokens to 51,385 tokens to better cover Korean text.
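The card does not describe how the expansion was performed. The sketch below shows the general recipe, not the authors' exact procedure: new Korean tokens are appended to the base Llama-2 tokenizer and the model's embedding matrices are resized to match. Real expansions typically train a new SentencePiece model on the target corpus and merge it into the base tokenizer; this sketch uses the simpler `add_tokens` route, and `korean_tokens` is an illustrative placeholder.

```python
# Generic vocab-expansion sketch (assumed recipe, not the authors' code).
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)  # 32,000 base tokens
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder list; reaching 51,385 tokens implies ~19,385 additions,
# presumably subwords learned from the Korean training corpus.
korean_tokens = ["안녕", "하세요"]
num_added = tokenizer.add_tokens(korean_tokens)

# New embedding rows are randomly initialized and must be trained
# (e.g., via continued pretraining on the Korean corpus).
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
```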
Model Card Contact
For errors or additional questions about details in this model card, contact pkchae@urp.kr.