---
language:
- ko
datasets:
- instruction
library_name: transformers
pipeline_tag: text-generation
license: mit
---

# **etri-ones-solar**

## Model Details

**Model Developers**
- The model is fine-tuned on an open instruction dataset.

**Model Architecture**
- This model is an auto-regressive language model based on the SOLAR transformer architecture.

**Base Model**
- SOLAR: https://huggingface.co/upstage/SOLAR-10.7B-v1.0

**Training Dataset**
-

---

# Model comparisons1
> coming soon

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **[...your_model_name...]** | NaN | NaN | NaN | NaN | NaN | NaN |

---

# Model comparisons2
> AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness)

| Model | Copa | Copa | HellaSwag | HellaSwag | BoolQ | BoolQ | Sentineg | Sentineg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot |
| **[...your_model_name...]** | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |

---

# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "[...your_model_repo...]"

# Load the fine-tuned SOLAR model in half precision, sharding across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
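As a usage sketch, the loaded model and tokenizer can be used for text generation as shown below. The Korean prompt and the decoding parameters are illustrative assumptions, not an official prompt template or recommended settings for this model.

```python
# Minimal generation sketch; the prompt and decoding settings below are
# illustrative assumptions, not an official prompt format for this model.
prompt = "다음 질문에 답하세요: 한국의 수도는 어디인가요?"  # "Answer the question: What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode the generated tokens back into text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---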