
SOLAR-10.7B-v1.0-Instruct

Model Details

Model Developers

  • Myeonghoon Kim

Model Architecture

  • SOLAR-10.7B-v1.0-Instruct is an auto-regressive language model based on the Llama 2 transformer architecture (see the config sketch below).
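
For reference, the architecture hyperparameters can be read directly from the hosted config. The following is a minimal sketch, assuming the placeholder repository id is replaced with this model's actual repo.

from transformers import AutoConfig

# Inspect the Llama-style architecture hyperparameters of the checkpoint
config = AutoConfig.from_pretrained("[...your_model_repo...]")
print(config.model_type)          # architecture family reported by the config
print(config.num_hidden_layers)   # depth of the transformer stack
print(config.hidden_size)         # model (embedding) width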

Base Model

Training Dataset


Model Comparisons 1

Ko-LLM leaderboard (11/23; link)

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| [...your_model_name...] | NaN | NaN | NaN | NaN | NaN | NaN |

Model Comparisons 2

AI-Harness evaluation (link)

| Model | Copa (0-shot) | Copa (5-shot) | HellaSwag (0-shot) | HellaSwag (5-shot) | BoolQ (0-shot) | BoolQ (5-shot) | Sentineg (0-shot) | Sentineg (5-shot) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SOLAR-10.7B-v1.0-Instruct | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
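
The AI-Harness numbers above come from EleutherAI's lm-evaluation-harness. The snippet below is a minimal sketch for reproducing them, assuming a recent harness release that exposes the simple_evaluate API and the KoBEST task names; the repo placeholder stands in for this model's actual repository id.

import lm_eval

# Evaluation sketch; the task names and API here are assumptions about a recent
# lm-evaluation-harness release, not the exact setup behind the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=[...your_model_repo...],dtype=float16",
    tasks=["kobest_copa", "kobest_hellaswag", "kobest_boolq", "kobest_sentineg"],
    num_fewshot=0,  # rerun with num_fewshot=5 for the 5-shot columns
)
print(results["results"])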

Implementation Code

# Load SOLAR-10.7B-v1.0-Instruct
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "[...your_model_repo...]"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,  # load weights in half precision
    device_map="auto",          # place layers on available devices automatically
)
tokenizer = AutoTokenizer.from_pretrained(repo)
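
A quick generation example may be useful once the model and tokenizer are loaded. This is a minimal sketch; the prompt below is an arbitrary example, not the documented instruction template for this checkpoint.

# Generation sketch reusing the model and tokenizer loaded above
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))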
