
# gemma-7b-open-platypus-commercial

## Model Details

### Base Model

### Training Dataset

### Implementation Code

```python
# KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "grayhacker91/gemma-7b-open-platypus-commercial"

# Load the model in half precision and let accelerate place it
# across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
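Platypus fine-tunes are commonly prompted with an Alpaca-style instruction template. The helper below is a hypothetical sketch of that formatting; the exact template expected by this checkpoint is not documented here, so treat it as an assumption and verify against the training dataset:

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style template often used with Platypus fine-tunes.
    # Assumption: this checkpoint may expect a different format.
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Summarize the benefits of open datasets.")
```

The resulting string can then be tokenized and passed to `model.generate(...)` with the model and tokenizer loaded above.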

Model size: 8.54B params (Safetensors)
Tensor type: FP16
