
This model is fine-tuned from mistral-7b-v0.1.

Load the model directly with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("devhyun88/hyun-mistral-7b-orca-platypus-refine")
model = AutoModelForCausalLM.from_pretrained("devhyun88/hyun-mistral-7b-orca-platypus-refine")
```
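
Building on the snippet above, here is a minimal generation sketch. Loading in FP16 with `device_map="auto"` and the instruction-style prompt format are assumptions, since the card does not document a prompt template.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "devhyun88/hyun-mistral-7b-orca-platypus-refine"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",          # place layers on available GPU(s)/CPU
)

# The prompt format below is an assumption; the card does not document one.
prompt = "### Instruction:\nExplain what fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```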

Model size: 7.24B params
Tensor type: FP16 (Safetensors)
Downloads last month: 1,210