
This model uses Llama 2 7B as a backbone; three variants fine-tuned on Orca-family datasets were merged to produce it.

The three models were combined via a weighted merge, with the highest weight assigned to the model with the best ARC and MMLU scores.
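The weighted merge described above can be sketched as a linear average of model parameters. This is a minimal illustration, not the actual merge script; the weight values and parameter layout below are assumptions for the example.

```python
def merge_state_dicts(state_dicts, weights):
    """Linearly merge model parameters with per-model weights.

    state_dicts: list of {param_name: list of floats} (toy stand-in for
    real tensors); weights are normalized to sum to 1 before averaging.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(norm, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Toy example with three "models"; the heaviest weight (0.5) goes to the
# first model, mirroring the best-ARC/MMLU-gets-most-weight scheme.
sds = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}, {"w": [5.0, 6.0]}]
merged = merge_state_dicts(sds, weights=[0.5, 0.25, 0.25])
# merged["w"] == [2.5, 3.5]
```

In practice a merge like this is applied tensor-by-tensor over the models' full state dicts.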

First: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k using the NEFTune method.

Second: Llama 2 7B fine-tuned on the SlimOrca dataset.

Third: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k (without NEFTune).
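NEFTune, used for the first model above, adds uniform noise to the token embeddings during fine-tuning, scaled by alpha / sqrt(L * d) where L is the sequence length and d the embedding dimension. A minimal NumPy sketch of the noise injection (alpha = 5 is an assumed value, not stated in this card):

```python
import numpy as np

def neftune_noise(embeddings: np.ndarray, alpha: float = 5.0) -> np.ndarray:
    """Add NEFTune-style uniform noise to a (seq_len, dim) embedding matrix.

    Each element gets Uniform(-1, 1) noise scaled by alpha / sqrt(L * d),
    so the perturbation shrinks for longer sequences and wider embeddings.
    """
    seq_len, dim = embeddings.shape
    scale = alpha / np.sqrt(seq_len * dim)
    noise = np.random.uniform(-1.0, 1.0, size=embeddings.shape) * scale
    return embeddings + noise

# Per-element noise magnitude is bounded by alpha / sqrt(L * d).
emb = np.zeros((128, 4096), dtype=np.float32)
noisy = neftune_noise(emb, alpha=5.0)
bound = 5.0 / np.sqrt(128 * 4096)
assert np.all(np.abs(noisy) <= bound)
```

In a real training loop this noise is applied only during training (not at inference), typically by hooking the model's embedding layer; libraries such as TRL expose this via a `neftune_noise_alpha` setting.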

Official benchmark results will be added once they are available.

Model size: 6.74B parameters (Safetensors, FP16).
