
SlimOrca-Llama-3-8B: A General Purpose Intelligent Model

This model is trained on a refined version of SlimOrca made available by the Open-Orca team. It performs well at various types of general-purpose content generation, such as Q&A (including multiple choice), articles from summaries, sentiment analysis, context & hypothesis, reviews, erotic story generation, etc. To a certain extent it can also generate uncensored content. Please be careful while generating uncensored content, as you are responsible for what you generate.

It is trained on 517,981 conversation sets, each containing 2 conversations. I have shared this data.

I have used the ChatML prompt format.

All credit goes to the Open-Orca team for releasing the SlimOrca dataset.

Check the examples given below.

Training:

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 2 epochs took almost 114 hours. The Axolotl & DeepSpeed codebases were used for training. The entire dataset was trained on Meta's Llama-3.
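For readers who want to reproduce a similar run, an Axolotl training config might look roughly like the sketch below. This is an illustrative fragment only, not the actual config used for this model; the dataset path, sequence length, and DeepSpeed file are assumptions.

```yaml
# Hypothetical Axolotl config sketch (not the author's actual config)
base_model: meta-llama/Meta-Llama-3-8B

datasets:
  - path: Open-Orca/SlimOrca   # assumed dataset location
    type: sharegpt             # SlimOrca uses conversation-style records

chat_template: chatml          # matches the ChatML prompt format described above

sequence_len: 4096             # assumed context length
num_epochs: 2                  # 2 epochs, as stated in the card
micro_batch_size: 1            # assumed; tune for your GPUs
gradient_accumulation_steps: 8 # assumed

bf16: true                     # matches the BF16 tensor type of the release
deepspeed: deepspeed_configs/zero3.json  # assumed ZeRO stage
```

Launching would then be a matter of pointing Axolotl at this file (e.g. `accelerate launch -m axolotl.cli.train config.yml`), adjusted for your own hardware.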

This is a fully fine-tuned model. Links to quantized models are given below.

GGUF & Exllama

GGUF: Link

Exllama: Link

Special thanks to Bartowski for quantizing my model.

Example Prompt:

This model uses the ChatML prompt format.

```
<|im_start|>system
You are a helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.
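As a convenience, the ChatML template above can be assembled programmatically. The helper below is a minimal sketch (the function name is my own, not part of the model or any library); it simply interpolates the system and user messages into the template and leaves the string open at the assistant turn, ready for generation.

```python
def build_chatml_prompt(user_prompt: str,
                        system_prompt: str = "You are a helpful Assistant.") -> str:
    """Assemble a ChatML-formatted prompt string for this model.

    The returned string ends at the assistant header, so the model
    continues generating the assistant's reply from there.
    """
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("What is the capital of France?")
print(prompt)
```

You would pass the resulting string to your inference backend of choice (e.g. a transformers pipeline or a llama.cpp server) as the raw prompt.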

I want to say a special thanks to the open-source community for helping & guiding me to better understand AI/model development.

Thank you for your love & support.

Examples

Example 1 (image)

Example 2 (image)

Example 3 (image)

Example 4 (image)
