
Inairtra-7B

Model Size: 7B

An experimental (and beginner) model merge using Intel's Neural Chat 7B

Model Details

Trained on: Intel Xeon E5-2693v3 | NVIDIA RTX 2080 Ti | 128 GB DDR4 (yes I'm poor :( )

The Inairtra-7B LLM is an LLM made by Bronya Rand (bronya_rand / Bronya-Rand) as a first attempt at learning to merge models with MergeKit and quantize them to GGUF. This model uses Intel's Neural Chat 7B V3.1 as the base model, along with three additional Mistral models.
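As a rough illustration of this kind of workflow (this is not the actual Inairtra-7B recipe: the non-base model names, merge method, weights, and output path below are placeholders), a MergeKit merge driven from Python might look like this:

# Hypothetical MergeKit merge sketch; placeholders only, not the real recipe.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YAML = """
models:
  - model: Intel/neural-chat-7b-v3-1      # base model named in this card
    parameters:
      weight: 0.4
  - model: some-org/mistral-model-a       # placeholder for a merged-in model
    parameters:
      weight: 0.3
  - model: some-org/mistral-model-b       # placeholder for a merged-in model
    parameters:
      weight: 0.3
merge_method: linear                      # placeholder; the actual method is not stated
dtype: float16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))

run_merge(
    merge_config,
    out_path="./inairtra-7b-merged",      # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),   # merge on GPU if one is available
        copy_tokenizer=True,              # reuse the base model's tokenizer
    ),
)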

The Inairtra-7B architecture is based on Mistral.

The models used to create the Inairtra-7B are as follows:

Prompt

The Inairtra-7B should (though this is untested) support the same prompt formats as Intel's Neural Chat, Airoboros Mistral, and Synatra.

For Intel

### System:
{system}
### User:
{usr}
### Assistant:

For Airoboros

USER: <prompt>
ASSISTANT:
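As a quick sketch of how these templates could be used with a GGUF quantization of this model via llama-cpp-python (the GGUF file name, system prompt, and generation settings below are placeholders, not part of this release):

from llama_cpp import Llama

# Load a hypothetical GGUF quantization of Inairtra-7B.
llm = Llama(model_path="./inairtra-7b.Q4_K_M.gguf", n_ctx=4096)

def neural_chat_prompt(system: str, user: str) -> str:
    # Intel Neural Chat format shown above.
    return f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

def airoboros_prompt(user: str) -> str:
    # Airoboros format shown above.
    return f"USER: {user}\nASSISTANT:"

prompt = neural_chat_prompt(
    "You are a creative writing assistant.",
    "Write a short scene set on a rainy space station.",
)
result = llm(prompt, max_tokens=256, stop=["### User:"])
print(result["choices"][0]["text"])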

Benchmarks?

I have no idea how to do them. You are welcome to make your own.

Ethical Considerations and Limitations

The intended use-case for the Inairtra-7B LLM is fictional writing/roleplay solely for personal entertainment purposes. Any other usage is outside the scope of both my intentions and the LLM itself.

The Inairtra-7B LLM has been merged with models which are uncensored/unfiltered. The LLM can produce content including, but not limited to: content that may be NSFW for those under the age of eighteen, content that may be illegal in certain states/countries, offensive content, etc.

The Inairtra-7B LLM is not designed to produce the most accurate information. Like all other AI models, it may produce incorrect information.

Disclaimer

The license on this model does not constitute legal advice. I am not responsible for the actions of third parties (services/users/etc.) who use this model and distribute it for others. Please consult an attorney before using this model for commercial purposes.
