
Note: We're looking for funds; it's getting harder to keep our free inference up. We've been serving a lot of GPU time to KoboldAI, and we've trained our LLMs on CPUs because our GPUs can't handle it. We're looking for funds to replace our two GTX 1060 3GB cards so we can provide better, faster inference, train models more efficiently, and, overall, keep this startup running. Any help is appreciated:

Note about this model: We're abandoning this model because we have no money left to make a model this large perform even average on the most basic tasks. If you want to support us, consider donating via the links above. This will help us create models from scratch, ensuring they perform best at what they are built for.


Before you download this model, you can try it out on our website for free, without any login. Inference may be slow; you can support us by donating via the links above. Try out Atheria on:

About this model:

  • Name: Atheria
  • Version: 0.15
  • IsStable: No
  • IsUsable: Yes
  • Param Count: ~7B (6.91B)
  • Architecture: Llama
  • Type: Text-Generation
  • Finetuned on: XT_Atheria-V0.1
  • GGUF Quant: Q8 (a loading sketch follows this list)
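
For reference, here is a minimal, non-authoritative sketch of fetching and loading the Q8 GGUF with huggingface_hub and llama-cpp-python; the repo id and filename are assumptions, so check the repository's file listing for the exact names.

```python
# Hedged sketch: download the Q8 GGUF and load it with llama-cpp-python.
# The filename is hypothetical; check the repository's file list for the real one.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="XeTute/Atheria-V0.15",        # assumed model repo id
    filename="Atheria-V0.15.Q8_0.gguf",    # hypothetical GGUF filename
)

# A ~7B model at Q8 needs roughly 8 GB of free RAM (or VRAM with GPU offload).
llm = Llama(model_path=model_path, n_ctx=2048)
```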

Improvements:

  • Better at instruction following, though still not good enough.
  • Better NLP

Plans for the next version:

  • Average NLP
  • Average Instructing
  • Excellent creativity

Scope of use:

  • Math
  • Basic Coding
  • Reasoning
  • NLP
  • Basic roleplaying
  • General Q & A.
  • Private use

Out of scope use:

  • Illegal Q & A
  • Production

The prompt format used is Vicuna. The model may make more mistakes than expected; we will fix this when we get the newer GPUs.
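
As a rough example, the Vicuna prompt format can be applied like this with llama-cpp-python; the system prompt, file path, and sampling settings below are assumptions, not fixed values for this model.

```python
# Minimal sketch of Vicuna-style prompting with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="path/to/atheria-q8.gguf", n_ctx=2048)  # hypothetical path

system = "A chat between a curious user and an artificial intelligence assistant."
user_msg = "Explain what GGUF quantization is in one sentence."

# Vicuna-style template: SYSTEM \n USER: ... \n ASSISTANT:
prompt = f"{system}\nUSER: {user_msg}\nASSISTANT:"

out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```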

Check out our

We wish you a memorable chat with Atheria!
