
Quantization made by Richard Erkhov.

- Github
- Discord
- Request more models

Lunar_10.7B - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Lunar_10.7B.Q2_K.gguf | Q2_K | 3.73GB |
| Lunar_10.7B.IQ3_XS.gguf | IQ3_XS | 4.14GB |
| Lunar_10.7B.IQ3_S.gguf | IQ3_S | 4.37GB |
| Lunar_10.7B.Q3_K_S.gguf | Q3_K_S | 4.34GB |
| Lunar_10.7B.IQ3_M.gguf | IQ3_M | 4.51GB |
| Lunar_10.7B.Q3_K.gguf | Q3_K | 4.84GB |
| Lunar_10.7B.Q3_K_M.gguf | Q3_K_M | 4.84GB |
| Lunar_10.7B.Q3_K_L.gguf | Q3_K_L | 5.26GB |
| Lunar_10.7B.IQ4_XS.gguf | IQ4_XS | 5.43GB |
| Lunar_10.7B.Q4_0.gguf | Q4_0 | 5.66GB |
| Lunar_10.7B.IQ4_NL.gguf | IQ4_NL | 5.72GB |
| Lunar_10.7B.Q4_K_S.gguf | Q4_K_S | 5.7GB |
| Lunar_10.7B.Q4_K.gguf | Q4_K | 6.02GB |
| Lunar_10.7B.Q4_K_M.gguf | Q4_K_M | 6.02GB |
| Lunar_10.7B.Q4_1.gguf | Q4_1 | 6.27GB |
| Lunar_10.7B.Q5_0.gguf | Q5_0 | 6.89GB |
| Lunar_10.7B.Q5_K_S.gguf | Q5_K_S | 6.89GB |
| Lunar_10.7B.Q5_K.gguf | Q5_K | 7.08GB |
| Lunar_10.7B.Q5_K_M.gguf | Q5_K_M | 7.08GB |
| Lunar_10.7B.Q5_1.gguf | Q5_1 | 7.51GB |
| Lunar_10.7B.Q6_K.gguf | Q6_K | 8.2GB |
| Lunar_10.7B.Q8_0.gguf | Q8_0 | 10.62GB |
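
A minimal sketch of downloading one of the quants above and running it with llama-cpp-python. The repo id below is a placeholder (this card does not state the GGUF repository path); the filename is taken directly from the table.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo id -- substitute the actual repository hosting these GGUF files.
REPO_ID = "<quantizer>/Lunar_10.7B-gguf"
FILENAME = "Lunar_10.7B.Q4_K_M.gguf"  # 6.02GB entry from the table above

# Download the chosen quant into the local Hugging Face cache.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short greeting from Lunar.", max_tokens=64)
print(out["choices"][0]["text"])
```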

Original model description:

---
language:
- en
license: cc-by-nc-sa-4.0
model-index:
- name: Lunar_10.7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.87
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Lunar_10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.85
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Lunar_10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.23
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Lunar_10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.51
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Lunar_10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Lunar_10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.68
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Lunar_10.7B
      name: Open LLM Leaderboard
---

This model is a SLERP merge of a finetune of my own with Sensualize-Solar-10.7B (https://huggingface.co/Sao10K/Sensualize-Solar-10.7B), created by Sao10K (https://huggingface.co/Sao10K).
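
For readers unfamiliar with SLERP merging, the sketch below illustrates the spherical-linear-interpolation operation itself on two flattened weight tensors using NumPy. It is only a conceptual illustration of the math, not the actual merge recipe or tooling used to build Lunar.

```python
# Conceptual illustration of SLERP (spherical linear interpolation) between
# two flattened weight tensors -- not the actual recipe used to build Lunar.
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Blend w_a (t=0) and w_b (t=1) along the arc between their directions."""
    a = w_a / (np.linalg.norm(w_a) + eps)
    b = w_b / (np.linalg.norm(w_b) + eps)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between the two directions
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * w_a + t * w_b
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * w_a + (np.sin(t * omega) / sin_omega) * w_b

# Toy example: blend two random "layers" halfway between the two models.
layer_a, layer_b = np.random.randn(4096), np.random.randn(4096)
merged = slerp(layer_a, layer_b, t=0.5)
```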


Lunar was produced through a variety of methods, with the goal of serving as a companion bot capable of both intimacy and conversation.

GGUF here: https://huggingface.co/jeiku/Lunar_10.7B_GGUF

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
| ------ | ----- |
| Avg. | 67.25 |
| AI2 Reasoning Challenge (25-Shot) | 65.87 |
| HellaSwag (10-Shot) | 84.85 |
| MMLU (5-Shot) | 64.23 |
| TruthfulQA (0-shot) | 53.51 |
| Winogrande (5-shot) | 81.37 |
| GSM8k (5-shot) | 53.68 |