---
base_model: chargoddard/SmolLlamix-8x101M
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
inference: false
language:
- en
license: apache-2.0
model_creator: chargoddard
model_name: SmolLlamix-8x101M
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- llama
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# chargoddard/SmolLlamix-8x101M-GGUF

Quantized GGUF model files for [SmolLlamix-8x101M](https://huggingface.co/chargoddard/SmolLlamix-8x101M) from [chargoddard](https://huggingface.co/chargoddard).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smolllamix-8x101m.fp16.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.fp16.gguf) | fp16 | 798.87 MB |
| [smolllamix-8x101m.q2_k.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.q2_k.gguf) | q2_k | 146.83 MB |
| [smolllamix-8x101m.q3_k_m.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.q3_k_m.gguf) | q3_k_m | 184.67 MB |
| [smolllamix-8x101m.q4_k_m.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.q4_k_m.gguf) | q4_k_m | 233.54 MB |
| [smolllamix-8x101m.q5_k_m.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.q5_k_m.gguf) | q5_k_m | 279.97 MB |
| [smolllamix-8x101m.q6_k.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.q6_k.gguf) | q6_k | 329.31 MB |
| [smolllamix-8x101m.q8_0.gguf](https://huggingface.co/afrideva/SmolLlamix-8x101M-GGUF/resolve/main/smolllamix-8x101m.q8_0.gguf) | q8_0 | 425.26 MB |

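To try one of these files locally, here is a minimal usage sketch (not part of the original model card) that uses `huggingface_hub` to download a file and `llama-cpp-python` to run it. The `q4_k_m` file from the table above is used as the example, and a GGUF-capable `llama-cpp-python` build is assumed:

```python
# Minimal sketch: download one quantized file and run a short completion.
# Assumes `huggingface_hub` and a GGUF-capable `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the q4_k_m quantization listed in the table above.
model_path = hf_hub_download(
    repo_id="afrideva/SmolLlamix-8x101M-GGUF",
    filename="smolllamix-8x101m.q4_k_m.gguf",
)

# Load the GGUF file and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
result = llm("The internet is", max_tokens=64)
print(result["choices"][0]["text"])
```

Any of the other quantized files in the table can be used instead by changing `filename`.
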
## Original Model Card:

This is eight copies of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) ensembled into a Mixtral model, then trained very briefly on a small subset of RedPajama. Mostly just an experiment to demonstrate that training it works at all.

It's very, very smart. Probably the smartest model ever made. Better than GPT-5. See its thoughts on the internet:

> In a world where the internet is so much more than a web browser, it's also very important to have a good understanding of how the internet works.
> The first thing we need to do is to understand what the internet looks like and what the future looks like. We can use the internet to look at the internet's history, but we don't want to go into detail about the history of the internet. The internet was created by the internet's history, which is often called the history of the internet. It was originally developed as a way for people to learn about the internet, but it wasn't until the 1960s that the internet became a place to work. Today, the internet is used in many ways, from the internet's history to the internet itself.