Upload folder using huggingface_hub
- .gitattributes +12 -0
- README.md +47 -0
- amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf +3 -0
- amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf +3 -0
- featherless-quants.png +3 -0
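
The commit title points at huggingface_hub's folder-upload API. Below is a minimal sketch of the kind of call that produces a commit like this one; the local folder path is an illustrative assumption, while the repo_id and commit message are taken from this page.

```python
# Sketch: uploading a local folder of quant files with huggingface_hub.
# The folder path here is an assumption, not the uploader's actual script.
from huggingface_hub import HfApi

api = HfApi()  # reads the token from HF_TOKEN or the local credential store
api.upload_folder(
    folder_path="./amd-Meta-Llama-3-8B_fp8_quark-GGUF",  # local dir with the .gguf files
    repo_id="featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```

`upload_folder` creates a single commit containing every file in the folder, which matches the file list above.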
.gitattributes
CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+featherless-quants.png filter=lfs diff=lfs merge=lfs -text
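
The twelve added lines route the new GGUF files and the PNG through Git LFS. gitattributes entries are gitignore-style patterns matched against paths; the rough Python sketch below approximates that matching with fnmatch, which agrees with gitattributes for the literal filenames and simple globs seen here (it is not a full gitattributes implementation).

```python
# Rough check that a path would be caught by the LFS rules above.
from fnmatch import fnmatch

lfs_patterns = [
    "*.zip",
    "*.zst",
    "*tfevents*",
    "amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf",  # one of the twelve added entries
]

def is_lfs_tracked(path: str) -> bool:
    """Approximate gitattributes matching with shell-style globs."""
    return any(fnmatch(path, pattern) for pattern in lfs_patterns)

assert is_lfs_tracked("amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf")
assert not is_lfs_tracked("README.md")
```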
README.md
ADDED
@@ -0,0 +1,47 @@
+---
+base_model: amd-Meta-Llama-3-8B_fp8_quark
+pipeline_tag: text-generation
+quantized_by: featherless-ai-quants
+---
+
+# amd-Meta-Llama-3-8B_fp8_quark GGUF Quantizations 🚀
+
+![Featherless AI Quants](./featherless-quants.png)
+
+*Optimized GGUF quantization files for enhanced model performance*
+
+> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
+---
+
+## Available Quantizations 📊
+
+| Quantization Type | File | Size |
+|-------------------|------|------|
+| IQ4_XS | [amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf) | 4276.62 MB |
+| Q2_K | [amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf) | 3031.86 MB |
+| Q3_K_L | [amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf) | 4121.74 MB |
+| Q3_K_M | [amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf) | 3832.74 MB |
+| Q3_K_S | [amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf) | 3494.74 MB |
+| Q4_K_M | [amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf) | 4692.78 MB |
+| Q4_K_S | [amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf) | 4475.28 MB |
+| Q5_K_M | [amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf) | 5467.40 MB |
+| Q5_K_S | [amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf) | 5339.90 MB |
+| Q6_K | [amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf) | 6290.44 MB |
+| Q8_0 | [amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF/blob/main/amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf) | 8145.11 MB |
+
+
+---
+
+## ⚡ Powered by [Featherless AI](https://featherless.ai)
+
+### Key Features
+
+- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
+- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
+- 📚 **Vast Compatibility** - Support for 2400+ models and counting
+- 💎 **Affordable Pricing** - Starting at just $10/month
+
+---
+
+**Links:**
+[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
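
To pull one of the files from the table above programmatically, huggingface_hub's `hf_hub_download` resolves and caches a single file from the repo. A minimal sketch; the Q4_K_M filename is just one pick from the table, and any other row works the same way.

```python
# Sketch: fetch a single quantization file from this repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="featherless-ai-quants/amd-Meta-Llama-3-8B_fp8_quark-GGUF",
    filename="amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf",
)
print(local_path)  # cached location of the ~4.7 GB GGUF file
```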
amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ac02036df3b710cdb77344450faaf844760b78af1ab31680d79e4e620939e97
+size 4484362272
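
Each .gguf entry in this commit is a Git LFS pointer (spec v1) rather than the binary itself: `oid` is the SHA-256 of the full file and `size` is its byte count. A small stdlib sketch that checks a downloaded file against the pointer above; the local filename is an assumption.

```python
# Sketch: verify a downloaded blob against the IQ4_XS LFS pointer above.
# A streaming SHA-256 plus a size check is enough to confirm integrity.
import hashlib

EXPECTED_OID = "1ac02036df3b710cdb77344450faaf844760b78af1ab31680d79e4e620939e97"
EXPECTED_SIZE = 4484362272

def verify(path: str) -> bool:
    digest = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
            total += len(chunk)
    return total == EXPECTED_SIZE and digest.hexdigest() == EXPECTED_OID

# verify("amd-Meta-Llama-3-8B_fp8_quark-IQ4_XS.gguf")  # True if the download is intact
```

The pointer blocks below follow the same three-line format.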
amd-Meta-Llama-3-8B_fp8_quark-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11305944aaad91c4faeb94e5ccf8378381944e8d567d00eac9f5b955e19c9403
+size 3179130912
amd-Meta-Llama-3-8B_fp8_quark-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:648b8e9bf97ff611a6e6aacd1d19b39c128a005ba3153a879284aeb010d7e212
+size 4321955872
amd-Meta-Llama-3-8B_fp8_quark-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e6f83f04505314672d476be07760b8bb4cd97415a0971ef3b59916c22a59ddc
+size 4018917408
amd-Meta-Llama-3-8B_fp8_quark-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:027eba046703f0d28ec5a6ed6c3ca0bd216a1fed11720dc3568974cad687af20
+size 3664498720
amd-Meta-Llama-3-8B_fp8_quark-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5eb1a8f74a9c345df722892db5b41a8543c74c55d1c6aaf1fdbbedd6a95c43d8
+size 4920733728
amd-Meta-Llama-3-8B_fp8_quark-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22fd66cedce72fb0ceb2e9f6be0281febb03fc20ed39e94305e8e4b36079b419
+size 4692668448
amd-Meta-Llama-3-8B_fp8_quark-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d87fb95b26396d4169984fca38b96f5c1c1ed13e3da90b642d6d7a49d2969e9e
+size 5732986912
amd-Meta-Llama-3-8B_fp8_quark-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff336e52a9b519a1c7af902c56eb7e9998a3228e2371b56223a49ea74c74a3e6
+size 5599293472
amd-Meta-Llama-3-8B_fp8_quark-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88456782ff4abf2efccd92217786784c68cd9069eebd48fb2c3897fd034a48fa
+size 6596005920
amd-Meta-Llama-3-8B_fp8_quark-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:682ee3430f9b5ffe9e45512f0765235bec721394336c2d49fb27bb318c941e68
+size 8540770336
featherless-quants.png
ADDED
Git LFS Details