apepkuss79 committed
Commit b104736
1 Parent(s): 1c20e1e
Upload README.md with huggingface_hub
README.md CHANGED
@@ -72,7 +72,7 @@ tags:
   --ctx-size 128000
 ```
 
-
+## Quantized GGUF Models
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
@@ -95,6 +95,6 @@ tags:
 | [Llama-3.1-Nemotron-70B-Reward-HF-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Llama-3.1-Nemotron-70B-Reward-HF-GGUF/blob/main/Llama-3.1-Nemotron-70B-Reward-HF-f16-00002-of-00005.gguf) | f16 | 16 | 29.6 GB| |
 | [Llama-3.1-Nemotron-70B-Reward-HF-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Llama-3.1-Nemotron-70B-Reward-HF-GGUF/blob/main/Llama-3.1-Nemotron-70B-Reward-HF-f16-00003-of-00005.gguf) | f16 | 16 | 29.6 GB| |
 | [Llama-3.1-Nemotron-70B-Reward-HF-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Llama-3.1-Nemotron-70B-Reward-HF-GGUF/blob/main/Llama-3.1-Nemotron-70B-Reward-HF-f16-00004-of-00005.gguf) | f16 | 16 | 29.6 GB| |
-| [Llama-3.1-Nemotron-70B-Reward-HF-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Llama-3.1-Nemotron-70B-Reward-HF-GGUF/blob/main/Llama-3.1-Nemotron-70B-Reward-HF-f16-00005-of-00005.gguf) | f16 | 16 | 22.2 GB| |
+| [Llama-3.1-Nemotron-70B-Reward-HF-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Llama-3.1-Nemotron-70B-Reward-HF-GGUF/blob/main/Llama-3.1-Nemotron-70B-Reward-HF-f16-00005-of-00005.gguf) | f16 | 16 | 22.2 GB| |
 
 *Quantized with llama.cpp 3932.*
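For context on the table rows touched above: the f16 weights are split across five `f16-0000N-of-00005.gguf` shards that belong to one model. Below is a minimal sketch of fetching the shards and merging them into a single file. It assumes a local llama.cpp build (release b3932 or similar) that ships the `llama-gguf-split` tool, the `huggingface-cli` utility from `huggingface_hub`, and that the first shard follows the series naming shown in the table; the output filename is illustrative.

```bash
# Download only the f16 shards from the repo (pattern and target dir are illustrative).
huggingface-cli download second-state/Llama-3.1-Nemotron-70B-Reward-HF-GGUF \
  --include "Llama-3.1-Nemotron-70B-Reward-HF-f16-*.gguf" \
  --local-dir .

# Merge the five shards into one GGUF file: point the tool at the first shard
# and it picks up the rest of the -of-00005 sequence automatically.
llama-gguf-split --merge \
  Llama-3.1-Nemotron-70B-Reward-HF-f16-00001-of-00005.gguf \
  Llama-3.1-Nemotron-70B-Reward-HF-f16-merged.gguf
```

Merging is optional with recent llama.cpp builds, which can load a split model directly when given the path to the `-00001-of-00005` shard.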