PawanOsman committed
Commit • b07fdee
Parent(s): ac1f799
Upload folder using huggingface_hub
Browse files
- .gitattributes +6 -0
- README.md +174 -0
- config.json +3 -0
- llama-3-70b-instruct.Q2_K.gguf +3 -0
- llama-3-70b-instruct.Q3_K_M.gguf +3 -0
- llama-3-70b-instruct.Q4_0.gguf +3 -0
- llama-3-70b-instruct.Q4_K_M.gguf +3 -0
- llama-3-70b-instruct.Q5_0.gguf +3 -0
- llama-3-70b-instruct.Q5_K_M.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llama-3-70b-instruct.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-70b-instruct.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-70b-instruct.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-70b-instruct.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-70b-instruct.Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+llama-3-70b-instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,174 @@
---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
model_name: Llama 3 70B Instruct
base_model: meta-llama/Meta-Llama-3-70B-Instruct
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}

'
quantized_by: PawanKrd
---

# Llama 3 70B Instruct - GGUF
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Meta's Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: `PawanKrd/Llama-3-70B-Instruct-GGUF` and below it, a specific filename to download, such as: `llama-3-70b-instruct.Q4_K_M.gguf`.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download PawanKrd/Llama-3-70B-Instruct-GGUF llama-3-70b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
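
If you prefer to stay in Python, the same single-file download can be done with `hf_hub_download`. A minimal sketch (the filename assumes you want the Q4_K_M quant; any other filename from this repo works the same way):

```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo into the current directory
# and return the local path to it.
model_path = hf_hub_download(
    repo_id="PawanKrd/Llama-3-70B-Instruct-GGUF",
    filename="llama-3-70b-instruct.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```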

<details>
<summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download PawanKrd/Llama-3-70B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PawanKrd/Llama-3-70B-Instruct-GGUF llama-3-70b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m llama-3-70b-instruct.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended-sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU.
# Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "PawanKrd/Llama-3-70B-Instruct-GGUF",
    model_file="llama-3-70b-instruct.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

print(llm("AI is going to"))
```
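
The llama-cpp-python route mentioned above works similarly. Here is a minimal sketch, assuming the Q4_K_M file has already been downloaded to the current directory (the path and the `n_gpu_layers` value are illustrative; use 0 for CPU-only):

```python
from llama_cpp import Llama

# Load a local GGUF file; n_gpu_layers controls GPU offload.
llm = Llama(
    model_path="./llama-3-70b-instruct.Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=32,
)

output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```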

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain; a short sketch follows the list:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
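
As a minimal sketch of the llama-cpp-python route (assuming the `langchain-community` package is installed; the model path and parameters are illustrative, and older LangChain versions import `LlamaCpp` from `langchain.llms` instead):

```python
from langchain_community.llms import LlamaCpp

# Wrap a local GGUF file as a LangChain LLM.
llm = LlamaCpp(
    model_path="./llama-3-70b-instruct.Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=32,  # 0 for CPU-only
    temperature=0.7,
)

print(llm.invoke("Explain GGUF in one sentence."))
```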

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

[Pawan.Krd's Discord server](https://discord.gg/pawan)

## Credits

This README file was initially created by [TheBloke](https://huggingface.co/TheBloke) and has been modified for this repository.
config.json
ADDED
@@ -0,0 +1,3 @@
{
  "model_type": "llama"
}
llama-3-70b-instruct.Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e44d406b775f56f9a4f64bfb8daa8c37abad876ba64d6c30be5427c87ceaa0d
size 26375620672
llama-3-70b-instruct.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63210c32acffce7b064135f351d90cddbae7b00c87f5e713a83b220269f1d7aa
size 34268006464
llama-3-70b-instruct.Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6664b7c0065785f7e1866be58734f6df231ad5e452c0bcd81d7674e1e03b8a6
size 39970244672
llama-3-70b-instruct.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a35ef59db0585015bc60c52c1a340fe9a0518ef7e18f5ea015c690a5d4f0a64
size 42520905792
llama-3-70b-instruct.Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6592837e8420e0e6180f4b1fad925f41cb56b7a6c2e16b28e8e934520d91d10
size 48657958976
llama-3-70b-instruct.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef73573e53da68cb3b88b067b77120a56ccbbb82dd3b0ac2004a8b946d77bba2
size 49950328896