---
title: llama2_local
app_file: llama.py
sdk: gradio
sdk_version: 3.37.0
---
# Llama2 on Your Local Computer
Run the new Llama2 and Llama2-Chat models on your local computer.

## Getting Started

### Installation

1. Clone the repository:
```
git clone https://github.com/thisserand/llama2_local.git
cd llama2_local
```

2. Install required dependencies:
```
pip install -r requirements.txt
```

### Prerequisites
To download the model weights and tokenizer from Hugging Face, you first need to visit the [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept their license (my request got approved within 30 minutes). Make sure to state the email address that you also use for your Hugging Face account. Once your request has been accepted, you need to go to one of the Llama2 Hugging Face repositories (e.g., the Llama2-7B model) and request access there again, as shown in the following image (access should be granted right away):
![Huggingface Llama2 Access](./images/huggingface_llama2_access.png)

Once you are all set with your access requests, the last step is to log in to your Hugging Face account in your current runtime. For this, use the following command:
```
huggingface-cli login
```
You can find your access token in your [Hugging Face account settings](https://huggingface.co/settings/tokens):
![Huggingface Access Token](./images/huggingface_access_token.png)
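
Once you are logged in, you can verify that the gated repositories are actually reachable. Here is a small sketch using the `huggingface_hub` library (installed alongside `transformers`); the repo id is just an example probe:
```
# Checks that the stored token (from `huggingface-cli login`) can see a
# gated Llama2 repo; any of the meta-llama repos works as a probe.
from huggingface_hub import model_info

try:
    model_info("meta-llama/Llama-2-7b-hf")
    print("Access granted")
except Exception as err:
    print(f"No access yet: {err}")
```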

### Windows
Make sure that you have gcc version >= 11 installed on your computer. Here are the steps described by [Kevin Anthony Kaw](https://github.com/kevinkaw) for a successful gcc setup:
- Install CMake (cmake-3.27.0-windows-x86_64.msi) to the root directory ("C:")
- Extract MinGW64 version 11.0.0 to the root directory ("C:")
- Set the environment path variables for CMake and MinGW64
- Install the Visual Studio Build Tools; the download is at the bottom of the page under the "Tools for Visual Studio" drop-down list
- In the installer, check "Desktop development with C++" and click install
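
After the setup, a quick way to confirm that the toolchain is visible on your PATH (a minimal sketch; running `gcc --version` and `cmake --version` directly in a shell works just as well):
```
# Minimal PATH sanity check for the Windows toolchain (assumes gcc and
# cmake were added to the environment path variables as described above).
import shutil
import subprocess

for tool in ("gcc", "cmake"):
    path = shutil.which(tool)
    if path is None:
        print(f"{tool}: not found on PATH")
    else:
        result = subprocess.run([tool, "--version"], capture_output=True, text=True)
        print(f"{tool}: {result.stdout.splitlines()[0]}")
```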

## Usage

### Full Precision (Original)

Llama2-7B:
```
python llama.py --model_name="meta-llama/Llama-2-7b-hf"
```
Llama2-7B-Chat:
```
python llama.py --model_name="meta-llama/Llama-2-7b-chat-hf"
```
Llama2-13B:
```
python llama.py --model_name="meta-llama/Llama-2-13b-hf"
```
Llama2-13B-Chat:
```
python llama.py --model_name="meta-llama/Llama-2-13b-chat-hf"
```
Llama2-70B:
```
python llama.py --model_name="meta-llama/Llama-2-70b-hf"
```
Llama2-70B-Chat:
```
python llama.py --model_name="meta-llama/Llama-2-70b-chat-hf"
```
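Under the hood, full-precision loading typically goes through `transformers`. The following is a minimal sketch of that common pattern, not necessarily the exact code in `llama.py`; `device_map="auto"` additionally requires the `accelerate` package:
```
# Sketch: load a full-precision (fp16) Llama2 model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # halves memory vs. the default float32
    device_map="auto",          # spreads layers across available devices
)

inputs = tokenizer("Hello, Llama2!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```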
### GPTQ Quantized
Llama2-7B:
```
python llama.py --model_name="TheBloke/Llama-2-7B-GPTQ"
```
Llama2-7B-Chat:
```
python llama.py --model_name="TheBloke/Llama-2-7b-Chat-GPTQ"
```
Llama2-13B:
```
python llama.py --model_name="TheBloke/Llama-2-13B-GPTQ"
```
Llama2-13B-Chat:
```
python llama.py --model_name="TheBloke/Llama-2-13B-Chat-GPTQ"
```
Llama2-70B:
```
python llama.py --model_name="TheBloke/Llama-2-70B-GPTQ"
```
Llama2-70B-Chat:
```
python llama.py --model_name="TheBloke/Llama-2-70B-Chat-GPTQ"
```
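GPTQ checkpoints are usually loaded through the `auto-gptq` library. A rough sketch of that pattern follows; kwargs such as `use_safetensors` and `device` depend on the checkpoint and your hardware, and `llama.py` may do this differently:
```
# Rough sketch: load a 4-bit GPTQ checkpoint on a single GPU.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_name = "TheBloke/Llama-2-7b-Chat-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    device="cuda:0",
    use_safetensors=True,  # TheBloke's repos ship safetensors weights
)

inputs = tokenizer("Hello, Llama2!", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```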
### GGML Quantized
Llama2-7B:
```
python llama.py --model_name="TheBloke/Llama-2-7B-GGML" --file_name="llama-2-7b.ggmlv3.q4_K_M.bin"
```
Llama2-7B-Chat:
```
python llama.py --model_name="TheBloke/Llama-2-7B-Chat-GGML" --file_name="llama-2-7b-chat.ggmlv3.q4_K_M.bin"
```
Llama2-13B:
```
python llama.py --model_name="TheBloke/Llama-2-13B-GGML" --file_name="llama-2-13b.ggmlv3.q4_K_M.bin"
```
Llama2-13B-Chat:
```
python llama.py --model_name="TheBloke/Llama-2-13B-Chat-GGML" --file_name="llama-2-13b-chat.ggmlv3.q4_K_M.bin"
```
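GGML files run on the CPU via a llama.cpp-style backend. Here is a minimal sketch using the `ctransformers` library (one plausible way to do it, not necessarily how `llama.py` does); `model_file` picks one quantization variant out of the repo:
```
# Sketch: run a 4-bit GGML model on the CPU with ctransformers.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGML",
    model_file="llama-2-7b-chat.ggmlv3.q4_K_M.bin",
    model_type="llama",
)
print(llm("Tell me something about llamas.", max_new_tokens=64))
```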