Update README.md
README.md
CHANGED
@@ -10,11 +10,11 @@ language:
 First download the original LLaMA weights, then convert them with the [weight conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py).
 ```python
 python src/transformers/models/llama/convert_llama_weights_to_hf.py \
-    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/LLaMA_hf
+    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/LLaMA_hf/7B
 ```
 ## Step 2: Use the [decryption script](https://github.com/icalk-nlp/EduChat/blob/main/decrypt.py) to add the delta weights onto the original LLaMA weights.
 ```python
-python ./decrypt.py --base /path/to/LLAMA_hf --target ./educhat-base-002-7b-decrypt --delta /path/to/educhat-base-002-7b
+python ./decrypt.py --base /path/to/LLAMA_hf/7B --target ./educhat-base-002-7b-decrypt --delta /path/to/educhat-base-002-7b
 ```
 # Usage example
 After converting the weights, see the usage examples at: https://github.com/icalk-nlp/EduChat#%E4%BD%BF%E7%94%A8%E7%A4%BA%E4%BE%8B
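The decrypt.py step in the hunk above merges the released delta weights into the converted base LLaMA weights. Below is a minimal sketch of that kind of delta merge, assuming both checkpoints are in Hugging Face format, share the same architecture, and fit in memory; the actual decrypt.py in the EduChat repo is the authoritative implementation, and the apply_delta helper here is purely illustrative.

```python
# Illustrative delta-merge sketch (not the actual decrypt.py from the EduChat repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def apply_delta(base_path: str, delta_path: str, target_path: str) -> None:
    # Load the converted base LLaMA weights and the released delta weights.
    base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
    delta = AutoModelForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16)

    # Add each delta tensor onto the matching base tensor (assumes identical
    # parameter names and shapes in both checkpoints).
    base_state = base.state_dict()
    delta_state = delta.state_dict()
    for name in delta_state:
        base_state[name] = base_state[name] + delta_state[name]
    base.load_state_dict(base_state)

    # Save the merged model plus the tokenizer shipped with the delta checkpoint.
    base.save_pretrained(target_path)
    AutoTokenizer.from_pretrained(delta_path).save_pretrained(target_path)

apply_delta(
    base_path="/path/to/LLaMA_hf/7B",
    delta_path="/path/to/educhat-base-002-7b",
    target_path="./educhat-base-002-7b-decrypt",
)
```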
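After the merge, the checkpoint at ./educhat-base-002-7b-decrypt should load like any other Hugging Face causal LM. The EduChat README linked above has the canonical usage example (including the recommended prompt format), so the snippet below is only a minimal sketch under that assumption; the prompt text is a placeholder.

```python
# Minimal generation sketch; see the EduChat README linked above for the
# official usage example and recommended prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./educhat-base-002-7b-decrypt"  # output of the decrypt step above
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)
model = model.to("cuda" if torch.cuda.is_available() else "cpu").eval()

prompt = "Explain the Pythagorean theorem to a middle-school student."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```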