tjadamlee committed on
Commit 3bc9bdc
1 Parent(s): 74cb0f6

Update README.md

Files changed (1)
  1. README.md +10 -4
README.md CHANGED
@@ -56,15 +56,21 @@ cae7b4ee8d1ad4e4402632bb0600cc17 ./tokenizer_config.json.ef7ef410b9b909949e96f1
 
 2. Decrypt the files using the scripts in https://github.com/LianjiaTech/BELLE/tree/main/models
 
- You can use the following command in Bash:
+ You can use the following command in Bash.
+ Replace "/path/to_encrypted" with the path where you stored the encrypted files,
+ "/path/to_original_llama_13B" with the path where you stored the original LLaMA 13B weights,
+ and "/path/to_finetuned_model" with the path where you want the recovered fine-tuned model to be saved:
+ 
 ```bash
- for f in "encrypted"/*; \
+ mkdir /path/to_finetuned_model
+ for f in "/path/to_encrypted"/*; \
 do if [ -f "$f" ]; then \
- python3 decrypt.py "$f" "/path/to_original_llama_13B/consolidated.00.pth" "result/"; \
+ python3 decrypt.py "$f" "/path/to_original_llama_13B/consolidated.00.pth" "/path/to_finetuned_model/"; \
 fi; \
 done
 ```
 
+ 
 After executing the aforementioned command, you will obtain the following files.
 
 ```
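If you would rather drive the decryption from Python than from Bash, a minimal sketch of the same loop is shown below. It assumes decrypt.py from the BELLE repository is in the working directory and reuses the same placeholder paths, so substitute them before running.

```python
# Sketch only: a Python equivalent of the Bash decryption loop above.
# Assumes decrypt.py (from https://github.com/LianjiaTech/BELLE/tree/main/models)
# is in the working directory; replace the placeholder paths as described above.
import subprocess
from pathlib import Path

encrypted_dir = Path("/path/to_encrypted")
llama_ckpt = "/path/to_original_llama_13B/consolidated.00.pth"
output_dir = Path("/path/to_finetuned_model")

output_dir.mkdir(parents=True, exist_ok=True)

for f in sorted(encrypted_dir.iterdir()):
    if f.is_file():
        # Each encrypted file is decrypted against the original LLaMA 13B weights.
        subprocess.run(["python3", "decrypt.py", str(f), llama_ckpt, f"{output_dir}/"], check=True)
```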
 
@@ -113,7 +119,7 @@ After you decrypt the files, BELLE-LLAMA-13B-2M can be easily loaded with LlamaForCausalLM
 from transformers import LlamaForCausalLM, AutoTokenizer
 import torch
 
- ckpt = './result/'
+ ckpt = '/path/to_finetuned_model/'
 device = torch.device('cuda')
 model = LlamaForCausalLM.from_pretrained(ckpt, device_map='auto', low_cpu_mem_usage=True)
 tokenizer = AutoTokenizer.from_pretrained(ckpt)
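Once the decrypted checkpoint loads as above, generation follows the standard transformers pattern. A minimal sketch is below; the "Human: ...\n\nAssistant:" prompt template and the sampling settings are assumptions rather than part of this README, so check the BELLE repository for the recommended prompt format.

```python
# Sketch only: generating a reply with the decrypted BELLE-LLAMA-13B-2M checkpoint.
# The prompt template and sampling settings are assumptions; consult the BELLE
# repository for the recommended prompt format.
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

ckpt = '/path/to_finetuned_model/'
device = torch.device('cuda')
model = LlamaForCausalLM.from_pretrained(ckpt, device_map='auto', low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

prompt = "Human: Hello! Please introduce yourself.\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        top_p=0.85,
        temperature=0.35,
    )

# Strip the prompt tokens so only the model's reply is printed.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```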