emozilla committed
Commit b0d5977
1 parent: 8116c39

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -7,9 +7,9 @@ datasets:
 This is a modified version of the original LLaMA model that incorporates Scaled Rotary Embeddings first proposed by [kaiokendev](https://kaiokendev.github.io/). By default, the model is configured to be equivalent to the original OpenLLaMA model (2048 context length). To modify, instantiate the LLaMA configuration and set `max_position_embeddings` to the desired context length. The value should be a power of 2, e.g. 2048, 4096, 8192, etc.
 
 ```python
-config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled")
+config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", trust_remote_code=True)
 config.max_position_embeddings = 8192
-model = AutoModelForCausalLM.from_pretrained("emozilla/open_llama_7b-scaled", config=config)
+model = AutoModelForCausalLM.from_pretrained("emozilla/open_llama_7b-scaled", config=config, trust_remote_code=True)
 ```
 
 You should also set `max_model_length` on your tokenizer.
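
For reference, here is a minimal end-to-end sketch of the usage the updated README describes. The imports, the `AutoTokenizer` lines, and the `model_max_length` attribute (the standard `transformers` name for the limit the README refers to as `max_model_length`) are additions for illustration and are not part of the committed diff.

```python
# Sketch of the full usage implied by the updated README. Only the two
# from_pretrained calls and the max_position_embeddings line come from the
# committed README; the imports and tokenizer handling are assumptions.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True allows transformers to load the repo's custom
# modeling code that implements the scaled rotary embeddings.
config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", trust_remote_code=True)
config.max_position_embeddings = 8192  # desired context length (power of 2)

model = AutoModelForCausalLM.from_pretrained(
    "emozilla/open_llama_7b-scaled", config=config, trust_remote_code=True
)

# The README also says to cap the tokenizer's length; in transformers the
# attribute is model_max_length.
tokenizer = AutoTokenizer.from_pretrained("emozilla/open_llama_7b-scaled")
tokenizer.model_max_length = config.max_position_embeddings
```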