emozilla committed
Commit 2d42031
1 Parent(s): b0d5977

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -7,9 +7,11 @@ datasets:
  This is a modified version of the original LLaMA model that incorporates Scaled Rotary Embeddings first proposed by [kaiokendev](https://kaiokendev.github.io/). By default, the model is configured to be equivalent to the original OpenLLaMA model (2048 context length). To modify, instantiate the LLaMA configuration and set `max_position_embeddings` to the desired context length. The value should be a power of 2, e.g. 2048, 4096, 8192, etc.
 
  ```python
- config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", trust_remote_code=True)
+ config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", \
+ trust_remote_code=True)
  config.max_position_embeddings = 8192
- model = AutoModelForCausalLM.from_pretrained("emozilla/open_llama_7b-scaled", config=config, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("emozilla/open_llama_7b-scaled", \
+ config=config, trust_remote_code=True)
  ```
 
  You should also set `max_model_length` on your tokenizer.
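For reference, a minimal end-to-end sketch of the usage described in the updated README, assuming the tokenizer setting it calls `max_model_length` corresponds to the standard Hugging Face tokenizer attribute `model_max_length`:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Load the config and extend the context window (a power of 2, per the README).
config = AutoConfig.from_pretrained("emozilla/open_llama_7b-scaled", trust_remote_code=True)
config.max_position_embeddings = 8192

# Load the model with the modified config.
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/open_llama_7b-scaled", config=config, trust_remote_code=True
)

# Match the tokenizer's maximum length to the new context window.
# Assumption: the README's `max_model_length` refers to the tokenizer's
# `model_max_length` attribute.
tokenizer = AutoTokenizer.from_pretrained("emozilla/open_llama_7b-scaled")
tokenizer.model_max_length = config.max_position_embeddings
```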