abhi-mosaic committed
Commit: 4e61cee
Parent: 487de08

Update README.md

Files changed (1):
  1. README.md (+10, -6)
README.md CHANGED
@@ -49,14 +49,19 @@ It includes options for many training efficiency features such as [FlashAttentio
 
 ```python
 import transformers
-model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-instruct', trust_remote_code=True, torch_dtype=torch.bfloat16)
+model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-instruct', trust_remote_code=True)
 ```
+Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
+This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
+`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
 
-To use the optimized triton implementation of FlashAttention, you can load with `attn_impl='triton'` and move the model to `bfloat16` like so:
-
+To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
 ```python
-model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-instruct', trust_remote_code=True, torch_dtype=torch.bfloat16, attn_impl='triton')
-model.to(device='cuda:0', dtype=torch.bfloat16)
+config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-instruct', trust_remote_code=True)
+config.attn_config['attn_impl'] = 'triton'
+
+model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-instruct', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
+model.to(device='cuda:0')
 ```
 
 Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
@@ -65,7 +70,6 @@ Although the model was trained with a sequence length of 2048, ALiBi enables use
 config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
 config.update({"max_seq_len": 4096})
 model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', config=config, trust_remote_code=True)
-```
 
 This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
 
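
For reference, the loading steps from the updated README can be combined into one end-to-end script. The sketch below is illustrative only and is not part of the diff: it assumes a CUDA device with the triton/FlashAttention dependencies installed, uses the EleutherAI/gpt-neox-20b tokenizer mentioned in the README, and the prompt text and generation settings are arbitrary placeholders.

```python
# Illustrative sketch only (not part of the README diff above).
# It combines the loading snippet added in this commit with a basic
# generation call; the prompt and generation settings are placeholders.
import torch
import transformers

name = 'mosaicml/mpt-7b-instruct'

# Enable the triton FlashAttention implementation, as described in the updated README.
# Assumes the triton/flash-attention dependencies are installed and a GPU is available.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to(device='cuda:0')

# The README notes the model was trained with the EleutherAI/gpt-neox-20b tokenizer.
tokenizer = transformers.AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

# Placeholder prompt; adjust to your use case.
inputs = tokenizer('Here is a short explanation of ALiBi:\n', return_tensors='pt').to('cuda:0')
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```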