Jordancole21 committed
Commit: 03c6c5b
Parent(s): d815692

Upload folder using huggingface_hub

Files changed:
- README.md +196 -0
- adapt_tokenizer.py +41 -0
- attention.py +519 -0
- blocks.py +46 -0
- config.json +52 -0
- configuration_mpt.py +118 -0
- flash_attn_triton.py +479 -0
- generation_config.json +5 -0
- hf_prefixlm_converter.py +415 -0
- is_torch_version.py +56 -0
- meta_init_context.py +94 -0
- modeling_mpt.py +351 -0
- norm.py +56 -0
- param_init_fns.py +181 -0
- pytorch_model-00001-of-00002.bin +3 -0
- pytorch_model-00002-of-00002.bin +3 -0
- pytorch_model.bin.index.json +201 -0
- special_tokens_map.json +5 -0
- tokenizer.json +0 -0
- tokenizer_config.json +9 -0
README.md
ADDED
@@ -0,0 +1,196 @@
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
---

# MPT-7B-StoryWriter-65k+

MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)

This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

May 5, 2023

## Model License

Apache 2.0

## Documentation

* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!


## How to Use

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.

It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  trust_remote_code=True
)
```

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  config=config,
  torch_dtype=torch.bfloat16,
  trust_remote_code=True
)
model.to(device='cuda:0')
```

Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:

```python
config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  trust_remote_code=True
)
config.update({"max_seq_len": 83968})
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  config=config,
  trust_remote_code=True
)
```

This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
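
Putting these pieces together, the following is a minimal generation sketch; the prompt, sampling settings, and `max_new_tokens` value are illustrative rather than tuned recommendations:

```python
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  torch_dtype=torch.bfloat16,
  trust_remote_code=True
)
model.to(device='cuda:0')
model.eval()

# Tokenize a story prompt, generate a continuation, and decode it back to text.
prompt = "It was a dark and stormy night, and the lighthouse keeper"
inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```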

## Community-Created Resources

These were not created by MosaicML, but you may find them useful. These links are not an endorsement of the creators or their content.

- [Oobabooga Running MPT-7B-Storywriter](https://youtu.be/QVVb6Md6huA)
- [NEW MPT-7B-StoryWriter CRUSHES GPT-4!](https://www.youtube.com/watch?v=O9Y_ZdsuKWQ&t=649s) - Has a long section on running locally using Oobabooga

## Example Epilogue

The full text of _The Great Gatsby_ (67,873 tokens) was fed to the model, followed by the text "EPILOGUE".

**TheRealFitzgerald233**:

> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE

**MPT-7B-StoryWriter**:

> I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
> And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
> And I knew that day that I didn't love her.
>
> That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
> I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
> They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
> There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
> But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
> And I knew that he had not changed at all.

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases


| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |

## PreTraining Data

For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

### Training Configuration

This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.


## Acknowledgements

This model was finetuned by Alex Trott and the MosaicML NLP team.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.


## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}
```
adapt_tokenizer.py
ADDED
@@ -0,0 +1,41 @@
from typing import Union
from transformers import AutoTokenizer, PreTrainedTokenizer, PreTrainedTokenizerFast
Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
NUM_SENTINEL_TOKENS: int = 100

def adapt_tokenizer_for_denoising(tokenizer: Tokenizer):
    """Adds sentinel tokens and padding token (if missing).

    Expands the tokenizer vocabulary to include sentinel tokens
    used in mixture-of-denoiser tasks as well as a padding token.

    All added tokens are added as special tokens. No tokens are
    added if sentinel tokens and padding token already exist.
    """
    sentinels_to_add = [f'<extra_id_{i}>' for i in range(NUM_SENTINEL_TOKENS)]
    tokenizer.add_tokens(sentinels_to_add, special_tokens=True)
    if tokenizer.pad_token is None:
        tokenizer.add_tokens('<pad>', special_tokens=True)
        tokenizer.pad_token = '<pad>'
        assert tokenizer.pad_token_id is not None
    sentinels = ''.join([f'<extra_id_{i}>' for i in range(NUM_SENTINEL_TOKENS)])
    _sentinel_token_ids = tokenizer(sentinels, add_special_tokens=False).input_ids
    tokenizer.sentinel_token_ids = _sentinel_token_ids

class AutoTokenizerForMOD(AutoTokenizer):
    """AutoTokenizer + Adaptation for MOD.

    A simple wrapper around AutoTokenizer to make instantiating
    an MOD-adapted tokenizer a bit easier.

    MOD-adapted tokenizers have sentinel tokens (e.g., <extra_id_0>),
    a padding token, and a property to get the token ids of the
    sentinel tokens.
    """

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        """See `AutoTokenizer.from_pretrained` docstring."""
        tokenizer = super().from_pretrained(*args, **kwargs)
        adapt_tokenizer_for_denoising(tokenizer)
        return tokenizer
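
For illustration, a minimal sketch of how the wrapper above is meant to be used, assuming this file is importable as `adapt_tokenizer`; the base checkpoint name is only an example:

```python
# Illustrative usage sketch; the module path and checkpoint are assumptions.
from adapt_tokenizer import AutoTokenizerForMOD, NUM_SENTINEL_TOKENS

tokenizer = AutoTokenizerForMOD.from_pretrained('EleutherAI/gpt-neox-20b')
# The wrapper adds <extra_id_0> ... <extra_id_99> as special tokens, adds a
# <pad> token if one is missing, and records the sentinel token ids.
assert len(tokenizer.sentinel_token_ids) == NUM_SENTINEL_TOKENS
print(tokenizer.pad_token, tokenizer.sentinel_token_ids[:3])
```
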
attention.py
ADDED
@@ -0,0 +1,519 @@
"""Attention layers."""
import math
import warnings
from typing import Optional, Dict, Any, NamedTuple, Protocol, Tuple, Union
import torch
import torch.nn as nn
from einops import rearrange
from packaging import version
from torch import nn
from torch.utils.checkpoint import checkpoint
from .norm import LPLayerNorm
from .is_torch_version import is_torch_version

class PastKeyValue(NamedTuple):
    key: torch.Tensor
    value: torch.Tensor

class AttnFnOutput(NamedTuple):
    attns: torch.Tensor
    attn_probs: Optional[torch.Tensor]

class AttnFn(Protocol):
    def __call__(
        self,
        query: torch.Tensor,
        key: torch.Tensor,
        value: torch.Tensor,
        n_heads: int,
        softmax_scale: Optional[float] = None,
        attn_bias: Optional[torch.Tensor] = None,
        key_padding_mask: Optional[torch.ByteTensor] = None,
        is_causal = False,
        dropout_p = 0.0,
        training = False,
        needs_weights = False,
        multiquery = False,
    ) -> AttnFnOutput: ...

class AttnFnCheckpointed(Protocol):
    def __call__(
        self,
        query: torch.Tensor,
        key: torch.Tensor,
        value: torch.Tensor,
        n_heads: int,
        softmax_scale: Optional[float],
        attn_bias: Optional[torch.Tensor],
        key_padding_mask: Optional[torch.ByteTensor],
        is_causal: bool,
        dropout_p: float,
        training: bool,
        needs_weights: bool,
    ) -> AttnFnOutput: ...

class AttnOutput(NamedTuple):
    projected_context: torch.Tensor
    attn_weights: Optional[torch.Tensor]
    past_key_value: Union[PastKeyValue, Tuple, None]

class Attn(Protocol):
    def __call__(
        self,
        x: torch.Tensor,
        past_key_value: Union[PastKeyValue, Tuple, None] = None,
        attn_bias: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.ByteTensor] = None,
        is_causal = True,
        needs_weights = False,
    ) -> AttnOutput: ...

def _reset_is_causal(num_query_tokens: int, num_key_tokens: int, original_is_causal: bool):
    if original_is_causal and num_query_tokens != num_key_tokens:
        if num_query_tokens != 1:
            raise NotImplementedError('MPT does not support query and key with different number of tokens, unless number of query tokens is 1.')
        else:
            return False
    return original_is_causal

def scaled_multihead_dot_product_attention(
    query: torch.Tensor,
    key: torch.Tensor,
    value: torch.Tensor,
    n_heads: int,
    softmax_scale: Optional[float] = None,
    attn_bias: Optional[torch.Tensor] = None,
    key_padding_mask: Optional[torch.ByteTensor] = None,
    is_causal = False,
    dropout_p = 0.0,
    training = False,
    needs_weights = False,
    multiquery = False,
) -> AttnFnOutput:
    q = rearrange(query, 'b s (h d) -> b h s d', h=n_heads)
    k = rearrange(key, 'b s (h d) -> b h d s', h=1 if multiquery else n_heads)
    v = rearrange(value, 'b s (h d) -> b h s d', h=1 if multiquery else n_heads)
    min_val = torch.finfo(q.dtype).min
    (b, _, s_q, d) = q.shape
    s_k = k.size(-1)
    if softmax_scale is None:
        softmax_scale = 1 / math.sqrt(d)
    attn_weight = q.matmul(k) * softmax_scale
    if attn_bias is not None:
        if attn_bias.size(-1) != 1 and attn_bias.size(-1) != s_k or (attn_bias.size(-2) != 1 and attn_bias.size(-2) != s_q):
            raise RuntimeError(f'attn_bias (shape: {attn_bias.shape}) is expected to broadcast to shape: {attn_weight.shape}.')
        attn_weight = attn_weight + attn_bias
    if key_padding_mask is not None:
        if attn_bias is not None:
            warnings.warn('Propagating key_padding_mask to the attention module ' + 'and applying it within the attention module can cause ' + 'unnecessary computation/memory usage. Consider integrating ' + 'into attn_bias once and passing that to each attention ' + 'module instead.')
        attn_weight = attn_weight.masked_fill(~key_padding_mask.view((b, 1, 1, s_k)), min_val)
    if is_causal:
        s = max(s_q, s_k)
        causal_mask = attn_weight.new_ones(s, s, dtype=torch.float16)
        causal_mask = causal_mask.tril()
        causal_mask = causal_mask.to(torch.bool)
        causal_mask = ~causal_mask
        causal_mask = causal_mask[-s_q:, -s_k:]
        attn_weight = attn_weight.masked_fill(causal_mask.view(1, 1, s_q, s_k), min_val)
    attn_weight = torch.softmax(attn_weight, dim=-1)
    if dropout_p:
        attn_weight = torch.nn.functional.dropout(attn_weight, p=dropout_p, training=training, inplace=True)
    out = attn_weight.matmul(v)
    out = rearrange(out, 'b h s d -> b s (h d)')
    if needs_weights:
        return AttnFnOutput(out, attn_weight)
    return AttnFnOutput(out, None)

def check_valid_inputs(*tensors, valid_dtypes=[torch.float16, torch.bfloat16]):
    for tensor in tensors:
        if tensor.dtype not in valid_dtypes:
            raise TypeError(f'tensor.dtype={tensor.dtype!r} must be in valid_dtypes={valid_dtypes!r}.')
        if not tensor.is_cuda:
            raise TypeError(f'Inputs must be cuda tensors (tensor.is_cuda={tensor.is_cuda!r}).')

def flash_attn_fn(
    query: torch.Tensor,
    key: torch.Tensor,
    value: torch.Tensor,
    n_heads: int,
    softmax_scale: Optional[float] = None,
    attn_bias: Optional[torch.Tensor] = None,
    key_padding_mask: Optional[torch.ByteTensor] = None,
    is_causal = False,
    dropout_p = 0.0,
    training = False,
    needs_weights = False,
    multiquery = False,
) -> AttnFnOutput:
    try:
        from flash_attn import bert_padding, flash_attn_interface
    except:
        raise RuntimeError('Please install flash-attn==1.0.3.post0')
    check_valid_inputs(query, key, value)
    if attn_bias is not None:
        raise NotImplementedError(f'attn_bias not implemented for flash attn.')
    (batch_size, seqlen) = query.shape[:2]
    if key_padding_mask is None:
        key_padding_mask = torch.ones_like(key[:, :, 0], dtype=torch.bool)
    query_padding_mask = key_padding_mask[:, -query.size(1):]
    (query_unpad, indices_q, cu_seqlens_q, max_seqlen_q) = bert_padding.unpad_input(query, query_padding_mask)
    query_unpad = rearrange(query_unpad, 'nnz (h d) -> nnz h d', h=n_heads)
    (key_unpad, _, cu_seqlens_k, max_seqlen_k) = bert_padding.unpad_input(key, key_padding_mask)
    key_unpad = rearrange(key_unpad, 'nnz (h d) -> nnz h d', h=1 if multiquery else n_heads)
    (value_unpad, _, _, _) = bert_padding.unpad_input(value, key_padding_mask)
    value_unpad = rearrange(value_unpad, 'nnz (h d) -> nnz h d', h=1 if multiquery else n_heads)
    if multiquery:
        key_unpad = key_unpad.expand(key_unpad.size(0), n_heads, key_unpad.size(-1))
        value_unpad = value_unpad.expand(value_unpad.size(0), n_heads, value_unpad.size(-1))
    dropout_p = dropout_p if training else 0.0
    reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal)
    output_unpad = flash_attn_interface.flash_attn_unpadded_func(query_unpad, key_unpad, value_unpad, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale=softmax_scale, causal=reset_is_causal, return_attn_probs=needs_weights)
    output = bert_padding.pad_input(rearrange(output_unpad, 'nnz h d -> nnz (h d)'), indices_q, batch_size, seqlen)
    return AttnFnOutput(output, None)

def triton_flash_attn_fn(
    query: torch.Tensor,
    key: torch.Tensor,
    value: torch.Tensor,
    n_heads: int,
    softmax_scale: Optional[float] = None,
    attn_bias: Optional[torch.Tensor] = None,
    key_padding_mask: Optional[torch.ByteTensor] = None,
    is_causal = False,
    dropout_p = 0.0,
    training = False,
    needs_weights = False,
    multiquery = False,
) -> AttnFnOutput:
    try:
        from .flash_attn_triton import flash_attn_func
    except:
        _installed = False
        if version.parse(torch.__version__) < version.parse('2.0.0'):
            _installed = True
            try:
                from flash_attn.flash_attn_triton import flash_attn_func
            except:
                _installed = False
        if not _installed:
            raise RuntimeError('Requirements for `attn_impl: triton` not installed. Either (1) have a CUDA-compatible GPU and `pip install .[gpu]` if installing from llm-foundry source or `pip install triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir#subdirectory=python` if installing from pypi, or (2) use torch attn model.attn_config.attn_impl=torch (torch attn_impl will be slow). Note: (1) requires you have CMake and PyTorch already installed.')
    check_valid_inputs(query, key, value)
    if dropout_p:
        raise NotImplementedError(f'Dropout not implemented for attn_impl: triton.')
    if needs_weights:
        raise NotImplementedError(f'attn_impl: triton cannot return attn weights.')
    if key_padding_mask is not None:
        warnings.warn('Propagating key_padding_mask to the attention module ' + 'and applying it within the attention module can cause ' + 'unnecessary computation/memory usage. Consider integrating ' + 'into attn_bias once and passing that to each attention ' + 'module instead.')
        (b_size, s_k) = key_padding_mask.shape[:2]
        if attn_bias is None:
            attn_bias = query.new_zeros(b_size, 1, 1, s_k)
        attn_bias = attn_bias.masked_fill(~key_padding_mask.view((b_size, 1, 1, s_k)), torch.finfo(query.dtype).min)
    query = rearrange(query, 'b s (h d) -> b s h d', h=n_heads)
    key = rearrange(key, 'b s (h d) -> b s h d', h=1 if multiquery else n_heads)
    value = rearrange(value, 'b s (h d) -> b s h d', h=1 if multiquery else n_heads)
    if multiquery:
        key = key.expand(*key.shape[:2], n_heads, key.size(-1))
        value = value.expand(*value.shape[:2], n_heads, value.size(-1))
    reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal)
    attn_output = flash_attn_func(query, key, value, attn_bias, reset_is_causal, softmax_scale)
    output = attn_output.view(*attn_output.shape[:2], -1)
    return AttnFnOutput(output, None)

class MultiheadAttention(nn.Module, Attn):
    """Multi-head self attention.

    Using torch or triton attention implementation enables user to also use
    additive bias.
    """
    gradient_checkpointing = False
    attn_fn: AttnFn

    def __init__(self, d_model: int, n_heads: int, attn_impl: str='triton', clip_qkv: Optional[float]=None, qk_ln: bool=False, softmax_scale: Optional[float]=None, attn_pdrop: float=0.0, low_precision_layernorm: bool=False, device: Optional[str]=None):
        super().__init__()
        self.attn_impl = attn_impl
        self.clip_qkv = clip_qkv
        self.qk_ln = qk_ln
        self.d_model = d_model
        self.n_heads = n_heads
        self.softmax_scale = softmax_scale
        if self.softmax_scale is None:
            self.softmax_scale = 1 / math.sqrt(self.d_model / self.n_heads)
        self.attn_dropout_p = attn_pdrop
        self.Wqkv = nn.Linear(self.d_model, 3 * self.d_model, device=device)
        fuse_splits = (d_model, 2 * d_model)
        self.Wqkv._fused = (0, fuse_splits)
        if self.qk_ln:
            layernorm_class = LPLayerNorm if low_precision_layernorm else nn.LayerNorm
            self.q_ln = layernorm_class(self.d_model, device=device)
            self.k_ln = layernorm_class(self.d_model, device=device)
        if self.attn_impl == 'flash':
            self.attn_fn = flash_attn_fn
        elif self.attn_impl == 'triton':
            self.attn_fn = triton_flash_attn_fn
            warnings.warn('While `attn_impl: triton` can be faster than `attn_impl: flash` ' + 'it uses more memory. When training larger models this can trigger ' + 'alloc retries which hurts performance. If encountered, we recommend ' + 'using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`.')
        elif self.attn_impl == 'torch':
            self.attn_fn = scaled_multihead_dot_product_attention
            if torch.cuda.is_available():
                warnings.warn('Using `attn_impl: torch`. If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.')
        else:
            raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.')
        self.out_proj = nn.Linear(self.d_model, self.d_model, device=device)
        self.out_proj._is_residual = True

    def forward(
        self,
        x: torch.Tensor,
        past_key_value: Union[PastKeyValue, Tuple, None] = None,
        attn_bias: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.ByteTensor] = None,
        is_causal = True,
        needs_weights = False,
    ) -> AttnOutput:
        qkv = self.Wqkv(x)
        if self.clip_qkv:
            qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv)
        (query, key, value) = qkv.chunk(3, dim=2)
        key_padding_mask = attention_mask
        if self.qk_ln:
            dtype = query.dtype
            query = self.q_ln(query).to(dtype)
            key = self.k_ln(key).to(dtype)
        if past_key_value is not None:
            if len(past_key_value) != 0:
                key = torch.cat([past_key_value[0], key], dim=1)
                value = torch.cat([past_key_value[1], value], dim=1)
            past_key_value = PastKeyValue(key, value)
        if attn_bias is not None:
            attn_bias = attn_bias[:, :, -query.size(1):, -key.size(1):]
        if self.training and self.gradient_checkpointing:
            ckpt_kwargs: Dict[str, Any] = {'use_reentrant': False} if is_torch_version('>=', '1.11.0') else {}
            def create_custom_forward(attn_fn: AttnFn) -> AttnFnCheckpointed:
                def custom_forward(
                    query: torch.Tensor,
                    key: torch.Tensor,
                    value: torch.Tensor,
                    n_heads: int,
                    softmax_scale: Optional[float],
                    attn_bias: Optional[torch.Tensor],
                    key_padding_mask: Optional[torch.ByteTensor],
                    is_causal: bool,
                    dropout_p: float,
                    training: bool,
                    needs_weights: bool,
                ):
                    return attn_fn(
                        query,
                        key,
                        value,
                        n_heads,
                        softmax_scale,
                        attn_bias,
                        key_padding_mask,
                        is_causal,
                        dropout_p,
                        training,
                        needs_weights,
                        False,  # multiquery
                    )
                return custom_forward
            attn_fn_out: AttnFnOutput = checkpoint(
                create_custom_forward(self.attn_fn),
                query,
                key,
                value,
                self.n_heads,
                self.softmax_scale,
                attn_bias,
                key_padding_mask,
                is_causal,
                self.attn_dropout_p,
                self.training,
                needs_weights,
                **ckpt_kwargs,
            )
        else:
            attn_fn_out: AttnFnOutput = self.attn_fn(
                query,
                key,
                value,
                self.n_heads,
                softmax_scale=self.softmax_scale,
                attn_bias=attn_bias,
                key_padding_mask=key_padding_mask,
                is_causal=is_causal,
                dropout_p=self.attn_dropout_p,
                training=self.training,
                needs_weights=needs_weights,
            )
        context, attn_weights = attn_fn_out
        return AttnOutput(self.out_proj(context), attn_weights, past_key_value)

class MultiQueryAttention(nn.Module, Attn):
    """Multi-Query self attention.

    Using torch or triton attention implementation enables user to also use
    additive bias.
    """
    # Mirrors MultiheadAttention; forward() references this flag, so it must exist here too.
    gradient_checkpointing = False
    attn_fn: AttnFn

    def __init__(self, d_model: int, n_heads: int, attn_impl: str='triton', clip_qkv: Optional[float]=None, qk_ln: bool=False, softmax_scale: Optional[float]=None, attn_pdrop: float=0.0, low_precision_layernorm: bool=False, device: Optional[str]=None):
        super().__init__()
        self.attn_impl = attn_impl
        self.clip_qkv = clip_qkv
        self.qk_ln = qk_ln
        self.d_model = d_model
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.softmax_scale = softmax_scale
        if self.softmax_scale is None:
            self.softmax_scale = 1 / math.sqrt(self.head_dim)
        self.attn_dropout_p = attn_pdrop
        self.Wqkv = nn.Linear(d_model, d_model + 2 * self.head_dim, device=device)
        fuse_splits = (d_model, d_model + self.head_dim)
        self.Wqkv._fused = (0, fuse_splits)
        if self.qk_ln:
            layernorm_class = LPLayerNorm if low_precision_layernorm else nn.LayerNorm
            self.q_ln = layernorm_class(d_model, device=device)
            self.k_ln = layernorm_class(self.head_dim, device=device)
        if self.attn_impl == 'flash':
            self.attn_fn = flash_attn_fn
        elif self.attn_impl == 'triton':
            self.attn_fn = triton_flash_attn_fn
            warnings.warn('While `attn_impl: triton` can be faster than `attn_impl: flash` ' + 'it uses more memory. When training larger models this can trigger ' + 'alloc retries which hurts performance. If encountered, we recommend ' + 'using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`.')
        elif self.attn_impl == 'torch':
            self.attn_fn = scaled_multihead_dot_product_attention
            if torch.cuda.is_available():
                warnings.warn('Using `attn_impl: torch`. If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.')
        else:
            raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.')
        self.out_proj = nn.Linear(self.d_model, self.d_model, device=device)
        self.out_proj._is_residual = True

    def forward(
        self,
        x: torch.Tensor,
        past_key_value: Union[PastKeyValue, Tuple, None] = None,
        attn_bias: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.ByteTensor] = None,
        is_causal = True,
        needs_weights = False,
    ) -> AttnOutput:
        qkv = self.Wqkv(x)
        if self.clip_qkv:
            qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv)
        (query, key, value) = qkv.split([self.d_model, self.head_dim, self.head_dim], dim=2)
        key_padding_mask = attention_mask
        if self.qk_ln:
            dtype = query.dtype
            query = self.q_ln(query).to(dtype)
            key = self.k_ln(key).to(dtype)
        if past_key_value is not None:
            if len(past_key_value) != 0:
                key = torch.cat([past_key_value[0], key], dim=1)
                value = torch.cat([past_key_value[1], value], dim=1)
            past_key_value = PastKeyValue(key, value)
        if attn_bias is not None:
            attn_bias = attn_bias[:, :, -query.size(1):, -key.size(1):]
        if self.training and self.gradient_checkpointing:
            ckpt_kwargs: Dict[str, Any] = {'use_reentrant': False} if is_torch_version('>=', '1.11.0') else {}
            def create_custom_forward(attn_fn: AttnFn) -> AttnFnCheckpointed:
                def custom_forward(
                    query: torch.Tensor,
                    key: torch.Tensor,
                    value: torch.Tensor,
                    n_heads: int,
                    softmax_scale: Optional[float],
                    attn_bias: Optional[torch.Tensor],
                    key_padding_mask: Optional[torch.ByteTensor],
                    is_causal: bool,
                    dropout_p: float,
                    training: bool,
                    needs_weights: bool,
                ):
                    return attn_fn(
                        query,
                        key,
                        value,
                        n_heads,
                        softmax_scale,
                        attn_bias,
                        key_padding_mask,
                        is_causal,
                        dropout_p,
                        training,
                        needs_weights,
                        True,  # multiquery
                    )
                return custom_forward
            attn_fn_out: AttnFnOutput = checkpoint(
                create_custom_forward(self.attn_fn),
                query,
                key,
                value,
                self.n_heads,
                self.softmax_scale,
                attn_bias,
                key_padding_mask,
                is_causal,
                self.attn_dropout_p,
                self.training,
                needs_weights,
                **ckpt_kwargs,
            )
        else:
            attn_fn_out: AttnFnOutput = self.attn_fn(
                query,
                key,
                value,
                self.n_heads,
                softmax_scale=self.softmax_scale,
                attn_bias=attn_bias,
                key_padding_mask=key_padding_mask,
                is_causal=is_causal,
                dropout_p=self.attn_dropout_p,
                training=self.training,
                needs_weights=needs_weights,
            )
        context, attn_weights = attn_fn_out
        return AttnOutput(self.out_proj(context), attn_weights, past_key_value)

def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id):
    if attn_impl == 'flash':
        return None
    elif attn_impl in ['torch', 'triton']:
        if alibi:
            if (prefix_lm or not causal) or use_sequence_id:
                return (1, n_heads, seq_len, seq_len)
            return (1, n_heads, 1, seq_len)
        elif prefix_lm or use_sequence_id:
            return (1, 1, seq_len, seq_len)
        return None
    else:
        raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.')

def build_attn_bias(attn_impl, attn_bias, n_heads, seq_len, causal=False, alibi=False, alibi_bias_max=8):
    if attn_impl == 'flash':
        return None
    elif attn_impl in ['torch', 'triton']:
        if alibi:
            (device, dtype) = (attn_bias.device, attn_bias.dtype)
            attn_bias = attn_bias.add(build_alibi_bias(n_heads, seq_len, full=not causal, alibi_bias_max=alibi_bias_max, device=device, dtype=dtype))
        return attn_bias
    else:
        raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.')

def gen_slopes(n_heads, alibi_bias_max=8, device=None):
    _n_heads = 2 ** math.ceil(math.log2(n_heads))
    m = torch.arange(1, _n_heads + 1, dtype=torch.float32, device=device)
    m = m.mul(alibi_bias_max / _n_heads)
    slopes = 1.0 / torch.pow(2, m)
    if _n_heads != n_heads:
        slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads]
    return slopes.view(1, n_heads, 1, 1)

def build_alibi_bias(n_heads, seq_len, full=False, alibi_bias_max=8, device=None, dtype=None):
    alibi_bias = torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view(1, 1, 1, seq_len)
    if full:
        alibi_bias = alibi_bias - torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view(1, 1, seq_len, 1)
        alibi_bias = alibi_bias.abs().mul(-1)
    slopes = gen_slopes(n_heads, alibi_bias_max, device=device)
    alibi_bias = alibi_bias * slopes
    return alibi_bias.to(dtype=dtype)

ATTN_CLASS_REGISTRY = {'multihead_attention': MultiheadAttention, 'multiquery_attention': MultiQueryAttention}
blocks.py
ADDED
@@ -0,0 +1,46 @@
"""GPT Blocks used for the GPT Model."""
from typing import Dict, Optional, Tuple, NamedTuple, Union
import torch
import torch.nn as nn
from .attention import ATTN_CLASS_REGISTRY, Attn, PastKeyValue
from .norm import NORM_CLASS_REGISTRY

class MPTBlockOutput(NamedTuple):
    hidden_states: torch.Tensor
    past_key_value: Union[PastKeyValue, Tuple, None]

class MPTMLP(nn.Module):

    def __init__(self, d_model: int, expansion_ratio: int, device: Optional[str]=None):
        super().__init__()
        self.up_proj = nn.Linear(d_model, expansion_ratio * d_model, device=device)
        self.act = nn.GELU(approximate='none')
        self.down_proj = nn.Linear(expansion_ratio * d_model, d_model, device=device)
        self.down_proj._is_residual = True

    def forward(self, x):
        return self.down_proj(self.act(self.up_proj(x)))

class MPTBlock(nn.Module):
    attn: Attn

    def __init__(self, d_model: int, n_heads: int, expansion_ratio: int, attn_config: Dict={'attn_type': 'multihead_attention', 'attn_pdrop': 0.0, 'attn_impl': 'triton', 'qk_ln': False, 'clip_qkv': None, 'softmax_scale': None, 'prefix_lm': False, 'attn_uses_sequence_id': False, 'alibi': False, 'alibi_bias_max': 8}, resid_pdrop: float=0.0, norm_type: str='low_precision_layernorm', device: Optional[str]=None, **kwargs):
        del kwargs
        super().__init__()
        norm_class = NORM_CLASS_REGISTRY[norm_type.lower()]
        attn_class = ATTN_CLASS_REGISTRY[attn_config['attn_type']]
        self.norm_1 = norm_class(d_model, device=device)
        self.attn = attn_class(attn_impl=attn_config['attn_impl'], clip_qkv=attn_config['clip_qkv'], qk_ln=attn_config['qk_ln'], softmax_scale=attn_config['softmax_scale'], attn_pdrop=attn_config['attn_pdrop'], d_model=d_model, n_heads=n_heads, device=device)
        self.norm_2 = norm_class(d_model, device=device)
        self.ffn = MPTMLP(d_model=d_model, expansion_ratio=expansion_ratio, device=device)
        self.resid_attn_dropout = nn.Dropout(resid_pdrop)
        self.resid_ffn_dropout = nn.Dropout(resid_pdrop)

    def forward(self, x: torch.Tensor, past_key_value: Union[PastKeyValue, Tuple, None] = None, attn_bias: Optional[torch.Tensor]=None, attention_mask: Optional[torch.ByteTensor]=None, is_causal: bool=True) -> MPTBlockOutput:
        a = self.norm_1(x)
        (b, _, past_key_value) = self.attn(a, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=is_causal)
        x = x + self.resid_attn_dropout(b)
        m = self.norm_2(x)
        n = self.ffn(m)
        x = x + self.resid_ffn_dropout(n)
        return MPTBlockOutput(x, past_key_value)
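
For illustration, a minimal forward-pass sketch of the pre-norm residual structure above (module paths are assumptions; tiny dimensions and `attn_impl='torch'` are used so it can run on CPU):

```python
# Illustrative sketch; assumes this file is importable as `blocks`.
import torch
from blocks import MPTBlock

block = MPTBlock(
    d_model=64,
    n_heads=4,
    expansion_ratio=4,
    attn_config={'attn_type': 'multihead_attention', 'attn_pdrop': 0.0,
                 'attn_impl': 'torch', 'qk_ln': False, 'clip_qkv': None,
                 'softmax_scale': None, 'prefix_lm': False,
                 'attn_uses_sequence_id': False, 'alibi': False,
                 'alibi_bias_max': 8},
)
block.eval()

x = torch.randn(2, 16, 64)  # (batch, seq_len, d_model)
hidden_states, past_key_value = block(x, is_causal=True)
print(hidden_states.shape)  # torch.Size([2, 16, 64]); the residual stream keeps its shape
```
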
config.json
ADDED
@@ -0,0 +1,52 @@
{
  "architectures": [
    "MPTForCausalLM"
  ],
  "attn_config": {
    "alibi": true,
    "alibi_bias_max": 16,
    "attn_impl": "torch",
    "attn_pdrop": 0,
    "attn_type": "multihead_attention",
    "attn_uses_sequence_id": false,
    "clip_qkv": 6,
    "prefix_lm": false,
    "qk_ln": false,
    "softmax_scale": null
  },
  "auto_map": {
    "AutoConfig": "configuration_mpt.MPTConfig",
    "AutoModelForCausalLM": "modeling_mpt.MPTForCausalLM"
  },
  "d_model": 4096,
  "emb_pdrop": 0,
  "embedding_fraction": 1.0,
  "expansion_ratio": 4,
  "init_config": {
    "emb_init_std": null,
    "emb_init_uniform_lim": null,
    "fan_mode": "fan_in",
    "init_div_is_residual": true,
    "init_gain": 0,
    "init_nonlinearity": "relu",
    "init_std": 0.02,
    "name": "kaiming_normal_",
    "verbose": 0
  },
  "init_device": "cpu",
  "learned_pos_emb": true,
  "logit_scale": null,
  "max_seq_len": 65536,
  "model_type": "mpt",
  "n_heads": 32,
  "n_layers": 32,
  "no_bias": true,
  "norm_type": "low_precision_layernorm",
  "resid_pdrop": 0,
  "tokenizer_name": "EleutherAI/gpt-neox-20b",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.28.1",
  "use_cache": false,
  "verbose": 0,
  "vocab_size": 50432
}
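
The `auto_map` entries above route `AutoConfig` and `AutoModelForCausalLM` to the custom classes shipped in this folder, so the JSON fields surface directly on the loaded config object; a short sketch:

```python
# Illustrative sketch: inspect the fields from config.json after loading.
import transformers

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  trust_remote_code=True
)
print(config.max_seq_len)               # 65536
print(config.attn_config['alibi'])      # True
print(config.attn_config['attn_impl'])  # 'torch' (can be switched to 'triton', see README)
```
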
configuration_mpt.py
ADDED
@@ -0,0 +1,118 @@
"""A HuggingFace-style model configuration."""
from typing import Dict, Optional, Union
from transformers import PretrainedConfig
attn_config_defaults: Dict = {'attn_type': 'multihead_attention', 'attn_pdrop': 0.0, 'attn_impl': 'triton', 'qk_ln': False, 'clip_qkv': None, 'softmax_scale': None, 'prefix_lm': False, 'attn_uses_sequence_id': False, 'alibi': False, 'alibi_bias_max': 8}
init_config_defaults: Dict = {'name': 'kaiming_normal_', 'fan_mode': 'fan_in', 'init_nonlinearity': 'relu'}

class MPTConfig(PretrainedConfig):
    model_type = 'mpt'

    def __init__(self, d_model: int=2048, n_heads: int=16, n_layers: int=24, expansion_ratio: int=4, max_seq_len: int=2048, vocab_size: int=50368, resid_pdrop: float=0.0, emb_pdrop: float=0.0, learned_pos_emb: bool=True, attn_config: Dict=attn_config_defaults, init_device: str='cpu', logit_scale: Optional[Union[float, str]]=None, no_bias: bool=False, verbose: int=0, embedding_fraction: float=1.0, norm_type: str='low_precision_layernorm', use_cache: bool=False, init_config: Dict=init_config_defaults, **kwargs):
        """The MPT configuration class.

        Args:
            d_model (int): The size of the embedding dimension of the model.
            n_heads (int): The number of attention heads.
            n_layers (int): The number of layers in the model.
            expansion_ratio (int): The ratio of the up/down scale in the MLP.
            max_seq_len (int): The maximum sequence length of the model.
            vocab_size (int): The size of the vocabulary.
            resid_pdrop (float): The dropout probability applied to the attention output before combining with residual.
            emb_pdrop (float): The dropout probability for the embedding layer.
            learned_pos_emb (bool): Whether to use learned positional embeddings
            attn_config (Dict): A dictionary used to configure the model's attention module:
                attn_type (str): type of attention to use. Options: multihead_attention, multiquery_attention
                attn_pdrop (float): The dropout probability for the attention layers.
                attn_impl (str): The attention implementation to use. One of 'torch', 'flash', or 'triton'.
                qk_ln (bool): Whether to apply layer normalization to the queries and keys in the attention layer.
                clip_qkv (Optional[float]): If not None, clip the queries, keys, and values in the attention layer to
                    this value.
                softmax_scale (Optional[float]): If not None, scale the softmax in the attention layer by this value. If None,
                    use the default scale of ``1/sqrt(d_keys)``.
                prefix_lm (Optional[bool]): Whether the model should operate as a Prefix LM. This requires passing an
                    extra `prefix_mask` argument which indicates which tokens belong to the prefix. Tokens in the prefix
                    can attend to one another bi-directionally. Tokens outside the prefix use causal attention.
                attn_uses_sequence_id (Optional[bool]): Whether to restrict attention to tokens that have the same sequence_id.
                    When the model is in `train` mode, this requires passing an extra `sequence_id` argument which indicates
                    which sub-sequence each token belongs to.
                    Defaults to ``False`` meaning any provided `sequence_id` will be ignored.
                alibi (bool): Whether to use the alibi bias instead of position embeddings.
                alibi_bias_max (int): The maximum value of the alibi bias.
            init_device (str): The device to use for parameter initialization.
            logit_scale (Optional[Union[float, str]]): If not None, scale the logits by this value.
            no_bias (bool): Whether to use bias in all layers.
            verbose (int): The verbosity level. 0 is silent.
            embedding_fraction (float): The fraction to scale the gradients of the embedding layer by.
            norm_type (str): choose type of norm to use
            multiquery_attention (bool): Whether to use multiquery attention implementation.
            use_cache (bool): Whether or not the model should return the last key/values attentions
            init_config (Dict): A dictionary used to configure the model initialization:
                init_config.name: The parameter initialization scheme to use. Options: 'default_', 'baseline_',
                    'kaiming_uniform_', 'kaiming_normal_', 'neox_init_', 'small_init_', 'xavier_uniform_', or
                    'xavier_normal_'. These mimic the parameter initialization methods in PyTorch.
                init_div_is_residual (Union[int, float, str, bool]): Value to divide initial weights by if ``module._is_residual`` is True.
                emb_init_std (Optional[float]): The standard deviation of the normal distribution used to initialize the embedding layer.
                emb_init_uniform_lim (Optional[Union[Tuple[float, float], float]]): The lower and upper limits of the uniform distribution
                    used to initialize the embedding layer. Mutually exclusive with ``emb_init_std``.
                init_std (float): The standard deviation of the normal distribution used to initialize the model,
                    if using the baseline_ parameter initialization scheme.
                init_gain (float): The gain to use for parameter initialization with kaiming or xavier initialization schemes.
                fan_mode (str): The fan mode to use for parameter initialization with kaiming initialization schemes.
                init_nonlinearity (str): The nonlinearity to use for parameter initialization with kaiming initialization schemes.
                ---
                See llmfoundry.models.utils.param_init_fns.py for info on other param init config options
        """
        self.d_model = d_model
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.expansion_ratio = expansion_ratio
        self.max_seq_len = max_seq_len
        self.vocab_size = vocab_size
        self.resid_pdrop = resid_pdrop
        self.emb_pdrop = emb_pdrop
        self.learned_pos_emb = learned_pos_emb
        self.attn_config = attn_config
        self.init_device = init_device
        self.logit_scale = logit_scale
        self.no_bias = no_bias
        self.verbose = verbose
        self.embedding_fraction = embedding_fraction
        self.norm_type = norm_type
        self.use_cache = use_cache
        self.init_config = init_config
        if 'name' in kwargs:
            del kwargs['name']
        if 'loss_fn' in kwargs:
            del kwargs['loss_fn']
        super().__init__(**kwargs)
        self._validate_config()

    def _set_config_defaults(self, config, config_defaults):
        for (k, v) in config_defaults.items():
            if k not in config:
                config[k] = v
        return config

    def _validate_config(self):
        self.attn_config = self._set_config_defaults(self.attn_config, attn_config_defaults)
        self.init_config = self._set_config_defaults(self.init_config, init_config_defaults)
        if self.d_model % self.n_heads != 0:
            raise ValueError('d_model must be divisible by n_heads')
        if any((prob < 0 or prob > 1 for prob in [self.attn_config['attn_pdrop'], self.resid_pdrop, self.emb_pdrop])):
            raise ValueError("self.attn_config['attn_pdrop'], resid_pdrop, emb_pdrop are probabilities and must be between 0 and 1")
        if self.attn_config['attn_impl'] not in ['torch', 'flash', 'triton']:
            raise ValueError(f"Unknown attn_impl={self.attn_config['attn_impl']}")
        if self.attn_config['prefix_lm'] and self.attn_config['attn_impl'] not in ['torch', 'triton']:
            raise NotImplementedError('prefix_lm only implemented with torch and triton attention.')
        if self.attn_config['alibi'] and self.attn_config['attn_impl'] not in ['torch', 'triton']:
            raise NotImplementedError('alibi only implemented with torch and triton attention.')
        if self.attn_config['attn_uses_sequence_id'] and self.attn_config['attn_impl'] not in ['torch', 'triton']:
            raise NotImplementedError('attn_uses_sequence_id only implemented with torch and triton attention.')
        if self.embedding_fraction > 1 or self.embedding_fraction <= 0:
            raise ValueError('model.embedding_fraction must be between 0 (exclusive) and 1 (inclusive)!')
        if isinstance(self.logit_scale, str) and self.logit_scale != 'inv_sqrt_d_model':
            raise ValueError(f"self.logit_scale={self.logit_scale!r} is not recognized as an option; use numeric value or 'inv_sqrt_d_model'.")
        if self.init_config.get('name', None) is None:
            raise ValueError(f"self.init_config={self.init_config!r} 'name' needs to be set.")
        if not self.learned_pos_emb and (not self.attn_config['alibi']):
            raise ValueError(f'Positional information must be provided to the model using either learned_pos_emb or alibi.')
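
For illustration, a minimal sketch of constructing an `MPTConfig` directly (the module path is an assumption); `_validate_config` runs from `__init__`, so inconsistent settings fail immediately:

```python
# Illustrative sketch; assumes this file is importable as `configuration_mpt`.
from configuration_mpt import MPTConfig, attn_config_defaults

# A StoryWriter-like configuration: ALiBi with the torch attention path.
attn_config = dict(attn_config_defaults, alibi=True, attn_impl='torch', clip_qkv=6)
config = MPTConfig(
    d_model=4096,
    n_heads=32,
    n_layers=32,
    max_seq_len=65536,
    vocab_size=50432,
    no_bias=True,
    attn_config=attn_config,
)
print(config.attn_config['alibi'], config.max_seq_len)

# ALiBi is only implemented for the torch and triton paths, so this raises.
try:
    MPTConfig(attn_config=dict(attn_config_defaults, alibi=True, attn_impl='flash'))
except NotImplementedError as err:
    print(err)
```
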
flash_attn_triton.py
ADDED
@@ -0,0 +1,479 @@
"""
Copied from https://github.com/HazyResearch/flash-attention/blob/eff9fe6b8076df59d64d7a3f464696738a3c7c24/flash_attn/flash_attn_triton.py
update imports to use 'triton_pre_mlir'

*Experimental* implementation of FlashAttention in Triton.
Tested with triton==2.0.0.dev20221202.
Triton 2.0 has a new backend (MLIR) but it doesn't yet seem to work for head dimensions
other than 64:
https://github.com/openai/triton/blob/d376020f90002757eea3ea9475d4f7cfc2ec5ead/python/triton/ops/flash_attention.py#L207
We'll update this implementation with the new Triton backend once this is fixed.

We use the FlashAttention implementation from Phil Tillet as a starting point.
https://github.com/openai/triton/blob/master/python/tutorials/06-fused-attention.py

Changes:
- Implement both causal and non-causal attention.
- Implement both self-attention and cross-attention.
- Support arbitrary seqlens (not just multiples of 128), for both forward and backward.
- Support all head dimensions up to 128 (not just 16, 32, 64, 128), for both forward and backward.
- Support attention bias.
- Speed up the forward pass a bit, and only store the LSE instead of m and l.
- Make the backward for d=128 much faster by reducing register spilling.
- Optionally parallelize the backward pass across seqlen_k, to deal with the case of
  small batch size * nheads.

Caution:
- This is an *experimental* implementation. The forward pass should be quite robust, but
  I'm not 100% sure that the backward pass doesn't have race conditions (due to the Triton compiler).
- This implementation has only been tested on A100.
- If you plan to use headdim other than 64 and 128, you should test for race conditions
  (due to the Triton compiler), as done in tests/test_flash_attn.py
  "test_flash_attn_triton_race_condition". I've tested and fixed many race conditions
  for different head dimensions (40, 48, 64, 128, 80, 88, 96), but I'm still not 100% confident
  that there are none left for other head dimensions.

Differences between this Triton version and the CUDA version:
- Triton version doesn't support dropout.
- Triton forward is generally faster than CUDA forward, while Triton backward is
  generally slower than CUDA backward. Overall Triton forward + backward is slightly slower
  than CUDA forward + backward.
- Triton version doesn't support different sequence lengths in a batch (i.e., RaggedTensor/NestedTensor).
- Triton version supports attention bias, while CUDA version doesn't.
"""
import math
import torch
import triton_pre_mlir as triton
import triton_pre_mlir.language as tl

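# A brief orientation for the kernel that follows (derived from the code below; the launch
# wrappers that set the grid appear later in this file):
#   - tl.program_id(0) selects a BLOCK_M-sized block of query rows (start_m).
#   - tl.program_id(1) selects a (batch, head) pair (off_hb, split via // nheads and % nheads).
#   - Each program then streams over the key/value sequence in BLOCK_N-sized blocks,
#     keeping only a running row-max (m_i) and running log-sum-exp (lse_i), so the full
#     softmax matrix is never materialized.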
@triton.heuristics({'EVEN_M': lambda args: args['seqlen_q'] % args['BLOCK_M'] == 0, 'EVEN_N': lambda args: args['seqlen_k'] % args['BLOCK_N'] == 0, 'EVEN_HEADDIM': lambda args: args['headdim'] == args['BLOCK_HEADDIM']})
@triton.jit
def _fwd_kernel(Q, K, V, Bias, Out, Lse, TMP, softmax_scale, stride_qb, stride_qh, stride_qm, stride_kb, stride_kh, stride_kn, stride_vb, stride_vh, stride_vn, stride_bb, stride_bh, stride_bm, stride_ob, stride_oh, stride_om, nheads, seqlen_q, seqlen_k, seqlen_q_rounded, headdim, CACHE_KEY_SEQLEN_Q, CACHE_KEY_SEQLEN_K, BIAS_TYPE: tl.constexpr, IS_CAUSAL: tl.constexpr, BLOCK_HEADDIM: tl.constexpr, EVEN_M: tl.constexpr, EVEN_N: tl.constexpr, EVEN_HEADDIM: tl.constexpr, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    start_m = tl.program_id(0)
    off_hb = tl.program_id(1)
    off_b = off_hb // nheads
    off_h = off_hb % nheads
    offs_m = start_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = tl.arange(0, BLOCK_N)
    offs_d = tl.arange(0, BLOCK_HEADDIM)
    q_ptrs = Q + off_b * stride_qb + off_h * stride_qh + (offs_m[:, None] * stride_qm + offs_d[None, :])
    k_ptrs = K + off_b * stride_kb + off_h * stride_kh + (offs_n[:, None] * stride_kn + offs_d[None, :])
    v_ptrs = V + off_b * stride_vb + off_h * stride_vh + (offs_n[:, None] * stride_vn + offs_d[None, :])
    if BIAS_TYPE == 'vector':
        b_ptrs = Bias + off_b * stride_bb + off_h * stride_bh + offs_n
    elif BIAS_TYPE == 'matrix':
        b_ptrs = Bias + off_b * stride_bb + off_h * stride_bh + (offs_m[:, None] * stride_bm + offs_n[None, :])
    t_ptrs = TMP + off_hb * seqlen_q_rounded + offs_m
    lse_i = tl.zeros([BLOCK_M], dtype=tl.float32) - float('inf')
    m_i = tl.zeros([BLOCK_M], dtype=tl.float32) - float('inf')
    acc_o = tl.zeros([BLOCK_M, BLOCK_HEADDIM], dtype=tl.float32)
    if EVEN_M & EVEN_N:
        if EVEN_HEADDIM:
            q = tl.load(q_ptrs)
        else:
            q = tl.load(q_ptrs, mask=offs_d[None, :] < headdim, other=0.0)
    elif EVEN_HEADDIM:
        q = tl.load(q_ptrs, mask=offs_m[:, None] < seqlen_q, other=0.0)
    else:
        q = tl.load(q_ptrs, mask=(offs_m[:, None] < seqlen_q) & (offs_d[None, :] < headdim), other=0.0)
    end_n = seqlen_k if not IS_CAUSAL else tl.minimum((start_m + 1) * BLOCK_M, seqlen_k)
    for start_n in range(0, end_n, BLOCK_N):
        start_n = tl.multiple_of(start_n, BLOCK_N)
        if EVEN_N & EVEN_M:
            if EVEN_HEADDIM:
                k = tl.load(k_ptrs + start_n * stride_kn)
            else:
                k = tl.load(k_ptrs + start_n * stride_kn, mask=offs_d[None, :] < headdim, other=0.0)
        elif EVEN_HEADDIM:
            k = tl.load(k_ptrs + start_n * stride_kn, mask=(start_n + offs_n)[:, None] < seqlen_k, other=0.0)
        else:
            k = tl.load(k_ptrs + start_n * stride_kn, mask=((start_n + offs_n)[:, None] < seqlen_k) & (offs_d[None, :] < headdim), other=0.0)
        qk = tl.zeros([BLOCK_M, BLOCK_N], dtype=tl.float32)
        qk += tl.dot(q, k, trans_b=True)
        if not EVEN_N:
            qk += tl.where((start_n + offs_n)[None, :] < seqlen_k, 0, float('-inf'))
        if IS_CAUSAL:
            qk += tl.where(offs_m[:, None] >= (start_n + offs_n)[None, :], 0, float('-inf'))
        if BIAS_TYPE != 'none':
            if BIAS_TYPE == 'vector':
                if EVEN_N:
                    bias = tl.load(b_ptrs + start_n).to(tl.float32)
                else:
                    bias = tl.load(b_ptrs + start_n, mask=start_n + offs_n < seqlen_k, other=0.0).to(tl.float32)
                bias = bias[None, :]
            elif BIAS_TYPE == 'matrix':
                if EVEN_M & EVEN_N:
                    bias = tl.load(b_ptrs + start_n).to(tl.float32)
                else:
                    bias = tl.load(b_ptrs + start_n, mask=(offs_m[:, None] < seqlen_q) & ((start_n + offs_n)[None, :] < seqlen_k), other=0.0).to(tl.float32)
            qk = qk * softmax_scale + bias
            m_ij = tl.maximum(tl.max(qk, 1), lse_i)
            p = tl.exp(qk - m_ij[:, None])
        else:
            m_ij = tl.maximum(tl.max(qk, 1) * softmax_scale, lse_i)
            p = tl.exp(qk * softmax_scale - m_ij[:, None])
        l_ij = tl.sum(p, 1)
        acc_o_scale = tl.exp(m_i - m_ij)
        tl.store(t_ptrs, acc_o_scale)
        acc_o_scale = tl.load(t_ptrs)
        acc_o = acc_o * acc_o_scale[:, None]
        if EVEN_N & EVEN_M:
            if EVEN_HEADDIM:
                v = tl.load(v_ptrs + start_n * stride_vn)
            else:
                v = tl.load(v_ptrs + start_n * stride_vn, mask=offs_d[None, :] < headdim, other=0.0)
        elif EVEN_HEADDIM:
            v = tl.load(v_ptrs + start_n * stride_vn, mask=(start_n + offs_n)[:, None] < seqlen_k, other=0.0)
        else:
            v = tl.load(v_ptrs + start_n * stride_vn, mask=((start_n + offs_n)[:, None] < seqlen_k) & (offs_d[None, :] < headdim), other=0.0)
        p = p.to(v.dtype)
        acc_o += tl.dot(p, v)
        m_i = m_ij
        l_i_new = tl.exp(lse_i - m_ij) + l_ij
        lse_i = m_ij + tl.log(l_i_new)
    o_scale = tl.exp(m_i - lse_i)
    tl.store(t_ptrs, o_scale)
    o_scale = tl.load(t_ptrs)
    acc_o = acc_o * o_scale[:, None]
    start_m = tl.program_id(0)
    offs_m = start_m * BLOCK_M + tl.arange(0, BLOCK_M)
    lse_ptrs = Lse + off_hb * seqlen_q_rounded + offs_m
    tl.store(lse_ptrs, lse_i)
    offs_d = tl.arange(0, BLOCK_HEADDIM)
    out_ptrs = Out + off_b * stride_ob + off_h * stride_oh + (offs_m[:, None] * stride_om + offs_d[None, :])
    if EVEN_M:
        if EVEN_HEADDIM:
            tl.store(out_ptrs, acc_o)
        else:
            tl.store(out_ptrs, acc_o, mask=offs_d[None, :] < headdim)
    elif EVEN_HEADDIM:
        tl.store(out_ptrs, acc_o, mask=offs_m[:, None] < seqlen_q)
    else:
        tl.store(out_ptrs, acc_o, mask=(offs_m[:, None] < seqlen_q) & (offs_d[None, :] < headdim))

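# Notes on the online-softmax bookkeeping in _fwd_kernel above: after a key block with block
# max m_ij and partial sum l_ij = sum_j exp(qk_j - m_ij), the running log-sum-exp is updated as
# lse_i = m_ij + log(exp(lse_i_old - m_ij) + l_ij); the accumulator acc_o is rescaled by
# exp(m_i_old - m_ij) before adding p @ v, and by exp(m_i - lse_i) once the loop ends, which
# together yield softmax(q @ k^T) @ v without materializing the attention matrix. The
# store-then-reload of the scale through TMP appears to be carried over from the upstream code
# as a workaround for a Triton compiler issue.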
@triton.jit
def _bwd_preprocess_do_o_dot(Out, DO, Delta, stride_ob, stride_oh, stride_om, stride_dob, stride_doh, stride_dom, nheads, seqlen_q, seqlen_q_rounded, headdim, BLOCK_M: tl.constexpr, BLOCK_HEADDIM: tl.constexpr):
    start_m = tl.program_id(0)
    off_hb = tl.program_id(1)
    off_b = off_hb // nheads
    off_h = off_hb % nheads
    offs_m = start_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_d = tl.arange(0, BLOCK_HEADDIM)
    o = tl.load(Out + off_b * stride_ob + off_h * stride_oh + offs_m[:, None] * stride_om + offs_d[None, :], mask=(offs_m[:, None] < seqlen_q) & (offs_d[None, :] < headdim), other=0.0).to(tl.float32)
    do = tl.load(DO + off_b * stride_dob + off_h * stride_doh + offs_m[:, None] * stride_dom + offs_d[None, :], mask=(offs_m[:, None] < seqlen_q) & (offs_d[None, :] < headdim), other=0.0).to(tl.float32)
    delta = tl.sum(o * do, axis=1)
    tl.store(Delta + off_hb * seqlen_q_rounded + offs_m, delta)

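# _bwd_preprocess_do_o_dot above precomputes, for each query row, delta = sum_d(O * dO) and
# stages it in Delta; the backward kernel below takes the matching D pointer and reads these
# values back when reconstructing the softmax gradient.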
@triton.jit
def _bwd_store_dk_dv(dk_ptrs, dv_ptrs, dk, dv, offs_n, offs_d, seqlen_k, headdim, EVEN_M: tl.constexpr, EVEN_N: tl.constexpr, EVEN_HEADDIM: tl.constexpr):
    if EVEN_N & EVEN_M:
        if EVEN_HEADDIM:
            tl.store(dv_ptrs, dv)
            tl.store(dk_ptrs, dk)
        else:
            tl.store(dv_ptrs, dv, mask=offs_d[None, :] < headdim)
            tl.store(dk_ptrs, dk, mask=offs_d[None, :] < headdim)
    elif EVEN_HEADDIM:
        tl.store(dv_ptrs, dv, mask=offs_n[:, None] < seqlen_k)
        tl.store(dk_ptrs, dk, mask=offs_n[:, None] < seqlen_k)
    else:
        tl.store(dv_ptrs, dv, mask=(offs_n[:, None] < seqlen_k) & (offs_d[None, :] < headdim))
        tl.store(dk_ptrs, dk, mask=(offs_n[:, None] < seqlen_k) & (offs_d[None, :] < headdim))

@triton.jit
def _bwd_kernel_one_col_block(start_n, Q, K, V, Bias, DO, DQ, DK, DV, LSE, D, softmax_scale, stride_qm, stride_kn, stride_vn, stride_bm, stride_dom, stride_dqm, stride_dkn, stride_dvn, seqlen_q, seqlen_k, headdim, ATOMIC_ADD: tl.constexpr, BIAS_TYPE: tl.constexpr, IS_CAUSAL: tl.constexpr, BLOCK_HEADDIM: tl.constexpr, EVEN_M: tl.constexpr, EVEN_N: tl.constexpr, EVEN_HEADDIM: tl.constexpr, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    begin_m = 0 if not IS_CAUSAL else start_n * BLOCK_N // BLOCK_M * BLOCK_M
    offs_qm = begin_m + tl.arange(0, BLOCK_M)
    offs_n = start_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_m = tl.arange(0, BLOCK_M)
    offs_d = tl.arange(0, BLOCK_HEADDIM)
    q_ptrs = Q + (offs_qm[:, None] * stride_qm + offs_d[None, :])
    k_ptrs = K + (offs_n[:, None] * stride_kn + offs_d[None, :])
    v_ptrs = V + (offs_n[:, None] * stride_vn + offs_d[None, :])
    do_ptrs = DO + (offs_qm[:, None] * stride_dom + offs_d[None, :])
    dq_ptrs = DQ + (offs_qm[:, None] * stride_dqm + offs_d[None, :])
    if BIAS_TYPE == 'vector':
        b_ptrs = Bias + offs_n
    elif BIAS_TYPE == 'matrix':
        b_ptrs = Bias + (offs_qm[:, None] * stride_bm + offs_n[None, :])
    dv = tl.zeros([BLOCK_N, BLOCK_HEADDIM], dtype=tl.float32)
    dk = tl.zeros([BLOCK_N, BLOCK_HEADDIM], dtype=tl.float32)
    if begin_m >= seqlen_q:
        dv_ptrs = DV + (offs_n[:, None] * stride_dvn + offs_d[None, :])
        dk_ptrs = DK + (offs_n[:, None] * stride_dkn + offs_d[None, :])
        _bwd_store_dk_dv(dk_ptrs, dv_ptrs, dk, dv, offs_n, offs_d, seqlen_k, headdim, EVEN_M=EVEN_M, EVEN_N=EVEN_N, EVEN_HEADDIM=EVEN_HEADDIM)
        return
    if EVEN_N & EVEN_M:
        if EVEN_HEADDIM:
            k = tl.load(k_ptrs)
            v = tl.load(v_ptrs)
        else:
            k = tl.load(k_ptrs, mask=offs_d[None, :] < headdim, other=0.0)
            v = tl.load(v_ptrs, mask=offs_d[None, :] < headdim, other=0.0)
    elif EVEN_HEADDIM:
        k = tl.load(k_ptrs, mask=offs_n[:, None] < seqlen_k, other=0.0)
        v = tl.load(v_ptrs, mask=offs_n[:, None] < seqlen_k, other=0.0)
    else:
        k = tl.load(k_ptrs, mask=(offs_n[:, None] < seqlen_k) & (offs_d[None, :] < headdim), other=0.0)
        v = tl.load(v_ptrs, mask=(offs_n[:, None] < seqlen_k) & (offs_d[None, :] < headdim), other=0.0)
    num_block_m = tl.cdiv(seqlen_q, BLOCK_M)
    for start_m in range(begin_m, num_block_m * BLOCK_M, BLOCK_M):
        start_m = tl.multiple_of(start_m, BLOCK_M)
        offs_m_curr = start_m + offs_m
        if EVEN_M & EVEN_HEADDIM:
            q = tl.load(q_ptrs)
        elif EVEN_HEADDIM:
            q = tl.load(q_ptrs, mask=offs_m_curr[:, None] < seqlen_q, other=0.0)
        else:
            q = tl.load(q_ptrs, mask=(offs_m_curr[:, None] < seqlen_q) & (offs_d[None, :] < headdim), other=0.0)
        qk = tl.dot(q, k, trans_b=True)
        if not EVEN_N:
            qk = tl.where(offs_n[None, :] < seqlen_k, qk, float('-inf'))
        if IS_CAUSAL:
            qk = tl.where(offs_m_curr[:, None] >= offs_n[None, :], qk, float('-inf'))
        if BIAS_TYPE != 'none':
            tl.debug_barrier()
            if BIAS_TYPE == 'vector':
                if EVEN_N:
                    bias = tl.load(b_ptrs).to(tl.float32)
                else:
                    bias = tl.load(b_ptrs, mask=offs_n < seqlen_k, other=0.0).to(tl.float32)
                bias = bias[None, :]
            elif BIAS_TYPE == 'matrix':
                if EVEN_M & EVEN_N:
                    bias = tl.load(b_ptrs).to(tl.float32)
                else:
                    bias = tl.load(b_ptrs, mask=(offs_m_curr[:, None] < seqlen_q) & (offs_n[None, :] < seqlen_k), other=0.0).to(tl.float32)
            qk = qk * softmax_scale + bias
        if not EVEN_M & EVEN_HEADDIM:
            tl.debug_barrier()
        lse_i = tl.load(LSE + offs_m_curr)
        if BIAS_TYPE == 'none':
|
247 |